DFC DfOperations Classes – Part 3

In this post I will show you how to take the general implementation of the DfOperation class described in the last post, and turn it into a concrete implementation for the Checkout operation.

I recently discovered that DfOperation code written on a 32-bit Windows machine would not run on a 64-bit Windows machine.  The operation classes make heavy use of the Registry, and the 32-bit Registry code does not run properly on a 64-bit machine.  The simple solution was to tell the DFC to use a file-based Registry instead of the system Registry.  Add this line to your dfc.properties file and you will be fine.

dfc.registry.mode=file
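For context, the registry mode setting sits alongside the usual connection settings in dfc.properties. A minimal sketch (the docbroker host and port shown are placeholders for your environment):

```
dfc.docbroker.host[0]=docbroker.example.com
dfc.docbroker.port[0]=1489
dfc.registry.mode=file
```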

Checkout Operation

The Checkout operation will lock and download content for all sysobjects passed to it. It also creates registry entries (so they can be checked in or cancelled), and will patch XML files if needed.


private void doCheckoutOp(ArrayList<IDfSysObject> objList, String checkoutDir) {

  try {
    // #1 - manufacture a specific operation
    IDfCheckoutOperation checkoutOpObj = cx.getCheckoutOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      checkoutOpObj.add(sObj);
    }

    // #3 - set operation params
    checkoutOpObj.setDestinationDirectory(checkoutDir);
    checkoutOpObj.setDownloadContent(true);

    // #4 - execute the operation
    boolean result = checkoutOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = checkoutOpObj.getErrors();
      for (int i = 0; i < errors.getCount(); i++) {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Checkout operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    } else {

      // #6 - get checked out obj ids
      IDfList newObjs = checkoutOpObj.getObjects();
      for (int i = 0; i < newObjs.getCount(); i++) {
        IDfSysObject sObj = (IDfSysObject) newObjs.get(i);
        System.out.println("\tchecked out " + sObj.getObjectId().toString());
      }

      // #7 - open checked out files
      IDfList checkedOutNodes = checkoutOpObj.getRootNodes();
      for (int i = 0; i < checkedOutNodes.getCount(); i++) {
        IDfCheckoutNode nodeObj = (IDfCheckoutNode) checkedOutNodes.get(i);
        String path = nodeObj.getFilePath();
        if (path != null && path.length() > 0) {
          Runtime.getRuntime().exec("rundll32 SHELL32.DLL,ShellExec_RunDLL " + path);
        }
      }
    }
  } catch (Exception e) {
    System.out.println("Exception in Checkout operation: " + e.getMessage());
    e.printStackTrace();
  }
}

Most of the details in this code were covered in the previous post, but there are a few areas specific to the Checkout operation I want to point out.

  • #3 – sets two operation parameters specific to the DfCheckout operation. The setDestinationDirectory() method sets the location where the content files will be downloaded, and setDownloadContent() tells the operation whether to download the content files. It is possible to check out files without downloading their content by setting this flag to false.
  • #6 – simply gets a list of all the objects that were checked out.
  • #7 – if you want to manipulate the objects that were checked out, use the getRootNodes() method. This method returns each checked out object as an IDfCheckoutNode object that includes information such as where the object’s content was checked out. The next few lines of code demonstrate how to automatically have Windows open the checked out files.
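As an aside, the rundll32 trick in the code above is Windows-specific. The same "open with the default application" behavior can be had portably via java.awt.Desktop; a minimal sketch (the class and method names here are my own, not part of the DFC):

```java
import java.awt.Desktop;
import java.io.File;
import java.io.IOException;

public class FileOpener {

    // Open a checked-out file with the platform's default application.
    // Returns false if the file is missing or desktop integration is unavailable.
    public static boolean openWithDefaultApp(String path) {
        File file = new File(path);
        if (!file.exists()) {
            return false;
        }
        if (!Desktop.isDesktopSupported()
                || !Desktop.getDesktop().isSupported(Desktop.Action.OPEN)) {
            return false;
        }
        try {
            Desktop.getDesktop().open(file);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}
```

In the #7 loop you would call FileOpener.openWithDefaultApp(nodeObj.getFilePath()) in place of the Runtime.exec() call.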

Next post we’ll take a look at checking these objects back in and cancelling the checkout operation.


DFC DfOperations Classes – Part 2

Before I get into specific implementations of DfOperation classes (next post), I want to give you a general overview of how operations are implemented.  Each operation class contains methods and attributes specific to its particular operation (e.g., checkout is different from move). However, they also share a lot of commonality (inherited from the DfOperation class).  Thus, the invocation of each operation class is basically the same:

  • Instantiate the class – instantiate an interface class for the operation you want to implement.  Operation classes are manufactured from the DfClientX factory classes (e.g., DfClientX.getXXXOperation() where XXX denotes a specific operation name).
  • Populate the operation class – populate the operation with the necessary objects, properties, and execution options.
  • Execute – run the operation.
  • Check for errors – check for errors that occurred during the execution of the operation.  Because operations can be run on multiple objects and not all objects might fail, errors are caught and handled internally as opposed to being thrown to the caller.  Errors should be checked and processed accordingly.
  • Process results – each operation returns different results that might require additional processing.  For example, the Checkin operation returns object ids for newly created objects.

The following pseudocode demonstrates the generic setup and execution of a DfOperation.  Note the use of XXX where specific operation names should be used.


try {
   // #1 - create a clientX factory object
   IDfClientX cx = new DfClientX();

   // #2 - manufacture a specific operation
   IDfXXXOperation opObj = cx.getXXXOperation();

   // #3 - add an object to the operation
   opObj.add(sObj);

   // #4 - execute the operation
   boolean result = opObj.execute();

   // #5 - check for errors
   if (!result) {
     IDfList errors = opObj.getErrors();
     for (int i = 0; i < errors.getCount(); i++) {
       IDfOperationError err = (IDfOperationError) errors.get(i);
       System.out.println("Error in operation: " + err.getErrorCode() + " - " + err.getMessage());
     }
   } else {

     // #6 - get new obj ids
     IDfList newObjs = opObj.getNewObjects();
     for (int i = 0; i < newObjs.getCount(); i++) {
       IDfSysObject sObj = (IDfSysObject) newObjs.get(i);
       System.out.println("\tnew object is " + sObj.getObjectId().toString());
     }
   }

// #7 - exceptions
} catch (Exception e) {
   System.out.println("Exception in operation: " + e.getMessage());
   e.printStackTrace();
}

  1. Get an IDfClientX object.
  2. Get the specific operation object from the IDfClientX class.  The XXX represents the name of a real operation (e.g., Checkout, Checkin, Copy).
  3. Add an object to operate on. In this example, I assume sObj (an IDfSysObject) was passed to this method. The object itself could represent a document, a folder, a virtual document, or an XML document depending upon the operation. The add() method must be called for each individual object, so if you pass this method an IDfList of objects, loop through them and add each one individually. The exception to this rule is if you add the root of a virtual document (as an IDfVirtualDocument), the add() method is smart enough to add all of its children also. The same is true for an XML document. Notice that the add() method wants an actual IDfSysObject and not an IDfID or String.
  4. Execute the operation.
  5. If an error occurred, the result of the execute() method will be false.  Errors are contained in an IDfList object. Remember that operations do not throw exceptions except for fatal errors.  All other exceptions are caught internally and stored as IDfOperationError objects in the IDfOperation object itself.  An error while processing one object in the operation does not necessarily terminate the operation for all the remaining objects.
  6. If the operation created new objects in the repository (e.g., checkin or copy), these objects are also stored in the operation object.  If the operation did not create new objects (e.g., move or delete), the method call is getObjects() (as opposed to getNewObjects() above) and returns the objects that the operation processed.
  7. Catch any fatal operation errors just in case.

Next week I’ll show you a sample implementation for the Checkout operation.

DFC DfOperations Classes – Part 1

The operation classes (DfOperation) in the DFC offer huge benefits to developers (and ultimately end users), but seem to get little use or notice. Nearly every application I have supported in the past few years has contained custom implementations of basic library functions (e.g., checkin, checkout, etc.). How many of you have written code to implement one or more of these functions? Perhaps you have a library of these functions that you have written and hardened over time and now tote around with you from project to project. Or worse, rewrite these functions for every project. I know, we’re all guilty of doing it.

However, there is a better way. Documentum, since the beginning of the DFC, has provided the DfOperation classes to implement all of these core library functions:

  • Cancel Checkout (IDfCancelCheckoutOperation) – Releases locks on checked out objects and cleans up local resources allocated to them.
  • Checkin (IDfCheckinOperation) – Checks in new content, creates necessary versions, releases locks, and cleans up locally allocated resources.
  • Checkout (IDfCheckoutOperation) – Locks the object and exports its content for editing.
  • Copy (IDfCopyOperation) – Copies objects to other locations in the repository, including deep folder structures and virtual documents.
  • Delete (IDfDeleteOperation) – Deletes objects from the repository, including deep folder structures and virtual documents.
  • Export (IDfExportOperation) – Exports content from the repository.
  • Import (IDfImportOperation) – Imports content into the repository.
  • Move (IDfMoveOperation) – Moves objects in the repository, including deep folder structures and virtual documents.
  • Transform (IDfTransformOperation) – Performs an XSL transformation on XML content.
  • Validation (IDfValidationOperation) – Validates XML documents against an XML schema.

Note: there are no operations for creating or viewing objects.

The advantages of using these operation classes over your own are numerous. Here are a few:

  • Take advantage of the years of thought and testing Documentum has invested in these classes. Documentum uses these classes internally in its own applications (e.g., WDK), so you can be confident they are solid.
  • Insulate your code against underlying changes to Documentum and the DFC. Since Documentum uses these classes internally, any such changes will be absorbed by the classes themselves.
  • Do more with less code. When you see what these classes can do and how they can be used, you’ll wish you had been using them all along.
  • The classes are full featured and provide a consistent methodology for handling errors and even rolling back aborted operations.
  • The classes are all XML- and virtual document-aware in case you are dealing with XML content or virtual documents.
  • The classes can operate on objects distributed across multiple repositories with no additional work or code.
  • The classes are naturally ACS- and BOCS-aware.

In the following few posts I will dig into the DfOperation classes and show you how to use them, demonstrate their advantage over custom code, and hopefully convince you of their utility. In the next few weeks, look for these topics:

  • basic use of DfOperation classes;
  • examples of checkout, checkin, and cancel checkout operation classes;
  • examples of copy, move, delete operation classes;
  • how to handle errors and aborted operations;
  • advanced topics like running operation steps and using operation monitors.

UNC Mapping and the DFC

I have a DFC/Documentum client application written in Java that resides on a shared network drive.  Users launch the application by clicking a shortcut to a batch file located on the shared drive.  The batch file sets the classpath and makes other prerequisite checks before launching Java and loading the executable JAR file.  This arrangement works great as long as the user has mapped the shared network drive to the proper drive letter on their workstation (e.g., P:).  I converted all of the hardcoded drive letters to UNC nomenclature to spare users from having to map the network drive at all (some didn’t, and some didn’t know the correct location to map to).  When I did this, the DFC broke with the following error:

[DFC_SECURITY_IDENTITY_INIT] no identity initialization or incomplete identity initialization DfException:: THREAD: pool-1-thread-1; MSG: win remote files not supported, \\server\share$\dir1\dir2\dir3\dfc.keystore; ERRORCODE: ff; NEXT: null

Interesting, eh?

The fix turned out to be fairly simple using the DOS subst command.  In the batch file I added a few lines like the following, and the DFC was happy.

REM map p: drive to the location of the Java app.
REM This is necessary because the DFC cannot use a UNC
REM path. The subst command tricks it into thinking the
REM p: drive is attached.

REM delete any previous subst path assigned to p:
subst p: /D

REM assign path to p:
subst p: \\server\share$\dir1\dir2\dir3

REM run app
p:\jre\bin\java -classpath p:\lib; -jar p:\JavaApp.jar

REM remove the subst drive from p:
subst p: /D

You could also use the net use command instead of subst if you like.  Note that some environments may prohibit normal user profiles from executing subst and net use commands.  YMMV.

DFC Code to Automatically Build Folder Paths

If there is one piece of code I have rewritten more times than the login routine, it is that which will create nested folders in the repository given a fully qualified path.  I have implemented this code in WDK, TBOs, SBOs, Captiva, jobs, and migration code.  For example, suppose you have a process that ingests news feeds and stores them according to wire service, year, month, and day.  Your storage structure in the Docbase might look like this:

  • /News/wire service/AP/2012/Jan/01
  • /News/wire service/AP/2012/Jan/02 …
  • /News/wire service/Reuters/2012/Feb/14 …

Get the idea?  Each service’s stories are stored in a unique folder according to the day they were received.

If an automated process is receiving, classifying, and storing these stories, it would need to be able to create new folders based upon the date or wire service.  Usually this information is readily available from the process ingesting the content.  So, it would be nice to simply construct the desired storage path as a String, create the necessary elements of the path, and link the newly ingested content to the proper folder.  The code for the method dmCreateStoragePath(IDfSession session, String path) below does just that.

    public IDfFolder dmCreateStoragePath(IDfSession session, String path) throws Exception {
        // first see if the folder already exists
        IDfFolder folder = (IDfFolder) session.getObjectByQualification(
                "dm_folder where any r_folder_path='" + path + "'");

        // if not, build it
        if (null == folder) {
            // split path into separate folders
            String[] dirs = path.split("/");

            // loop through path folders and build
            String dm_path = "";
            for (int i = 0; i < dirs.length; i++) {

                if (dirs[i].length() > 0) {

                    // build up path
                    dm_path = dm_path + "/" + dirs[i];

                    // see if this path exists
                    IDfFolder testFolder = (IDfFolder) session.getObjectByQualification(
                            "dm_folder where any r_folder_path='" + dm_path + "'");
                    if (null == testFolder) {

                        // check if a cabinet needs to be made
                        if (dm_path.equalsIgnoreCase("/" + dirs[i])) {
                            IDfFolder cab = (IDfFolder) session.newObject("dm_cabinet");
                            cab.setObjectName(dirs[i]);
                            cab.save();
                            folder = cab;

                        // else make a folder
                        } else {
                            folder = (IDfFolder) session.newObject("dm_folder");
                            folder.setObjectName(dirs[i]);

                            // link it to its parent
                            String parent_path = "";
                            for (int j = 0; j < i; j++) {
                                if (dirs[j].length() > 0) {
                                    parent_path = parent_path + "/" + dirs[j];
                                }
                            }
                            folder.link(parent_path);
                            folder.save();
                        }
                    }
                }
            }
        }
        return folder;
    }

To use this code in your ingestion program, you can simply do this:

  IDfSysObject newStory = (IDfSysObject) session.newObject("dm_document");
  // do some stuff with newStory
  IDfFolder newFolder = dmCreateStoragePath(session, "/News/wire service/Fox News/2012/May/07");
  newStory.link(newFolder.getObjectId().toString());
  newStory.save();

Where the path, “/News/wire service/Fox News/2012/May/07”, is built dynamically from metadata. Any or none of the components of this path may exist. The dmCreateStoragePath() method builds what is necessary. Note that this code does not make any accommodation for ACLs placed on the created cabinet or folders, but could easily be modified to do so.
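The heart of dmCreateStoragePath() is producing each cumulative prefix of the full path, cabinet first. That logic can be isolated into a plain helper and unit-tested without a repository session; a minimal sketch (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class PathBuilder {

    // Returns each cumulative prefix of a repository path, cabinet first.
    // e.g. "/News/wire service/AP" ->
    //   ["/News", "/News/wire service", "/News/wire service/AP"]
    public static List<String> cumulativePaths(String path) {
        List<String> result = new ArrayList<String>();
        StringBuilder current = new StringBuilder();
        for (String part : path.split("/")) {
            if (part.length() > 0) {
                current.append("/").append(part);
                result.add(current.toString());
            }
        }
        return result;
    }
}
```

In dmCreateStoragePath() you could then loop over cumulativePaths(path), testing each prefix for existence and creating a cabinet for the first entry and folders for the rest.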
