The Basics – Running a Query

Next to logging on/off, executing DQL queries is one of the most frequent tasks performed in a Documentum application.  There are three primary classes used to query the repository and process the results. These classes are:

  • IDfQuery – The query class encapsulates all of the data and functionality necessary to run a DQL query against a repository.
  • IDfCollection – The collection class encapsulates the results of a query in a read-once, forward-only object.  A better alternative is to use the dmRecordSet class.
  • IDfTypedObject – A typed object is a non-persistent object used to model row data in the IDfCollection object.

There are potentially more classes involved depending upon circumstances, for example, whether you choose to use a query builder class (IDfQueryBuilder) to construct your query, or the dmRecordSet class to hold query results.

The basic structure of the query code looks like this:


public IDfCollection runQuery(String query, IDfSession session) {
  IDfCollection col = null;

  try {
    // create query object
    IDfQuery q = new DfQuery();

    // set query string
    q.setDQL(query);

    // execute query
    col = q.execute(session, DfQuery.DF_READ_QUERY);

  } catch (DfException e) {
    e.printStackTrace();
  }
  return col;
}

Note the use of the DfQuery.DF_READ_QUERY constant. The use of the proper query type constant can affect your query performance. See here for more on query type constants.
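For example, a non-SELECT statement would typically be run with the DF_EXEC_QUERY constant instead.  Here is a minimal sketch (the DQL itself is made up for illustration; note the returned collection still must be closed):

// run a non-SELECT DQL statement with the DF_EXEC_QUERY type constant
IDfQuery q = new DfQuery();
q.setDQL("update dm_document objects set subject = 'archived' where folder('/Temp')");
IDfCollection col = q.execute(session, DfQuery.DF_EXEC_QUERY);
col.close();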

The basic structure of the code used to process the results returned in the IDfCollection object is:


// do query
IDfCollection col = runQuery("select r_object_id, object_name from dm_document where folder('/Templates')", session);

// process results
while (col.next()) {

  // get each row
  IDfTypedObject tObj = col.getTypedObject();

  // get value in each column and do something with results
  String id = tObj.getString("r_object_id");
  String name = tObj.getString("object_name");
  System.out.println(id + "\t" + name);
}

// it is very important to close each collection after you process it
if ( (col != null) && (col.getState() != IDfCollection.DF_CLOSED_STATE) )
  col.close();

In this example, the IDfTypedObject represents a row in the IDfCollection and its getter methods are used to retrieve each column’s data.  Note that an IDfCollection object can only be read once, and only in the forward direction.  For a more robust and capable query result object, use the dmRecordSet class.
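If you need to iterate over the results more than once, a simple alternative to dmRecordSet is to cache each row in a java.util.List and close the collection immediately.  A minimal sketch:

// cache the rows so the read-once collection can be closed right away
List<String[]> rows = new ArrayList<String[]>();
IDfCollection col = runQuery("select r_object_id, object_name from dm_document where folder('/Templates')", session);
try {
  // IDfCollection extends IDfTypedObject, so its getters can be called directly
  while ((col != null) && col.next()) {
    rows.add(new String[] { col.getString("r_object_id"), col.getString("object_name") });
  }
} finally {
  if ((col != null) && (col.getState() != IDfCollection.DF_CLOSED_STATE))
    col.close();
}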

UPDATE:  I have created a DCTMBasics JAR file that contains most of the code discussed in this series.

The Basics – Logging On/Off

Establishing a session with the Content Server is one of the first and most basic operations a Documentum programmer must master.  There are three primary classes involved in establishing a session:

  • IDfSessionManager – The session manager manages identities, pooled sessions and transactions.  It is also required to release a session (i.e., log off) when you are through with it.
  • IDfLoginInfo – The login info object encapsulates the information needed to validate and login a user to a repository. It stores the user’s credential information in addition to options that affect the server login process.
  • IDfSession – The session object encapsulates a session with a Documentum repository.

Here are my logon and logoff methods:


public static IDfSession logon(String docbase,
                               String username,
                               String password) throws DfException {
    IDfSession session = null;

    // validate arguments
    if ((docbase == null) || (docbase.trim().isEmpty()) )
      throw new DfException ("Docbase name is null or blank.");

    if ((username == null) || (username.trim().isEmpty()) )
      throw new DfException ("Username name is null or blank.");

    if ((password == null) || (password.trim().isEmpty()) )
      throw new DfException ("Password is null or blank.");

    // create login info
    IDfLoginInfo li = new DfLoginInfo();
    li.setUser(username);
    li.setPassword(password);
    li.setDomain("");

    // get session manager
    IDfSessionManager sessionMgr = DfClient.getLocalClient().newSessionManager();

    // login
    if (sessionMgr != null) {
      sessionMgr.setIdentity(docbase, li);
      session = sessionMgr.getSession(docbase);
    } else {
      throw new DfException("Could not create Session Manager.");
    }
    return session;
}

public static void logoff(IDfSession session) {
  // release session when done
  if (session != null) {
    session.getSessionManager().release(session);
  }
}

Here is an example of using the logon() and logoff() methods in a program:


public static void main(String[] args) {
  IDfSession session = null;

  try {
    System.out.println("Logging on...");
    session = login("repo1", "dmadmin", "dmadmin");
    if (session != null) {
      System.out.println("Success: session ID = " + session.getSessionId());

      // do stuff here

      // release session when done
      System.out.println("Logging off...");
      logoff(session);

    } else {
      System.out.println("Logon failed: Session is null");
    }

  } catch (DfException dfe) {
    System.out.print("Logon failed:  ");
    System.out.println(dfe.getMessage());
  }

}
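A small refinement worth considering: release the session in a finally block so the logoff happens even if an exception is thrown mid-processing.  A sketch:

IDfSession session = null;
try {
  session = logon("repo1", "dmadmin", "dmadmin");
  // do stuff here
} catch (DfException dfe) {
  System.out.println("Logon failed: " + dfe.getMessage());
} finally {
  // logoff() already checks for null, so this is safe even if logon failed
  logoff(session);
}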

UPDATE:  I have created a DCTMBasics JAR file that contains most of the code discussed in this series.

“The Basics” Series

I have been kicking around an idea to run a recurring series of posts demonstrating basic Documentum DFC programming techniques.  The idea is to demonstrate DFC operations programmers routinely implement (logging on/off, running queries, creating jobs, checking out objects, etc.) in short, concise, best-practice code snippets.  These are operations that everyone has (re)written a hundred times and that should live in a library JAR file instead of being recreated every time they are needed.  Having said that, perhaps you have a library JAR of operations you would like to share?

If you are new to Documentum and looking for DFC best practices and code examples, this series will be for you.  To find all of the posts in this series, search the tag library for “The Basics”.  Look for the first post of this series soon, with others to follow at random intervals.

Several “The Basics” topics have already been covered in this blog, in addition to some that are not-so-basic.  Those topics may only be touched upon in this series since they have been covered in depth previously.

UPDATE:  I have created a DCTMBasics JAR file that contains most of the code discussed in this series.

DfOperations Sample Code

In case you missed it — it was buried at the end of the last post of the DfOperations Class series — source code for all of the examples discussed in the posts is here.

DFC DfOperations Classes – Part 8

In this final post on DfOperation classes, I will touch on a few advanced topics.

IDfOperation Steps

If you want a little more control over the execution of an operation, you can execute each operation one step at a time and check for errors along the way.  To execute operations step-wise, replace the DfOperation.execute() method call with the following code snippet.

IDfList steps = OpObj.getSteps();
int stepCount = steps.getCount();
boolean result = true;
for(int i = 0; i < stepCount; i++) {
  IDfOperationStep step = (IDfOperationStep) steps.get(i);
  System.out.println("\t\texecuting step " + i + ". - " + step.getName());
  boolean stepResult = step.execute();

  if (!stepResult)
    result = false;
}

The result of this code for the DfCopyOperation is:

executing step 0. - copy_post_population
executing step 1. - copy_pre_processing
executing step 2. - copy_object_processing
executing step 3. - copy_container_processing
executing step 4. - copy_post_processing
executing step 5. - copy_cleanup

Now you are acquainted with the actual steps of the DfCopyOperation.
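One refinement worth noting: since you are checking each step’s result, you could stop and roll back as soon as a step fails.  A sketch, using the same canUndo() and abort() methods discussed later in this post:

// inside the step loop, halt at the first failure and undo the steps already run
if (!stepResult) {
  result = false;
  if (OpObj.canUndo())
    OpObj.abort();
  break;
}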

Operations Monitor

A cool thing you can do with all of the DfOperation classes is attach a monitor to them.  This is, for example, how Webtop displays the progress bar while objects are being copied/moved/imported/exported/deleted. Unfortunately, the DFC DfOperationMonitor class does not offer much to work with (the Webtop operations monitor class is much better). Here is an example of how to use monitoring in a DfOperation and an example of a class to monitor progress.

To enable monitoring, simply set the operation’s monitor to an instance of your DfOperationMonitor subclass, like this:

// setup monitor
CopyMonitor cm = new CopyMonitor();
copyOpObj.setOperationMonitor(cm);

Here is the CopyMonitor class:

private static class CopyMonitor extends DfOperationMonitor implements IDfOperationMonitor {

  @Override
  public int getYesNoAnswer(IDfOperationError error) throws DfException {
    System.out.println("[ERROR: " + error.getMessage() + "] - continuing");
    return IDfOperationMonitor.YES;
  }

  @Override
  public int progressReport(IDfOperation opObj, int opPercentDone,
      IDfOperationStep opStepObj, int stepPercentDone, IDfOperationNode opNodeObj)
      throws DfException {

    IDfProperties props = opNodeObj.getPersistentProperties();
    String objName = props.getString("object_name");
    String objType = props.getString("r_object_type");

    System.out.println("[MONITOR: operation=" + opObj.getName() +
                       " operation%=" + opPercentDone +
                       " step=" + opStepObj.getName() +
                       " step%=" + stepPercentDone +
                       " object=" + objName + " (" + objType + ")");

    return IDfOperationMonitor.CONTINUE;
  }

  @Override
  public int reportError(IDfOperationError error) throws DfException {
    System.out.println("[ERROR: " + error.getMessage() + "] - aborting");
    return IDfOperationMonitor.ABORT;
  }
}

The result of this code for the DfCopyOperation is as follows:

[MONITOR: operation=Copy operation%=4 step=copy_pre_processing step%=25 object=Nested (dm_folder)
[MONITOR: operation=Copy operation%=6 step=copy_pre_processing step%=37 object=Nested (dm_folder)
[MONITOR: operation=Copy operation%=8 step=copy_pre_processing step%=50 object=Document5 (dm_document)
[MONITOR: operation=Copy operation%=10 step=copy_pre_processing step%=62 object=Document1 (dm_document)
[MONITOR: operation=Copy operation%=12 step=copy_pre_processing step%=75 object=Document2 (dm_document)
[MONITOR: operation=Copy operation%=14 step=copy_pre_processing step%=87 object=Document3 (dm_document)
[MONITOR: operation=Copy operation%=16 step=copy_pre_processing step%=100 object=Document4 (dm_document)
[MONITOR: operation=Copy operation%=18 step=copy_pre_processing step%=112 object=VirtualDoc (dm_document)
[MONITOR: operation=Copy operation%=4 step=copy_object_processing step%=25 object=Document5 (dm_document)
[MONITOR: operation=Copy operation%=6 step=copy_object_processing step%=37 object=Document1 (dm_document)
. . .

I suspect this is not the output you expected from the CopyMonitor class. Notice how the operation and step completion percentages jump around (the step percentage even exceeds 100). I included a better monitor class in the code archive mentioned at the end of this post.

Aborted Operations

Most operations can be rolled back if an error occurs during processing, or even after they complete, as long as the operation object has not been destroyed. This is a really handy feature for cleaning up after an error, and also for implementing the notion of “cancelling” an operation. The code below augments the error checking we have used previously to call abort() and roll back the operation.

// check for errors
if (!result) {
  IDfList errors = copyOpObj.getErrors();
  for (int i = 0; i < errors.getCount(); i++) {
    IDfOperationError err = (IDfOperationError) errors.get(i);
    System.out.println("Error in Copy with Abort operation: " + err.getErrorCode() + " - " + err.getMessage());
  }

  // process abort
  if (copyOpObj.canUndo()) {
    System.out.println("\t\taborting operation...");
    copyOpObj.abort();
  }
}

That’s all there is to it. The DfOperation class will take care of undoing all of the steps of the operation. Pretty cool, eh?

Wrap Up

In general, I like the DfOperation classes and use them whenever I can. As mentioned previously, there are some great benefits to using DfOperation classes instead of coding these operations yourself.  In addition to those benefits, I hope you have seen how easy they are to implement, and you get the bonus of built-in undo and monitoring capabilities.

As cool and useful as DfOperations are, there are a few shortcomings, in my opinion:

  • You cannot create and insert steps into an operation.  For example, I would like to add a step to the Copy operation so that before the copy is done, the operation checks a value in a registered table.
  • You cannot extend the existing operations.  I would love to extend the DfExportOperation class to do deep folder exports.
  • You cannot write your own DfOperation classes.  I think it would be useful to create some custom operations like synchronizing metadata with an external data source.

Finally, working examples of all of the operations I have presented in this series are available here.

DFC DfOperations Classes – Part 7

In this post we will examine the Import and Export operations. These functions tend to be very common in practice. The respective DfOperations for these functions are similar to those previously discussed with a few exceptions noted below.

Import Operation

The Import operation performs a complete import. It creates the necessary objects, patches links to XML files if needed, and cleans up after itself.


private void doImportOp(ArrayList<String> fileList, IDfId importFolderId) {

  try {

    // #1 - manufacture an operation
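    // cx is assumed to be an IDfClientX instance (e.g., new DfClientX()) held by the enclosing class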
    IDfImportOperation importOpObj = cx.getImportOperation();

    // #2 - add objects to the operation for processing
    for (String file : fileList) {
      IDfImportNode node = (IDfImportNode) importOpObj.add(file);
      node.setDocbaseObjectType("dm_document");
      node.setFormat("crtext");
    }

    // #3 - set operation params
    // interesting no ACL specified
    importOpObj.setSession(session);
    importOpObj.setDestinationFolderId(importFolderId);
    importOpObj.setKeepLocalFile(true);

    // #4 - execute the operation
    boolean result = importOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = importOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) { 
         IDfOperationError err = (IDfOperationError) errors.get(i);
         System.out.println("Error in Import operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    } else {

      // #6 - get new obj ids
      IDfList newObjs = importOpObj.getNewObjects();
      for (int i=0; i<newObjs.getCount(); i++) { 
         IDfSysObject sObj = (IDfSysObject) newObjs.get(i);
         System.out.println("\timported " + sObj.getObjectId().toString());
        // set ACL here?
      }
    }

  } catch(Exception e) {
    System.out.println("Exception in Import operation: " + e.getMessage());
    e.printStackTrace();
  }

}

  • #2 – instead of adding sysobjects to the operation’s node tree, for Import, we add strings that represent the files (complete paths) to import.  The add() method creates IDfImportNodes which we must further update to include the object type and format for each file being imported.
  • #3 – the Import operation is the only DfOperation that requires you to explicitly set the session.  The other operation parameters set here are self-explanatory.
  • #6 – I find it interesting that there is no accommodation for setting the ACL on the DfImportNode.  ACLs, lifecycles, etc. can be set here after the import is completed (a sketch follows this list).
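For example, here is a minimal sketch of applying an ACL inside the #6 loop; the ACL name and owner in the qualification are made up for illustration:

// look up the ACL to apply; the object_name and owner_name here are hypothetical
IDfACL acl = (IDfACL) session.getObjectByQualification(
    "dm_acl where object_name = 'my_custom_acl' and owner_name = 'dm_dbo'");

// then, for each imported IDfSysObject sObj in the #6 loop:
sObj.setACL(acl);
sObj.save();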

Export Operation

The export operation does a reasonable job of exporting content from the Docbase.  For virtual documents and XML documents it will export all of the referenced children of the parent document.  The most obvious drawback to the Export operation is that it does not perform a deep export of a folder tree.  You can add dm_folder objects to the operation’s node tree for export and it will export them, but none of their contents.  It would be nice to extend this DfOperation class to perform deep exports but there are limitations there too.


private void doExportOp(ArrayList<IDfSysObject> objList, String dir) {

  try {

    // #1 - manufacture an operation
    IDfExportOperation exportOpObj = cx.getExportOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      exportOpObj.add(sObj);
    }

    // #3 - set operation params
    exportOpObj.setDestinationDirectory(dir);

    // #4 - execute the operation
    boolean result = exportOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = exportOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Export operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    } else {

      // #6 - list the exported objects
      IDfList expObjs = exportOpObj.getObjects();
      for (int i=0; i<expObjs.getCount(); i++) {
        IDfSysObject sObj = (IDfSysObject) expObjs.get(i);
        System.out.println("\texported " + sObj.getObjectId().toString());
      }
    }

  } catch(Exception e) {
    System.out.println("Exception in Export operation: " + e.getMessage());
    e.printStackTrace();
  }

}

Really, other than the caveats mentioned above, the Export operation is pretty straightforward.
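If you really need a deep folder export, one workaround (a sketch only, using the runQuery() helper from The Basics – Running a Query and the session field used elsewhere in these examples; the folder path is made up) is to resolve a folder’s contents with a FOLDER(..., DESCEND) DQL query and add each document to the operation individually.  Note this flattens the folder structure; every exported file lands in the destination directory:

// gather all documents below a folder path and add them to the export operation
String dql = "select r_object_id from dm_document where folder('/Projects/Alpha', descend)";
IDfCollection col = runQuery(dql, session);
try {
  while ((col != null) && col.next()) {
    IDfSysObject sObj = (IDfSysObject) session.getObject(col.getId("r_object_id"));
    exportOpObj.add(sObj);
  }
} finally {
  if ((col != null) && (col.getState() != IDfCollection.DF_CLOSED_STATE))
    col.close();
}

In the next post, I will briefly touch on some advanced topics and wrap up this series on the DfOperations classes.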

DFC DfOperations Classes – Part 6

The delete operation is one of my favorites. Perhaps that is because I use it so often, or because people seem to like to write this one themselves (me included). It must be the allure of writing a recursive method to do deep deletes, or maybe it is an old habit held over from the days before the operations classes, when we had to implement all of this functionality ourselves. Whatever the reason, I encourage you to use the DfDeleteOperation class instead.

Delete Operation

The delete operation does everything you expect it to: it destroys objects in the Docbase, and if the object is a folder, virtual document, or XML document, it will destroy all of its substructures as well.


private static void doDeleteOp(ArrayList<IDfSysObject> objList) {
  try {

    // #1 - manufacture an operation
    IDfDeleteOperation deleteOpObj = cx.getDeleteOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      deleteOpObj.add(sObj);
    }

    // #3 - set op parameters
    deleteOpObj.enableDeepDeleteFolderChildren(true);
    deleteOpObj.enableDeepDeleteVirtualDocumentsInFolders(true);
    deleteOpObj.setDeepFolders(true);

    // #4 - execute the operation
    System.out.println("\tdeleting... ");
    boolean result = deleteOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = deleteOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) {  
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Delete operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    } else {
      IDfList deletedObjs = deleteOpObj.getObjects();
      for (int i=0; i<deletedObjs.getCount(); i++) {
         IDfSysObject sObj = (IDfSysObject) deletedObjs.get(i);
         System.out.println("\tdeleted object " + sObj.getObjectId().toString());
      }
    }

  } catch(Exception e) {
    System.out.println("Exception in Delete operation: " + e.getMessage());
    e.printStackTrace();
  }
}

There is not much remarkable about this code. The most interesting bits take place at #3 where the operation parameters are set. All of these parameters are true by default, but I set them just to highlight their existence. See the DFC Javadocs for specifics on what each parameter does.

The DfDeleteOperation is by far the best and most robust delete operation I have seen. Beyond the operation parameters, you can customize each node (DfDeleteNode) added to the operation’s node tree at #2 to behave differently, such as deleting only certain versions. I showed you an example of using these operation-specific nodes last week with the Move operation.
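For example, version handling can also be controlled for the whole operation with the setVersionDeletionPolicy() method and its constants defined on IDfDeleteOperation.  A one-line sketch:

// destroy every version of each object, not just the versions added to the node tree
deleteOpObj.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);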

In the next post I will present the Import and Export operations.

DFC DfOperations Classes – Part 5

In this post I will show you two related functions: copy and move.

Copy Operation

The copy operation copies a folder, document, virtual document, or XML document from its current location to the location specified. If a folder, virtual document, or XML document is added to the operation’s node list, the copy is “deep” by default, meaning it will copy sub-folders and children along with the parent object.


private void doCopyOp(ArrayList<IDfSysObject> objList, IDfFolder toFolder) {

  try {

    // #1 - manufacture an operation
    IDfCopyOperation copyOpObj = cx.getCopyOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      copyOpObj.add(sObj);
    }

    // #3 - set copy params
    copyOpObj.setCopyPreference(DfCopyOperation.COPY_COPY);
    copyOpObj.setDestinationFolderId(toFolder.getObjectId());

    // #4 - execute the operation
    boolean result = copyOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = copyOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Copy operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    } else {
      // #6 - get new obj ids
      IDfList newObjs = copyOpObj.getNewObjects();
      for (int i=0; i<newObjs.getCount(); i++) {
        IDfSysObject sObj = (IDfSysObject) newObjs.get(i);
        System.out.println("\tnew object is " + sObj.getObjectId().toString());
        newSysObjs.add(sObj);
      }
    }

  } catch(Exception e) {
    System.out.println("Exception in Copy operation: " + e.getMessage());
    e.printStackTrace();
  }

}

Again, the only thing of real note here is the use of the operation-specific parameters at #3.

  • setCopyPreference takes an integer constant defined in the DfCopyOperation class.  This parameter dictates the kind of copy to perform: make copies of children objects, or reference existing children objects.
  • setDestinationFolderId indicates where the copies should be made.  Note this parameter takes an IDfId object.

Move Operation

The move operation will move objects from one location to another in the repository.  It performs all necessary linking and unlinking of objects.  If the object to be moved is a virtual document or a folder, all of the object’s substructures will be moved also.


private void doMoveOp(ArrayList<IDfSysObject> objList, IDfFolder fromFolder, IDfFolder toFolder) {

  try {

    // #1 - manufacture an operation
    IDfMoveOperation moveOpObj = cx.getMoveOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      moveOpObj.add(sObj);
    }

    // #3 - set the source and target folder
    moveOpObj.setDestinationFolderId(toFolder.getObjectId());
    moveOpObj.setSourceFolderId(fromFolder.getObjectId());

    // #4 - execute the operation
    boolean result = moveOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = moveOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Move operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    } else {
      // #6 - list the moved objects
      IDfList movedObjs = moveOpObj.getObjects();
      for (int i=0; i<movedObjs.getCount(); i++) {
        IDfSysObject sObj = (IDfSysObject) movedObjs.get(i);
        System.out.println("\tmoved object " + sObj.getObjectId().toString());
      }
    }

  } catch(Exception e) {
    System.out.println("Exception in Move operation: " + e.getMessage());
    e.printStackTrace();
  }

}

Note that with the move operation you must provide the object id for the source folder of the object you are moving, in addition to the destination. This is necessary for the unlink to occur. If you add objects to the operation’s node tree that are in different folders, you will need to indicate the source folder for each object as it is added to the tree. In that case, you would not call setSourceFolderId on the operation, and would change the code that adds objects to the operation’s node tree (#2) to look like this:


// #2 - add objects to the operation for processing
for (IDfSysObject sObj : objList) {
  IDfMoveNode node = (IDfMoveNode) moveOpObj.add(sObj);
  node.setDestinationFolderId(toFolder.getObjectId());
  node.setSourceFolderId(sObj.getFolderId(0));
}

This format for adding objects to an operation is actually valid for all of the operation classes. So, if you need more control over the objects you are adding to an operation, this is how to achieve it.

In the next post I’ll show you the delete operation.

DFC DfOperations Classes – Part 4

This post will build upon the last post and demonstrate how to reverse the Checkout operation with either a Checkin operation or a Cancel Checkout operation.

Checkin Operation

The checkin operation does all the things you would expect it to:  versions the content file appropriately, transfers content, unlocks objects, patches XML files, updates the registry, and cleans up local files.


private void doCheckinOp(ArrayList<IDfSysObject> objList) {

  try {

    // #1 - manufacture an operation
    IDfCheckinOperation checkinOpObj = cx.getCheckinOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      checkinOpObj.add(sObj);
    }

    // #3 - set operation params
    checkinOpObj.setCheckinVersion(DfCheckinOperation.NEXT_MINOR);
    checkinOpObj.setKeepLocalFile(false);

    // #4 - execute the operation
    boolean result = checkinOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = checkinOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Checkin operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    }

    // #6 - get new obj ids
    IDfList newObjs = checkinOpObj.getNewObjects();
    for (int i=0; i<newObjs.getCount(); i++) {
      IDfSysObject sObj = (IDfSysObject) newObjs.get(i);
      System.out.println("\tchecked in " + sObj.getObjectId().toString());
    }

  } catch(Exception e) {
    System.out.println("Exception in Checkin operation: " + e.getMessage());
    e.printStackTrace();
  }

}

The Checkin operation does not depart from the basic form of the operation code discussed previously, but look at its power and simplicity. Only two notes to make:

  • #3 – set the Checkin operation-specific parameters.  In this case, indicate how to handle versioning (versioning behavior is defined by an integer constant; see the DFC Javadocs) and what to do with the local content file: keep it or delete it.
  • #6 – notice that I use the getNewObjects() method to get the object ids of the newly created versions.  To get the original object ids, use the getObjects() method.

Take a look back at some of the code you have written for doing checkins and see how it compares with the compactness of these 40 lines of code. The DfCheckinOperation offers a ton of function and capability in a compact space.

CancelCheckout Operation

The Cancel Checkout operation completely nullifies a check out by unlocking objects in the Docbase (including virtual document children and XML nodes), removing local content, and updating the registry.


private void doCancelCheckoutOp(ArrayList<IDfSysObject> objList) {

  try {

    // #1 - manufacture an operation
    IDfCancelCheckoutOperation cancelCheckoutOpObj = cx.getCancelCheckoutOperation();

    // #2 - add objects to the operation for processing
    for (IDfSysObject sObj : objList) {
      cancelCheckoutOpObj.add(sObj);
    }

    // #3 - set operation params
    cancelCheckoutOpObj.setKeepLocalFile(false);

    // #4 - execute the operation
    boolean result = cancelCheckoutOpObj.execute();

    // #5 - check for errors
    if (!result) {
      IDfList errors = cancelCheckoutOpObj.getErrors();
      for (int i=0; i<errors.getCount(); i++) {
        IDfOperationError err = (IDfOperationError) errors.get(i);
        System.out.println("Error in Cancel Checkout operation: " + err.getErrorCode() + " - " + err.getMessage());
      }
    }

    // #6 - list the processed objects
    IDfList objs = cancelCheckoutOpObj.getObjects();
    for (int i=0; i<objs.getCount(); i++) {
      IDfSysObject sObj = (IDfSysObject) objs.get(i);
      System.out.println("\tcancelled checkout " + sObj.getObjectId().toString());
    }

  } catch(Exception e) {
    System.out.println("Exception in Cancel Checkout operation: " + e.getMessage());
    e.printStackTrace();
  }

}

Again, the only thing of note about this operation is the operation-specific parameter set at #3.

The next post will look at copying and moving objects in the Docbase.
