Use pub.flow:savePipeline to save the current pipeline to a named memory location on the Integration Server.

 
 

Set the "$merge" variable in restorePipeline, or "merge" in restorePipelineFromFile, to true. This ensures that everything in the pipeline before the restorePipeline step is preserved.
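As a sketch, the save and restore services are typically paired like this (the snapshot name is arbitrary):

```
pub.flow:savePipeline       $name = "mySnapshot"
...steps that modify or drop pipeline variables...
pub.flow:restorePipeline    $name = "mySnapshot", $merge = "true"
```

With $merge set to "true", the restored variables are merged into the existing pipeline rather than replacing it.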

 
 

If you have both adapter notifications and adapter services, you will need two separate connections; otherwise you may get strange errors about transactions and the like. You should also avoid sharing connection pools across different functional areas, even when they point to the same database: tuning the pool size becomes quite difficult when a pool serves multiple types of usage, and you cannot easily change the database settings for one area without impacting the other. One approach that seems to work quite well is a separate pool for each package (as a general guideline, not a hard and fast rule), since your packages should generally be divided along functional lines too.

 
 

Although there is no tab to specify "ORDER BY", the same functionality can be specified on the "SELECT" tab. One of the columns in the table is labelled "Sort Order", which lets you specify the column(s) you wish to sort by. To alter the sort precedence, simply alter the order of the selected columns.

 
 

Create a Java service.
Declare a private static lock object in the shared source:

private static final Object LOCK_OBJ = new Object();

Then synchronize on it in the Java service source (the folder and service names below are placeholders):

IDataCursor idc = pipeline.getCursor();

try {
    // Critical section: ensures single-threaded execution of the invoked service
    synchronized (LOCK_OBJ) {
        // Execute the flow service that must not run concurrently
        Service.doInvoke("my.folder", "myService", pipeline);
    }
} catch (Exception exc) {
    ServerAPI.logError(exc);
    throw new ServiceException(exc.toString());
} finally {
    idc.destroy();
}
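Stripped of the webMethods API, the core of this pattern is ordinary Java monitor locking: all threads synchronize on one shared static object, so only one at a time can execute the guarded section. A minimal, self-contained sketch (class and method names are illustrative):

```java
public class CriticalSection {
    // Shared lock object; static so every thread synchronizes on the same monitor.
    private static final Object LOCK_OBJ = new Object();
    private static int counter = 0;

    static void increment() {
        // Only one thread at a time may execute this read-modify-write.
        synchronized (LOCK_OBJ) {
            int current = counter;   // read
            Thread.yield();          // widens the race window if the lock were removed
            counter = current + 1;   // write
        }
    }

    // Runs the increment concurrently from several threads and returns the total.
    static int run(int threads, int perThread) throws InterruptedException {
        counter = 0;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // With the lock held, the count is always threads * perThread.
        System.out.println(run(8, 10000));
    }
}
```

Without the synchronized block, the interleaved read-modify-write would lose updates; with it, the result is deterministic.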

 
 

There is a special variable in the pipeline called $iteration, which is incremented as the LOOP operator iterates over the list (it starts at 1 on the first pass).
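As a sketch (the list name is hypothetical), the counter can be referenced inside the LOOP with variable substitution:

```
LOOP over '/orders'
    pub.flow:debugLog    message = "Processing element %$iteration%"
```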

 
 

Set a flag in your catch block, or leave a variable holding the error message in the pipeline.
Outside the catch block, put a BRANCH on that variable or flag; if it is non-null, exit with failure or call the service that generates the exception.
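The pattern above can be sketched as a FLOW skeleton (the flag name is illustrative):

```
SEQUENCE (exit on SUCCESS)
    SEQUENCE (exit on FAILURE)      -- the "try" block
        ...steps that may fail...
    SEQUENCE (exit on DONE)         -- the "catch" block
        pub.flow:getLastError
        MAP                         -- set /errorFlag = "true"
BRANCH on '/errorFlag'
    "true": EXIT '$flow' and signal FAILURE
```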

 
 

Flow is not turned into Java code. It resides on disk as XML representing the flow operations, which is parsed into an in-memory Java tree of operations. Although the underlying code that implements the flow operations is Java, the flow itself is stored on disk as XML.

 
 

Drop is an explicit cleanup: it asks the pipeline to remove a variable from the list of available variables and make the object it refers to eligible for garbage collection by the Java Virtual Machine.
Delete is purely a design-time operation that removes the variable from the current view. It is only of use if you have created a variable that you didn't mean to create. If you delete a variable that appears because it was previously in the pipeline, it will reappear when you change the view in Developer.

 
 

To change the JDK used by webMethods, you will need to edit the IntegrationServer\bin\server.bat (Windows) or IntegrationServer/bin/server.sh (UNIX) file used to start up Integration Server.
Edit the file and change the following line to point to the JDK path:


SET JAVA_DIR=C:\opt\j2sdk1.4.2