TIBCO FOM 2.1.2 Hotfix04 is available

Article ID: KB0102948

Products: Not Applicable
Versions: -

Description
The hotfix can be downloaded from the TIBCO Support FTP server, mft.tibco.com. Use your eSupport username and password to access the server. After connecting, go to /AvailableDownloads/ActiveFulfillment/2.1.2/hotfix-04/ to download the hotfix.

Instructions on how to apply the hotfix are provided in the TIB_af_2.1.2_HF-004_readme.txt file.

================================================================================
Closed Issues in 2.1.2_HF-004 (This release)

AF-6139
Monitoring the usage of the JDBC pool was not possible. The issue has been 
fixed, and the following configuration has to be performed in the 
omsServerLog4j.xml file in the $AF_HOME/config/ location to enable the 
connection pooling logs:

<category name="com.tibco.aff.oms.db.datasource.impl.FOMDataSourceProxy"
          additivity="false">
    <priority value="DEBUG"/>
    <appender-ref ref="console"/>
    <appender-ref ref="LocalLogFileAppender"/>
</category>

AF-6134
The catalog was too slow to load and pending messages were seen on the 
tibco.aff.catalog.planfragment.request queue. The issue has been fixed.

AF-6077
Messages piled up on the tibco.aff.orchestrator.cache.addEvent queue upon 
Orchestrator restart. The issue has been fixed.

AF-6073
The field ENDTIMESTAMP in the OMS PLAN_ITEM_DETAILS table was returning an empty 
value. The issue has been fixed.

AF-6067
Orders in PREQUALIFICATIONFAILED state could not be withdrawn or searched. The 
issue has been fixed.

AF-6048
Order amendment failed when the plan item was in ERROR_HANDLER state. The issue 
has been fixed.

AF-6043
An exception "ORA-00959: tablespace 'USERS' does not exist" was seen in some 
instances when executing the  OMS_DDL.sql script. The issue has been fixed and 
the resolution is as follows"

One of the following scenarios will apply to you. Perform the steps for the 
relevant scenario:

SCENARIO 1: User has successfully run the DDL of 2.1.0 without any exception

STEP 1: Connect to SQL*Plus as the FOM user and execute the following SQL statement:
 
  SELECT CONSTRAINT_NAME FROM ALL_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'U' AND 
  TABLE_NAME = 'ORDERLINE_SLA_DATA' AND OWNER = '&TABLE_OWNER_NAME';
 
When prompted for TABLE_OWNER_NAME, enter the FOM user name. 
Note down the constraint name returned by this query.

STEP 2: Run the UpgradeOMS_FOM2.1.2HF3_to_FOM2.1.2HF4.sql script. This upgrade 
script prompts you to enter UNIQUE_CONSTRAINT_NAME. Enter the constraint name 
from STEP 1.

SCENARIO 2: User encountered an exception while running the DDL of 2.1.0 - 
    "ORA-00959: tablespace 'USERS' does not exist".

STEP 1: Run the UpgradeOMS_FOM2.1.2HF3_to_FOM2.1.2HF4.sql script and ignore 
        the prompt for UNIQUE_CONSTRAINT_NAME.
        
NOTE: Ignore the following errors:
- SQL Error: ORA-02441: Cannot drop nonexistent primary key
- SQL Error: ORA-02250: missing or invalid constraint name

AF-6035
Duplicate messages for the same execution response were cleaning up the actual 
batch tasks corresponding to that execution response while they were being 
processed by the batch processor. This caused the order to remain in Execution 
even though the execution response had been received. The issue has been fixed 
by preventing duplicate messages from removing the notifications being 
processed by the batch processor for the same message.

AF-6034
When trying to cancel an order, the following error was seen: "TIBCO-AFF-OMS-100065: 
Action and UDFs cannot be modified simultaneously in an order line". This 
happened due to a missing UDF in the cancellation request. The issue has been fixed.

AF-6031
An error "Sorting of planitem milestones failed due to invalid planfragment section" 
was seen when implementing a third amendment on orders. The issue has been fixed.

AF-6029
A testJMSConnection timeout exception was observed, related to Spring DMLC 
(DefaultMessageListenerContainer) and the external transaction manager. This 
occurred because of a check in the DMLC code that skips the commit of sessions 
when no message has been received by the message listener and TIBCO EMS is 
used as the provider. However, when an external transaction manager is used, 
it still performs a commit on the session. The issue has been fixed by 
introducing a new property, "com.tibco.fom.orch.jms.receiveTimeout", in the 
ConfigValues_OMS_INTERNAL file to control the receiveTimeout property of all 
the JMS routes. The default value has been changed from 1 second to 60 seconds 
to reduce the commit calls by the transaction manager in case of receive 
timeouts on the queues.
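
The readme excerpt above does not show the corresponding configuration entry; 
as a rough sketch, the new property might appear in ConfigValues_OMS_INTERNAL 
like the following, modeled on the ConfValue entry shown under AF-5940 below. 
Only the property name and the 60-second default come from the fix text; the 
description, name, millisecond unit, and other attributes are assumptions:

<!-- Hypothetical entry; only propname and the 60-second default are stated
     in the fix text. The millisecond unit and remaining attributes are
     assumed from the ConfValue entry shown under AF-5940. -->
<ConfValue description="Receive timeout for all JMS routes in Milliseconds"
    isHotDeployable="true"
    name="Receive timeout for all JMS routes in Milliseconds"
    propname="com.tibco.fom.orch.jms.receiveTimeout"
    readonly="false" sinceVersion="2.1" visibility="Basic">
  <ConfNum default="60000" value="60000"/>
</ConfValue>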

AF-6026
Orchestrator load balancing was not working properly. The issue has been fixed.

AF-5987
A plan UDF was lost after order cancellation using the Orchestrator and the 
OMS user interface. The issue has been fixed.

AF-5986
A cancelled order was being sent to AOPD if the order was cancelled again. The 
issue has been fixed.

AF-5984
Order locking was taking too long, which resulted in slow processing of orders 
and slow replies from OMS during order cancellation. The issue has been fixed, 
and a new queue named tibco.aff.oms.ordersService.amendment.task has been 
introduced for this purpose.

AF-5974
ShouldFailedPlanItemSuspend was not working when the plan item was in the 
error handler state, and an option to display an error message during an 
amendment or cancel action was required. The issue has been fixed.

AF-5972
The Orchestrator was not sending a planItemMilestoneReleaseRequest when it was 
executing an amendment. This happened due to a leak of messages in the 
"tibco.aff.orchestrator.planItem.milestone.release.request" queue. 
The issue has been fixed.

AF-5970
During each jeopardy cycle, high CPU and memory consumption was noticed. 
The issue has been fixed.

AF-5950
When an OMS instance went down, all the orders it handled were moved to the 
manager member, which then had to handle the entire load. This resulted in a 
bottleneck of thousands of orders on the single instance that took over the 
load. The issue has been fixed by no longer centralizing all the orders from 
failed members onto one operative member; the orders can now be distributed 
across several members to avoid a bottleneck in any one member.

Once a node goes down, the cluster manager assigns the load of the failed node 
to be backed up by one of the running nodes. This node assignment takes into 
consideration the relative load of all the nodes currently active in the 
cluster. The cluster manager fetches information about how many plan items are 
being executed by each node in the cluster at that time. Based on this 
information, the cluster manager picks the least loaded node to be assigned as 
a backup for the failed node. The cluster manager then instructs this node to 
start handling the failed node's load, after which the backup node starts the 
backup process.

It is possible to place a limit on the number of failed nodes from the cluster 
that should be backed up. This is controlled by a property named 
com.tibco.fom.orch.cluster.backUpThreshold in the ConfigValues_OMS_INTERNAL.xml 
file. It defines the percentage of nodes in the cluster whose failure will be 
handled by the cluster manager.

Another property, com.tibco.fom.orch.cluster.backUpTimeout, has been defined; 
it is used to terminate the backup processing in cases where the backup node 
has not completed the backup within the expected time.
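
The readme excerpt above does not show the ConfValue entries for these two 
properties; the following is a rough sketch of what they might look like, 
modeled on the entry shown under AF-5940 below. Only the property names and 
their purposes come from the text above; the descriptions, defaults, and 
units are assumptions:

<!-- Hypothetical entries; only the propname values are stated in the fix
     text. Defaults, units, and remaining attributes are assumed from the
     ConfValue entry shown under AF-5940. -->
<ConfValue description="Percentage of failed cluster nodes that will be backed up"
    isHotDeployable="true"
    name="Percentage of failed cluster nodes that will be backed up"
    propname="com.tibco.fom.orch.cluster.backUpThreshold"
    readonly="false" sinceVersion="2.1" visibility="Basic">
  <ConfNum default="50" value="50"/>
</ConfValue>

<ConfValue description="Timeout for backup processing in Milliseconds"
    isHotDeployable="true"
    name="Timeout for backup processing in Milliseconds"
    propname="com.tibco.fom.orch.cluster.backUpTimeout"
    readonly="false" sinceVersion="2.1" visibility="Basic">
  <ConfNum default="300000" value="300000"/>
</ConfValue>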

AF-5940
The following issues were identified:
1. Incoming orders were getting processed while the models were being loaded, 
   which resulted in OPD orders not using the latest published models.
2. Incoming orders were getting processed at the start of model purging.
3. The Orchestrator did not retry AOPD if the plan was not generated from the 
   updated models.
4. The idle time for model loading was not exposed to the user.

The identified issues have been fixed, and the resolution is as follows:

Once the model is received in FOM, the receiving node identifies a member as 
the owner of the model loading process. This member is responsible for 
coordinating the model loading process with the rest of the members in the 
cluster. During loading or purging of models, the Orchestrator does not 
process any incoming order events for orchestration.

The following steps are performed upon receiving the model:

1. Each member maintains a ledger of the model loading threads; a model 
   loading thread registers itself in the ledger before initiating the 
   model loading task.
2. A request is sent to the member to set a flag indicating that model 
   loading has started. To set this flag, the member whose ledger has 
   more than one record starts sending an advisory message on the topic 
   'tibco.aff.orchestrator.cluster.advisory.heartbeat' indicating that model 
   loading has started. All the members listen for this message and set 
   the modelLoadingStarted flag to true. The member whose ledger has more 
   than one record keeps sending advisory messages until the loading is 
   complete.
3. Once the modelLoadingStarted flag is set to true in all the members of the 
   cluster, the model loading task is initiated by the member. If 
   modelLoadingStarted is not set to true, the member waits for the 
   configured heartbeat interval time and checks the flag again, until 
   the modelLoadingStarted flag is set to true.
4. The corresponding model loading thread in the ledger is removed, and the 
   publishing of the advisory messages for model loading is stopped, after 
   completion of the model loading task.
5. If no advisory message is received, the members set the 
   'modelLoadingStarted' flag to false after the threshold idle time.
   
   Threshold idle time can be configured using the property: 
   'com.tibco.fom.orch.cluster.modelLoading.maxIdleTime' under category 
   'Generic Configuration' in ConfigValues_OMS_INTERNAL.xml.
   
   <ConfValue description="Max Idle time reset the model loading process in Milliseconds"
       isHotDeployable="true"
       name="Max Idle time reset the model loading process in Milliseconds"
       propname="com.tibco.fom.orch.cluster.modelLoading.maxIdleTime"
       readonly="false" sinceVersion="2.1" visibility="Basic">
     <ConfNum default="120000" value="120000"/>
   </ConfValue>
   
   This property indicates that the members will set the modelLoadingStarted 
   flag to false if no advisory message is received for this period from 
   the owner node.
   
6. If there are any incoming requests to the Orchestrator, it checks the 
   'modelLoadingStarted' flag. If this flag is true, it keeps waiting until 
   the flag is set to false before proceeding. Once the flag becomes false, 
   the Orchestrator request is processed.

================================================================================
