Connection pooling vs Listener polling in TIBCO ActiveMatrix BusinessWorks Plug-in for WebSphere MQ

Article ID: KB0071814

Products: TIBCO ActiveMatrix BusinessWorks Plug-in for IBM MQ
Versions: 7.7.0, 7.6.0

Description

There are two similarly named but distinct features in TIBCO ActiveMatrix BusinessWorks Plug-in for WebSphere MQ: connection pooling and listener polling.


1. Connection pooling - configured on the Pooling tab of the WMQ Connection shared resource.
2. Listener polling   - configured on the Advanced tab of the WebSphere MQ Listener activity.

Connection pooling: When this check box is selected, pooling is active for the connection. Connection pooling is irrelevant for listeners, because each listener instance acquires its own connection and keeps it until it fails.

Listener polling: When selected, the activity establishes a connection to the queue manager and opens the queue. It then periodically issues timed gets on the queue, where the timeout value equals the polling timeout and the period between gets equals the polling interval. However, if there are multiple messages on the queue, they are processed without an intervening polling interval, which keeps application latency down. When cleared, the activity waits indefinitely on the queue, processing messages as they arrive. If the queue manager connection fails, the activity attempts to reconnect based on the reconnection interval.
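
To make the timed-get behavior concrete, here is a minimal sketch of the polling loop using IBM MQ classes for Java. The queue manager name, queue name, and the POLLING_TIMEOUT_MS/POLLING_INTERVAL_MS constants are illustrative stand-ins for the activity's configuration fields; the plug-in's internal implementation may differ.

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PollingListenerSketch {
    static final int POLLING_TIMEOUT_MS  = 5000; // "Polling Timeout": how long each timed get waits
    static final int POLLING_INTERVAL_MS = 2000; // "Polling Interval": pause between gets on an empty queue

    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1"); // hypothetical queue manager
        MQQueue queue = qmgr.accessQueue("IN.QUEUE",
                CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = POLLING_TIMEOUT_MS; // timed get

        while (true) {
            try {
                MQMessage msg = new MQMessage();
                queue.get(msg, gmo);
                process(msg); // messages already on the queue are drained back-to-back, no interval
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) { // 2033: queue was empty
                    Thread.sleep(POLLING_INTERVAL_MS);            // wait one polling interval
                } else {
                    throw e; // other failures trigger reconnection (next sketch)
                }
            }
        }
    }

    static void process(MQMessage msg) { /* start a BW process instance */ }
}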

Listener polling means that the MQ GET API call has a user-specified timeout, after which the call fails with reason code 2033 (MQRC_NO_MSG_AVAILABLE). The listener then waits for the polling interval before issuing another get. If the listener is not polling, it issues an indefinite get and waits forever. Remember that a blocked get call cannot be safely interrupted, which is the source of some of the application failover/start/stop peculiarities. If the non-polling get call fails for a reason other than a timeout, the connection is terminated and recreated, and another get commences after the reconnection interval. For the listener, polling versus non-polling has nothing to do with how messages are processed; the flow is exactly the same.
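
The non-polling case, under the same assumptions as the previous sketch, looks roughly like this: the get waits with MQWI_UNLIMITED, and any failure tears down the connection and recreates it after the reconnection interval (RECONNECT_INTERVAL_MS is an illustrative name for that setting).

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class IndefiniteGetSketch {
    static final int RECONNECT_INTERVAL_MS = 10000; // illustrative "Reconnection Interval"

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            MQQueueManager qmgr = null;
            try {
                qmgr = new MQQueueManager("QM1");
                MQQueue queue = qmgr.accessQueue("IN.QUEUE", CMQC.MQOO_INPUT_AS_Q_DEF);

                MQGetMessageOptions gmo = new MQGetMessageOptions();
                gmo.options = CMQC.MQGMO_WAIT;
                gmo.waitInterval = CMQC.MQWI_UNLIMITED; // wait forever; this call cannot be
                                                        // safely interrupted, hence the
                                                        // stop/failover peculiarities

                while (true) {
                    MQMessage msg = new MQMessage();
                    queue.get(msg, gmo); // blocks until a message arrives
                    process(msg);
                }
            } catch (MQException e) {
                // Any failure other than a timeout: terminate the connection,
                // then recreate it after the reconnection interval.
                if (qmgr != null) {
                    try { qmgr.disconnect(); } catch (MQException ignored) { }
                }
                Thread.sleep(RECONNECT_INTERVAL_MS);
            }
        }
    }

    static void process(MQMessage msg) { /* start a BW process instance */ }
}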

For the listener, if polling is enabled there will be intervals during which the listener is not actively listening on the queue (because it is between polls). Some extra latency is therefore possible, but that latency is controllable. Polling also eliminates the stuck-threads problem.

The client confirm option has the most dramatic effect on the processing of messages. If it is not enabled, the listener receives messages and spawns new process instances at whatever rate messages arrive; it is then up to the engine to throttle the listener based on its flow-control parameters, as sketched below.
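
As a hedged illustration of that throttling, the sketch below stands in for the engine's flow control with a bounded thread pool: when all workers are busy, the dispatching thread is forced to run the work itself, which back-pressures the get loop. FLOW_LIMIT is an illustrative constant, not a plug-in or engine property name.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class NoConfirmDispatchSketch {
    static final int FLOW_LIMIT = 8; // illustrative stand-in for the engine's flow-control limit

    // Bounded pool: when all workers are busy, CallerRunsPolicy makes the
    // listener thread run the job itself, throttling the dispatch rate.
    static final ThreadPoolExecutor engine = new ThreadPoolExecutor(
            FLOW_LIMIT, FLOW_LIMIT, 0L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(FLOW_LIMIT),
            new ThreadPoolExecutor.CallerRunsPolicy());

    // Called by the listener for each message; without client confirm this
    // happens at whatever rate messages arrive.
    static void onMessage(byte[] payload) {
        engine.execute(() -> runProcess(payload));
    }

    static void runProcess(byte[] payload) { /* BW process instance */ }
}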

If the listener has client confirm enabled, it receives a message, starts a process, and then waits for the Confirm activity to notify it that it can commit the message off the queue, at which point it gets the next message. This is very different and makes each listener effectively single-threaded. That is why the "listener instances" parameter exists: it lets you run multiple listeners in client confirm mode with good throughput.
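
A minimal sketch of that flow, assuming IBM MQ syncpoint semantics stand in for client confirm: each listener instance gets a message under syncpoint, blocks until the process confirms, commits the message off the queue, and only then gets the next one. LISTENER_INSTANCES and runProcessAndAwaitConfirm are illustrative names, not plug-in APIs.

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class ClientConfirmSketch {
    static final int LISTENER_INSTANCES = 4; // illustrative "listener instances" value

    public static void main(String[] args) {
        for (int i = 0; i < LISTENER_INSTANCES; i++) {
            new Thread(ClientConfirmSketch::listen).start(); // one connection per instance
        }
    }

    static void listen() {
        try {
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("IN.QUEUE", CMQC.MQOO_INPUT_AS_Q_DEF);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_SYNCPOINT; // get under syncpoint
            gmo.waitInterval = 5000;

            while (true) {
                MQMessage msg = new MQMessage();
                try {
                    queue.get(msg, gmo);
                } catch (MQException e) {
                    if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) continue; // queue empty
                    throw e;
                }
                runProcessAndAwaitConfirm(msg); // blocks until the Confirm activity fires
                qmgr.commit();                  // only now is the message off the queue
            }
        } catch (MQException e) {
            // reconnection handling omitted for brevity
        }
    }

    static void runProcessAndAwaitConfirm(MQMessage msg) { /* hypothetical helper */ }
}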

The listener will not close connections or destinations between messages unless it gets an error. The Put/Get activities (if using pooled connections) close destinations after the process ends, but not the connection, because it is pooled. If you pool the activity and not the connection, the activity keeps both the connection and the destinations open across process flows.
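
A minimal sketch of that lifecycle for a pooled put, assuming a generic pool (a plain ArrayBlockingQueue here, not the plug-in's internal pool): the destination is opened and closed per use, while the connection is returned to the pool rather than disconnected.

import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PooledPutSketch {
    static final BlockingQueue<MQQueueManager> pool = new ArrayBlockingQueue<>(4);

    static void put(byte[] payload) throws MQException, java.io.IOException {
        MQQueueManager qmgr = pool.poll();
        if (qmgr == null) qmgr = new MQQueueManager("QM1"); // grow lazily on demand

        MQQueue queue = qmgr.accessQueue("OUT.QUEUE", CMQC.MQOO_OUTPUT);
        try {
            MQMessage msg = new MQMessage();
            msg.write(payload); // message body
            queue.put(msg, new MQPutMessageOptions());
        } finally {
            queue.close();                            // destination closed after the process ends...
            if (!pool.offer(qmgr)) qmgr.disconnect(); // ...but the connection returns to the pool
        }
    }
}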

Environment

OS : ALL

Issue/Introduction

Connection pooling vs Listener polling in TIBCO ActiveMatrix BusinessWorks Plug-in for WebSphere MQ