How are queues for multiple asynchronous container connections managed?

Article ID: KB0074460

Products: TIBCO Streaming
Versions: 7

Description

How are queues for multiple asynchronous container connections managed?

Issue/Introduction

How are queues for multiple asynchronous container connections managed?

Resolution

Every output stream gets a queue that serves all asynchronous clients sharing the same filter expression (or "predicate"). A single output stream with multiple consumers may therefore have more than one queue, serving different sets of consumers. The server maintains a separate pointer into the queue for each downstream consumer, so faster consumers are not starved while slower consumers have yet to dequeue their tuples. The slowest consumer determines how many tuples the server must retain in the queue.
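
As an illustration only (this is a simplified model, not StreamBase source code, and all names below are hypothetical), the Java sketch here shows one shared queue with a per-consumer read pointer: each consumer advances independently, and only the tuples still needed by the slowest consumer are retained.

    import java.util.*;

    /* Simplified model of one output-stream queue shared by several asynchronous consumers. */
    public class SharedQueueModel {
        private final List<String> buffer = new ArrayList<>();        // tuples still retained
        private final Map<String, Integer> readPos = new HashMap<>(); // per-consumer pointer
        private int trimmed = 0;                                      // tuples already discarded

        public void subscribe(String consumer) {
            readPos.put(consumer, trimmed + buffer.size());           // new consumers start at the tail
        }

        public void enqueue(String tuple) {
            buffer.add(tuple);                                        // the producer is not blocked here
        }

        /* A consumer takes its next tuple; fast consumers are not held back by slow ones. */
        public String dequeue(String consumer) {
            int pos = readPos.get(consumer);
            if (pos - trimmed >= buffer.size()) return null;          // this consumer is caught up
            String tuple = buffer.get(pos - trimmed);
            readPos.put(consumer, pos + 1);
            trim();                                                   // the slowest consumer sets retention
            return tuple;
        }

        private void trim() {
            int slowest = readPos.values().stream().mapToInt(Integer::intValue).min().orElse(trimmed);
            while (trimmed < slowest && !buffer.isEmpty()) {
                buffer.remove(0);
                trimmed++;
            }
        }

        public static void main(String[] args) {
            SharedQueueModel q = new SharedQueueModel();
            q.subscribe("fast");
            q.subscribe("slow");
            q.enqueue("t1");
            q.enqueue("t2");
            System.out.println(q.dequeue("fast")); // t1 -- not blocked by "slow"
            System.out.println(q.dequeue("fast")); // t2
            System.out.println(q.dequeue("slow")); // t1 was retained for the slow consumer
        }
    }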

A queue expands automatically whenever more space is needed to buffer upstream tuples while downstream consumers catch up; this keeps the server from blocking on new streaming data. The queue grows as far as is required to accommodate the slowest downstream consumer.

For StreamBase: If the consumer is an internal container connection, there is no upper bound on queue size other than the configured JVM heap memory available. If the consumer is external (a StreamBase Client API application, e.g. an external adapter, a client, or a second StreamBase Server instance), then the maximum queue size is determined by the max-client-pages and page-size settings in the Server Configuration file (sbd.sbconf), as sketched below.
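
A minimal sketch of the relevant portion of sbd.sbconf, assuming the usual <server>/<param name="..." value="..."/> element form; the values shown are purely illustrative, not recommendations:

    <?xml version="1.0" encoding="UTF-8"?>
    <streambase-configuration>
      <server>
        <!-- Per-client queue limit is page-size (bytes) multiplied by max-client-pages -->
        <param name="page-size" value="8192"/>
        <param name="max-client-pages" value="1024"/>
      </server>
    </streambase-configuration>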

For TIBCO Streaming: If the consumer is an internal container connection or a junction between two separate parallel regions, the upper bound may optionally be limited using the StreamBaseEngine configuration settings parallelRegionQueues and defaultMaxBufferSize, as sketched below. If a parallel region would exceed this limit, it instead blocks until queue space is made available by downstream processing.
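
A sketch of the corresponding HOCON engine configuration; the file-level fields and the exact nesting under StreamBaseEngine are assumed from the standard StreamBaseEngine configuration layout, and the value is illustrative only (consult the TIBCO Streaming configuration guide for the authoritative schema):

    name = "my-engine"            // hypothetical configuration name
    version = "1.0.0"
    type = "com.tibco.ep.streambase.configuration.sbengine"
    configuration = {
      StreamBaseEngine = {
        streamBase = {
          parallelRegionQueues = {
            defaultMaxBufferSize = 1024   // assumed per-region queue limit; value is illustrative
          }
        }
      }
    }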

If a slow external client has forced a queue to grow to page-size (bytes) * max-client-pages, then that client will be disconnected and the queue reduced to the size needed by the next slowest client. A client may always reconnect, but it will have missed receiving the data removed from the queue.
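
For example, with the illustrative values above (page-size = 8192 bytes and max-client-pages = 1024), a single slow client could force at most 8192 * 1024 = 8,388,608 bytes (8 MiB) of buffering before being disconnected.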

If you are seeing evidence of excessive queuing (high memory use, disconnected clients), then the downstream consumers need to be made faster or to take in less data. In a well-running server, very little queuing occurs except to absorb temporary increases in message rates.