How to make only one node in the cluster connect to a service

Article ID: KB0073848

Products: TIBCO Streaming
Versions: 10

Description

I have an external service that my account should connect to only once, and a StreamBase EventFlow adapter that connects to it.
How can I ensure that, when this application runs in a cluster of multiple nodes, only one node's adapter is connected to the service at a time?

Issue/Introduction

Configuration guidance

Resolution

On the adapter's Cluster Aware tab set:
  •   Cluster start policy = Active with specified partitions 
  •   Active Partitions = default-cluster-wide-availability-zone_VP_0
For example:
(Screenshot: adapter Cluster Aware tab set with a single Active partition)

The Active Partitions name assumes you are using the built-in partition, data distribution policy, and availability zone configuration. If you have customized these, then the name will be different. 

The Once operator with these Cluster Aware settings in a multi-node cluster emits a tuple each time its node becomes active for the specified partition. Note that with the default configuration there are 64 partitions. Each partition is independently assigned, in a load-balanced way, one of the available nodes as its Active node. This assignment happens every time a node joins or leaves the cluster; a node joins its cluster when 'epadmin start node' is run. This change of assignments is called re-balancing.
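The assignment behavior described above can be sketched with a toy model. The round-robin rule below is only a stand-in for the product's internal load-balancing algorithm (which is not documented here); the point it illustrates is that every partition gets exactly one Active node, and that the whole assignment is recomputed whenever the node set changes:

```python
# Illustrative model only -- not the actual TIBCO Streaming algorithm.
NUM_PARTITIONS = 64  # default partition count

def assign_partitions(nodes):
    """Assign each partition exactly one Active node.

    Round-robin is a stand-in for the product's internal
    load balancing; only the shape of the result matters here.
    """
    nodes = sorted(nodes)
    return {p: nodes[p % len(nodes)] for p in range(NUM_PARTITIONS)}

# One-node cluster: that node is Active for every partition.
one = assign_partitions(["A"])
assert all(n == "A" for n in one.values())

# A second node joins; re-balancing spreads the partitions
# across both nodes, each partition still having one Active node.
two = assign_partitions(["A", "B"])
print("partition 0 Active on:", two[0])
print("distinct Active nodes:", sorted(set(two.values())))
```

With a single partition named in Active Partitions, the adapter therefore runs on exactly one node, whichever node the current assignment makes Active for that partition.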

Since this setting affects all adapter instances the same way, only one node will be active for the named partition at a time. This ensures that:
  • only one node's adapter is running when there are one or more nodes in the cluster, and 
  • when a node newly becomes active, the adapter connects and the Once operator can supply any startup command tuples needed.
If the adapter is on a node which is not Active on the named partition, then after re-balancing it may be:
a) still not Active but only a Replica node, and it will remain disconnected, or
b) Active on the partition, and the adapter will start up and connect as configured.

If the adapter is on a node which is Active on the named partition, then after re-balancing it may be:
a) still Active on the partition and it will remain connected, or
b) not Active on the partition and the adapter will disconnect.
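Both sets of transitions come down to one question: did re-balancing change the named partition's Active node? A minimal sketch, using a simple round-robin rule as a stand-in for the real assignment algorithm:

```python
# Illustrative model only -- not the actual TIBCO Streaming algorithm.
PARTITION = 0  # stand-in for default-cluster-wide-availability-zone_VP_0

def active_node(partition, nodes):
    """Round-robin stand-in: pick one Active node for a partition."""
    return sorted(nodes)[partition % len(nodes)]

before = active_node(PARTITION, ["A", "B", "C"])  # node A currently Active
after = active_node(PARTITION, ["B", "C"])        # node A leaves the cluster
print(before, "->", after)
# If the Active node changed, the old node's adapter disconnects
# and the newly Active node's adapter starts up and connects.
```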

If you want two nodes' adapters connected at all times, you may add a second partition to the Active Partitions list, or use the "Active on multiple nodes in the cluster" option.
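The two-partition variant can be illustrated with the same kind of toy model. Partition indices 0 and 1 below are stand-ins for two partition names in the Active Partitions list (the second name, by analogy with the built-in naming, is assumed here; check your actual configuration), and round-robin again stands in for the product's real load balancing:

```python
# Illustrative model only -- not the actual TIBCO Streaming algorithm.
def active_node(partition, nodes):
    """Round-robin stand-in: pick one Active node for a partition."""
    return sorted(nodes)[partition % len(nodes)]

nodes = ["A", "B", "C"]
active_partitions = [0, 1]  # two partitions listed in Active Partitions

# An adapter instance runs on every node that is Active for at least
# one listed partition, so with enough nodes two adapters stay connected.
connected = {active_node(p, nodes) for p in active_partitions}
print("nodes with a connected adapter:", sorted(connected))
```

Note that with fewer nodes than listed partitions, one node can be Active for both partitions, so two connected adapters are only guaranteed when the assignment places the listed partitions on different nodes.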

The other Cluster Aware options are:
  • Start with module (not aware)
  • Don't automatically start (not aware)
  • Active on a single node in the cluster 
  • Active on an availability zone
  • Active on multiple nodes in the cluster 
  • Active with specified partitions 
These allow for other combinations of activity. The "Active on a single node in the cluster" option sounds identical to the setting described above, but it is not: it is tied not to partitions, but to the count of active nodes in the cluster. If one of these named behaviors matches your requirement, use it instead of building equivalent logic from an explicit list of partitions. Test the behavior of the selected option to make sure it does what you require.

Note that the inactive node is still completely "live". The only difference is that tuples are not being emitted from the configured Cluster Aware adapter. The node is "warm", in that if data were provided it would work on it immediately. The node is not "hot" because data is not actively flowing, and it is not "cold" because all loading, initialization, and allocation has completed and threads are waiting. There is no specific switch to force a "cold", "warm", or "hot" state; the only control is per adapter and input stream, namely whether it is emitting tuples or not.