Sizing Conditions for BusinessConnect Container Edition


Article ID: KB0137994


Products: TIBCO BusinessConnect Container Edition
Versions: 1.6.0

Description

Performance considerations

Before we make any sizing recommendations for each server pod, let’s review each component’s functionality in terms of its performance requirements.


Gateway Server

The GS is primarily a “pipe” and is designed with minimal functionality. Its main purpose is to proxy inbound HTTP/S/CA traffic, and it stores no information locally. For redundancy in high-traffic situations, a minimum of two GS instances should run in different pods, with a load balancer placed in front of the GS input port to distribute inbound traffic among them.

Interior Server

The IS is the main engine of BCCE, where inbound and outbound transactions are relayed between the BusinessWorks private processes and the public transports.  The IS sends outbound requests to trading partners directly, or through an HTTP/FTP/SOCK5 proxy if configured.  The IS also generates events that get forwarded to AuditSafe for logging, and reads configuration information from the BC database.

 

Multiple IS instances can be implemented across multiple pods to scale up transaction processing capacity.  While its main function is formatting and moving transactions, the IS also performs a number of compute-bound processes associated with encryption/decryption and signature generation/verification.

Poller Server

The Poller Server performs a number of housekeeping functions for the BC instance.  Some of the specific functions:

  • Runs all inbound SFTP/FTP/Email client transactions on a periodic basis
  • Initiates all batch processing in the IS
  • Initiates all transactions marked for resend
  • Cleans up transactions that have expired (such as MDNs or overdue ACKs) or timed out (responses not received from BW or public transports)
  • Checks for the expiration of certificates in the Certificate Store

 

Only one Poller Server can run in a given BCCE instance.  The PS is mainly I/O-intensive (queries to the BCCE database and external servers), and its operation is periodic in nature, with long idle stretches between activity spikes.

ConfigStore Management Server (CMS)

The ConfigStore Management Server handles all configuration CRUD operations against the BCCE database (Oracle, MySQL, or MS SQL Server) and serves as the API server for front-end components such as BCCE-AS and the PMP web server.  It is mainly a web server handling inbound requests from other BCCE components. Because CMS servers are stateless, multiple instances can run concurrently.

Configuration API Server (CAS)

The Configuration API Server exposes REST API endpoints that perform export and import functions, e.g., for data migration purposes.  Its use will be sporadic in most environments.  When invoked, it mainly performs read/write operations against the BCCE configuration database.

Authentication Server

The Authentication Server authenticates and authorizes users of the BCCE / Auditsafe instances, and handles CRUD operations against the authentication database.  It is mainly I/O intensive.

AuditSafe

AuditSafe (TAS) serves as the backend database and search engine for BCCE transactions.

Web Server (AuditSafe-WS)

The Web Server is responsible for presenting audit data to the user.  As such, it can be characterized as a typical web server that handles data requests and presentation.

Data Server (AuditSafe-DS)

The Data Server is where BCCE transactions relayed from the IS are posted to the AuditSafe datastore (currently Elasticsearch and OpenSearch have been certified).  The DS is an I/O-intensive component that does not perform much computation.

 

Multiple WS and DS instances can be implemented to improve throughput.

 

Sizing Exercise

Now that we understand the performance characteristics of each service, let’s apply this knowledge to a real-world situation.

Performance Benchmark

To establish a performance baseline, the TIBCO performance team conducted a continuous, multi-day benchmark test, which yielded the following results under the named conditions.

  • TIBCO Test Environment: The test utilized a two-node native Kubernetes cluster for both the initiator and responder components. No CPU or memory limits were configured on the pods, in order to determine the system's maximum capacity. The test used a 2KB payload and an X12 850 (Purchase Order) operation, with TA1 and 997 ACKs sent back.
  • Results: The setup successfully and consistently processed approximately 90,000 transactions per day over a steady four-day run.
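As a quick sanity check, the benchmark figure translates to a modest average sustained rate. The arithmetic below is a sketch; dividing by 86,400 seconds assumes transactions are spread evenly across the day, which real B2B traffic rarely is, so peak rates will be several times higher.

```python
# Convert the benchmark's daily throughput into an average sustained rate.
# Assumes an even spread across the day (an idealization; real traffic is bursty).
BENCHMARK_TX_PER_DAY = 90_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

avg_tps = BENCHMARK_TX_PER_DAY / SECONDS_PER_DAY
print(f"Average sustained rate: {avg_tps:.2f} tx/s")  # ~1.04 tx/s
```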

Sizing and Scaling Recommendations

Let’s say we have the following data processing requirement in production:

  • 65,000 transactions per day
  • Average payload size of 2KB
  • Typical deployment without any special need (like placing GS in a separate region/cluster from IS)
  • Protocol: X12
  • Operations: typical, e.g., Purchase Order, Invoice, Ship Notice, Payment Notice

Based on the benchmark results and the customer's daily volume of 65,000 transactions, we provide the following sizing and scaling recommendations.

Pod Counts and Sizes

For the customer's current load, a baseline of two pods is advised for bcce-is, bcce-gs, fsrest, auditsafe-ds, and bcce-ps (if poller-related configuration is used), with the resources listed below. This provides sufficient capacity for the 65,000 daily transactions while maintaining adequate headroom for processing peaks.

We recommend a scaling strategy of adding one pod for every 30,000 daily transactions. It is more effective to scale horizontally (adding more pods) than to scale vertically (increasing the CPU/memory of a few pods). This approach increases the total number of available server threads, which better handles a high volume of concurrent transactions and improves overall system resilience.
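One plausible reading of this rule of thumb can be sketched as follows. This is an illustration only: the function name and the assumption that the two-pod baseline covers the 65,000-transaction scenario above are ours, not product constants, so adjust the parameters to match your own validated baseline.

```python
import math

def recommended_pods(daily_tx: int,
                     baseline_pods: int = 2,
                     baseline_capacity: int = 65_000,  # assumed from the scenario above
                     tx_per_extra_pod: int = 30_000) -> int:
    """Sketch of the horizontal-scaling rule of thumb: start from the
    two-pod baseline, then add one pod for every 30,000 daily
    transactions beyond the baseline capacity."""
    extra = max(0, daily_tx - baseline_capacity)
    return baseline_pods + math.ceil(extra / tx_per_extra_pod)

print(recommended_pods(65_000))   # 2 (the baseline)
print(recommended_pods(125_000))  # 4
```

Scaling horizontally this way increases the total number of server threads available, which is why it handles concurrent load better than giving a few pods more CPU and memory.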

We suggest running a proof-of-concept (POC) in a non-production setting by processing a few thousand transactions spread throughout the day, mimicking the desired production end-state. This will help document the system's behavior and confirm that the proposed resource allocation is optimal, before extrapolating and deploying to production.

| Component | Number of Pods | vCPUs per pod | Memory per pod |
| --- | --- | --- | --- |
| Gateway Service | 2 | 3 | 8GB |
| EMS | 3 | 4 | 8GB |
| FS REST | 2 | 4 | 8GB |
| Admin Service | 1 | 2 | 4GB |
| ConfigStore Management Service | 2 | 2 | 6GB |
| Interior Service | 2 | 4 | 8GB |
| Poller Service | 2 | 4 | 6GB |
| Data Server | 2 | 2 | 4GB |
| Web Service | 1 | 2 | 4GB |
| BWCE Pods | As per use cases | 1 | 2GB |
| BC-Partner Management Portal | 1 | 2 | 4GB |
| BC-AUS | 1 | 2 | 2GB |

Horizontal Pod Autoscaling (HPA)

The core components for X12 processing (bcce-is, bcce-gs, fsrest, auditsafe-ds, bcce-ps) will experience constant utilization. To ensure performance during unexpected load spikes, we strongly recommend enabling the Horizontal Pod Autoscaler (HPA) in Kubernetes. The HPA will automatically provision new pods when resource utilization exceeds predefined thresholds and scale them down during quieter periods, ensuring both performance and cost-efficiency.
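A minimal HPA manifest for one of these components might look like the following. This is a sketch only: the resource names, namespace, 70% CPU target, and min/max bounds are illustrative assumptions, not product defaults, and should be tuned against your own observed utilization.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: bcce-is-hpa        # illustrative name
  namespace: bcce          # assumed namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bcce-is          # assumed Deployment name for the Interior Server
  minReplicas: 2           # matches the recommended two-pod baseline
  maxReplicas: 5           # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Note that CPU-based HPA requires resource requests to be set on the target pods, so the "no limits configured" approach used in the benchmark test is not suitable for an HPA-managed production deployment.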

Other Considerations

The recommendations are based on our benchmark analysis. However, it is always best practice to validate performance within the customer's own environment, as performance may vary depending on each environment’s idiosyncratic characteristics.

The resource profile for the bcce-is, bcce-gs, fsrest, bcce-ps and auditsafe-ds pods may need to change based on the rate of incoming transactions:

  • Transaction Bursts: When transactions arrive in a burst, the pods become CPU-intensive to handle the immediate processing demands.
  • Steady Pace: When transactions are processed at a steady pace, the pods exhibit higher RAM utilization, as more resources are dedicated to managing data over time.

Therefore, it is crucial for customers to monitor their transaction patterns and adjust pod resource allocations (CPU and memory) accordingly, for example when transaction volumes increase or file sizes grow, to ensure optimal performance and stability.

Test Environment

For test environments, we recommend fewer pods and fewer resources per pod to save on cloud costs.

| Component | Number of Pods | vCPUs per pod | Memory per pod |
| --- | --- | --- | --- |
| Gateway Service | 1 | 2 | 8GB |
| EMS | 1 | 2 | 4GB |
| FS REST | 1 | 2 | 8GB |
| Admin Service | 1 | 2 | 4GB |
| ConfigStore Management Service | 1 | 1 | 2GB |
| Interior Service | 1 | 2 | 8GB |
| Poller Service | 1 | 2 | 4GB |
| AuditSafe Data Service | 1 | 1 | 4GB |
| AuditSafe Web Service | 1 | 1 | 4GB |
| BWCE Pods | As per use cases | 1 | 2GB |
| BC-PMP | 1 | 1 | 2GB |
| BC-AUS | 1 | 1 | 2GB |

Conclusion

The performance characteristics of key BCCE services were discussed. Based on previously recorded benchmark results, pod counts and sizes were recommended to address a real-world scenario. Sizing is as much an art as it is a science. We recommend running your own benchmark in your POC environment to fine-tune the counts and CPU/RAM sizes of your production pods, using the recommendations as a starting point.

Issue/Introduction

TIBCO BusinessConnect Container Edition (“BCCE”) is the modern, cloud-native version of TIBCO BusinessConnect (“BC”). Its cloud-native services run in Docker containers, which can be deployed and orchestrated using Kubernetes on any major cloud platform such as AWS, Azure, or Google Cloud. In this article we discuss an approach for counting and sizing the required pods running BCCE components in the cloud. The various components of BCCE are explained in a separate article (BCCE Architecture - Components).