Why is my EMS server footprint very large? I get "SEVERE ERROR: No memory ....."

Article ID: KB0093126

Products: TIBCO Enterprise Message Service
Versions: Not Applicable

Description

Resolution:
Facts:

The TIBCO EMS Server "Message Memory Usage" value accounts only for the memory used to hold messages and for the per-connection buffers used to assemble messages read from the socket. It does not include memory used for other purposes, such as lists, hash tables, the index of written records, etc.

Many factors can cause the tibemsd process memory footprint to grow beyond your expectations, such as:

-- Number of pending messages
-- Buffers for each connected connection
-- Number of durables
-- Number of connection objects, session objects, producers, consumers, temporary destinations ......
-- How badly the db files are fragmented
-- Number of pending messages in the CM ledger file, if you export EMS messages to RV CM messages
-- The size of reserved memory
-- Number of routes
-- ......

Troubleshooting:

If you want TIBCO Support to help you identify the reasons for a high EMS server footprint, please send us the following information:

1. Send us the output of the following tibemsadmin commands (an example of gathering them in one pass follows this list):
-- time
-- info
-- show queues
-- show topics
-- show stat consumers
-- show stat producers
-- show consumers full  (After EMS 4.4.0)
-- show connections full (After EMS 4.4.0)
-- show db
-- show routes
-- show durables
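
If it is convenient, these commands can be gathered in one pass by putting them in a text file and running tibemsadmin with its -script option (a sketch; the server URL, user name, and file names below are placeholders for your environment):

  # collect_ems_info.txt lists the admin commands above, one per line
  tibemsadmin -server tcp://emshost:7222 -user admin -script collect_ems_info.txt > ems_diagnostics.out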

2. Send us the full EMS Server log file.

3. Send us all of the .conf files.

4. If you are exporting EMS messages to RV CM messages, please send us the RV CM ledger file.

5. If possible, please send us the problematic EMS server core file and stack trace.
  
6. Sometimes a tcpdump raw packet capture may help us isolate the problem, so if possible, run tcpdump/windump/snoop... for about 5 minutes.
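
For example, on Linux a capture of the EMS listen port might look like this (a sketch; 7222 is the default EMS listen port, substitute your actual port and interface):

  tcpdump -i any -s 0 -w ems_capture.pcap port 7222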

Analyzing:

1. Verify whether the EMS server has a lot of pending messages by checking the "info" results, e.g.:
Pending Messages:         27000
Pending Message Size:     9.1 GB
Message Memory Usage:     29.6 MB out of 256MB

If you have a lot of pending messages, the tibemsd process memory will be high; the tibemsd process memory can be 4 or 5 times larger than the size of max_msg_memory.

2. Check the "show db" results to see whether any db file is huge (>1GB) and has a lot of free space.

3. If you notice a lot of connection/session/producer/consumer/temporary destination/..... objects, verify whether the number of those objects is expected. If it is not, investigate your application for an object leak.

4. If you are exporting EMS messages to RV CM messages and the CM ledger file is huge, investigate why there are so many pending messages in the ledger file and make sure all CM listeners are up and running.

5. If you have a lot of routes in your configuration, especially if the problematic tibemsd is the central routing server, verify the route status and the route protocol messages.

6. If you have 100 durables on a topic and 99 of them have acknowledged all their messages but the 100th one is offline, all of the messages are kept for that offline durable, along with an acknowledgment record for each message from each of the other 99 durables. This can lead to high memory usage and large db files due to the large number of acks.

Suggestions:

There is currently no way to limit the EMS server process size directly. However, in our experience, setting max_msg_memory to an appropriate value and compacting the db files regularly usually helps keep the EMS server process size in check.

1. As you know, the EMS server process will always be substantially bigger than the value set by max_msg_memory. Even so, using that parameter effectively limits the process size: while the server process size will be bigger, it is bounded at *some* point and will not grow beyond it. How much bigger is difficult to say, as it is heavily platform-dependent and application-dependent.

If you want to use max_msg_memory to limit the actual tibemsd process size, you should measure the actual tibemsd size on a given platform, for a given max_msg_memory, and for a given deployment scenario (nature of the messages, size, delivery mode, etc.). Test your particular scenario on your deployment platform and check what the actual size is for a given max_msg_memory value. Once tested, the tibemsd size will always stay fairly close to that value.
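
For example, on most Unix platforms you can sample the virtual and resident size of the running tibemsd while your test load is applied (a sketch; replace <tibemsd_pid> with the actual process id, and note that column keywords can differ slightly between operating systems):

  ps -o pid,vsz,rss,comm -p <tibemsd_pid>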

In our experience, in many situations the tibemsd process memory footprint can be 4 or 5 times the size of max_msg_memory when the max_msg_memory limit is reached, and the OS places a limit on process memory. So, in general, if you have not run the above tests for your application, you should set max_msg_memory <= "OS process limit" / 4. E.g.

32-bit Windows: <=512MB
32-bit HPUX: <=256MB

You would need to use the 64-bit version (tibemsd64) if you need to store a huge number of messages.
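
For example, on a 32-bit Windows host the main configuration file (tibemsd.conf) could cap message memory like this (a sketch; choose the value from the "/4" rule above or, better, from your own measurements):

  max_msg_memory = 512MB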

2. Compact the db files periodically. Make sure you use a timeout with the compact command, because all messaging operations are suspended while an admin command executes, and compact is one of them.
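
For example, from tibemsadmin (a sketch; the exact compact syntax and store names vary by EMS version, here the default failsafe store and a 30-second time limit are assumed):

  compact $sys.failsafe 30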

3. Sometimes EMS destination flow control ("flowControl") may help. Note, however, that the destination "maxbytes" property may discard messages, and flowControl works only when a consumer is actively consuming.
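
For example, in queues.conf a destination could be given a flow control target and an upper bound (a sketch; the queue name and sizes are placeholders, and flow_control must also be enabled in tibemsd.conf for the flowControl property to take effect):

  myqueue   flowControl=10MB,maxbytes=50MB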

Issue/Introduction

Why is my EMS server footprint very large? I get "SEVERE ERROR: No memory ....."