Article ID: KB0089488
Description
Resolution:
Description:
======
Performance tuning for EMS routes across a WAN.
1). There are two configuration parameters that allow you to change the server's socket send and receive buffer sizes in order to tune the send and receive rate.
In tibemsadmin, you can use the following admin commands to change the socket buffer size:
set server socket_receive_buffer_size=nnKB
set server socket_send_buffer_size=nnKB
A "show buffers" command in tibemsadmin will display the current values for these parameters. If you set these parameters from tibemsadmin, the new values are not persisted after the EMS server is restarted. If the buffer sizes need to be changed permanently, place the following parameters in the tibemsd.conf file and restart the server:
socket_receive_buffer_size=nnKB
socket_send_buffer_size=nnKB
Testing of the route connections can begin as soon as the servers are restarted. Place the above parameters in the main configuration file (we tested with a size of 1MB) and disconnect all clients before restarting both EMS servers. Make sure to test the EMS socket_receive_buffer_size and socket_send_buffer_size along with the equivalent Linux kernel socket buffer parameters (net.core.rmem_max, net.core.wmem_max, net.ipv4.tcp_rmem and net.ipv4.tcp_wmem).
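For example, on Linux the kernel-level limits named above can be raised with sysctl. The values below are illustrative only (a 1MB maximum, matching the size we tested with); choose values appropriate for your environment and make them permanent in /etc/sysctl.conf once the test is successful:
sysctl -w net.core.rmem_max=1048576
sysctl -w net.core.wmem_max=1048576
sysctl -w net.ipv4.tcp_rmem="4096 87380 1048576"
sysctl -w net.ipv4.tcp_wmem="4096 65536 1048576"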
2). Test with an increased prefetch value; the default queue prefetch value (5) is too low for routes across a WAN. The default topic prefetch value (64) should be fine most of the time; however, depending on the TCP packet round-trip time (RTT), you may still need to increase the topic prefetch value. In our internal tests, when the TCP packet RTT (derived from ping results) is about 100ms, we suggest testing with prefetch=256.
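For example, the prefetch for a routed destination can be raised in queues.conf (or topics.conf, which takes the same property), or at runtime from tibemsadmin with setprop. The destination name below is illustrative only:
In queues.conf:
sample.route.queue global,prefetch=256
From tibemsadmin:
setprop queue sample.route.queue prefetch=256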
3). If the message size is bigger than 30KB, please test with message compression: the message producer sets the EMS message property JMS_TIBCO_COMPRESS to true.
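For example, with the EMS Java (JMS) client the property is set on the message before it is sent. This is only a minimal sketch; the server URL, credentials, queue name and payload are illustrative:
import javax.jms.*;
import com.tibco.tibjms.TibjmsConnectionFactory;

public class CompressedSender {
    public static void main(String[] args) throws JMSException {
        // Illustrative connection details; replace with your own.
        ConnectionFactory factory = new TibjmsConnectionFactory("tcp://emshost:7222");
        Connection connection = factory.createConnection("admin", "");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("sample.route.queue"));

        TextMessage message = session.createTextMessage(buildLargePayload());
        // Ask the EMS client library to compress the message body before it is sent.
        message.setBooleanProperty("JMS_TIBCO_COMPRESS", true);
        producer.send(message);
        connection.close();
    }

    // Placeholder for a message body larger than ~30KB.
    private static String buildLargePayload() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 4000; i++) sb.append("0123456789");
        return sb.toString();
    }
}
Compression is performed by the client library, so the payload crosses the WAN in compressed form at the cost of some CPU on the producing and consuming clients.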
4). If you still see the problem, please provide TIBCO Support with the following information to help debug it:
a). tcpdump capture: "tcpdump -s 2000 -i <interface> -w <output file>"
b). tibemsadmin command results (one way to collect these in a single pass is sketched after this list)
c). #################################################
- time on
- timeout 120
- info
- show connections full
- show topics
- show queues
- show durables
- show routes
- show consumers full
- show stat producers
- show bridges
- show buffers
- show db
#################################################
d). Where are the db files located (local file system or mounted file system)?
e). pstack <tibemsd pid> output: 5 captures at 3-second intervals.
f). CPU/memory utilization for tibemsd and system.
g). Set the log trace for both EMS servers to "DEFAULT,+CONNECT,+PRODCONS,+ROUTE_DEBUG,+MSG" and send us the log files for both EMS servers.
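For example, the trace in item (g) can be enabled permanently in tibemsd.conf or at runtime from tibemsadmin (the runtime setting is not persisted across a server restart):
In tibemsd.conf:
log_trace=DEFAULT,+CONNECT,+PRODCONS,+ROUTE_DEBUG,+MSG
From tibemsadmin:
set server log_trace=DEFAULT,+CONNECT,+PRODCONS,+ROUTE_DEBUG,+MSG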
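For item (b), one way to collect all of the admin command output in a single pass is to place the commands listed above in a text file and run tibemsadmin with its -script option. The connection URL, credentials, and file names below are illustrative only:
tibemsadmin -server tcp://<host>:7222 -user admin -script ems_diag.txt > ems_diag_output.txt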
Issue/Introduction
Route performance tuning across a WAN