Article ID: KB0074978
Description
HTTP session replication may need to be configured when the MDM application runs in a cluster behind a load balancer.
Issue/Introduction
How to configure HTTP session replication for the MDM application in a load-balanced cluster setup.
Resolution
Listed below are the instructions to configure HTTP session replication.
Configure HTTP session replication in WildFly. The same configuration applies to JBoss EAP as well.
1) To enable session replication in MDM, add the <distributable/> tag to the *ECM.ear/EML.war/WEB-INF/web.xml* descriptor.
web.xml
------------
<web-app>
....
....
<distributable/>
</web-app>
Update ECM.ear/EML.war/WEB-INF/jboss-web.xml with the changes below:
<jboss-web>
<replication-config>
<replication-trigger>SET_AND_NON_PRIMITIVE_GET</replication-trigger>
<replication-granularity>SESSION</replication-granularity>
</replication-config>
</jboss-web>
2) Before starting the JBoss server, export the variable MQ_HTTP_SESSION_REPLICATION_ENABLED=true.
Do not set this parameter as a -D argument to java; set it as a system environment variable.
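For example, assuming a bash shell and the standard standalone.sh start script (the path and configuration name are illustrative), the variable can be exported in the same shell session that starts the server:
# Set as an environment variable, not as a -D JVM argument
export MQ_HTTP_SESSION_REPLICATION_ENABLED=true
# Start the server from the same shell (adjust the path and profile to your environment)
$JBOSS_HOME/bin/standalone.sh -c standalone-ha.xml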
3) Session replication requires *standalone-ha.xml*. Copy the existing updates from standalone.xml into standalone-ha.xml, then make the following changes in standalone-ha.xml to enable session replication.
Change
<socket-binding name="jgroups-tcp" interface="private" ....../>
to
<socket-binding name="jgroups-tcp" interface="public" ....... />
Changes to the jgroups subsystem (the default channel stack is switched from tcp to tcpping, and the TCPPING protocol with the initial_hosts list is added):
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
<channels default="ee">
<!--channel name="ee" stack="tcp"/-->
<channel name="ee" stack="tcpping"/>
</channels>
<stacks>
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK"
socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
<!--stack name="tcp"-->
<stack name="tcpping">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="TCPPING">
<property name="initial_hosts">10.7.2.141[37620],10.7.2.142[37620],10.7.2.143[37620],10.7.2.144[37620],10.7.2.145[37620]</property>
<property name="port_range">10</property>
<property name="timeout">3000</property>
<property name="num_initial_members">5</property>
</protocol>
<!--protocol type="MPING" socket-binding="jgroups-mping"/-->
<protocol type="MERGE3"/>
<protocol type="FD_SOCK"
socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
</stacks>
</subsystem>
timeout -- specifies the maximum number of milliseconds to wait for any responses. The default is 3000.
num_initial_members -- specifies the maximum number of responses to wait for unless the timeout has expired. The default is 2.
initial_hosts -- a comma-separated list of addresses (e.g., host1[12345],host2[23456]) to be pinged.
port_range -- specifies the number of consecutive ports to be probed when getting the initial membership, starting with the port specified in the initial_hosts parameter. For example, with initial_hosts=hosta[2300],hostb[3400],hostc[4500] and port_range=3, the TCPPING layer will try to connect to hosta:2300, hosta:2301, hosta:2302, hostb:3400, hostb:3401, hostb:3402, hostc:4500, hostc:4501 and hostc:4502. This allows multiple nodes on the same host to be pinged.
The default jgroups port is *7600*, and the server's port offset is applied to it. If a port offset of X is specified at server startup, the port in the initial_hosts list must be X+7600. For example, if the port offset is 30020 and the jgroups port is 7600, then the port in initial_hosts must be 37620.
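As a sketch (the offset value and path are taken from the example above; adjust to your environment), each node would be started with the HA profile and the matching port offset:
# Start the node with the HA profile and the port offset used above;
# the effective jgroups port then becomes 30020 + 7600 = 37620
$JBOSS_HOME/bin/standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=30020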
4) Apply an OS-level configuration change to increase the network buffer sizes via the files below:
/etc/sysctl.d/net.core.rmem_max.conf
/etc/sysctl.d/net.core.wmem_max.conf
# Allow a 25MB UDP receive buffer for JGroups
net.core.rmem_max = 26214400
# Allow a 1MB UDP send buffer for JGroups
net.core.wmem_max = 1048576
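A minimal sketch for applying and verifying these kernel settings without a reboot (assuming a Linux host where sysctl picks up files under /etc/sysctl.d/; run as root):
# Reload all sysctl configuration files, including /etc/sysctl.d/*.conf
sysctl --system
# Verify that the new buffer limits are in effect
sysctl net.core.rmem_max
sysctl net.core.wmem_max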
5) To leverage these buffer settings, add the properties below in standalone-ha.xml:
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp">
<!-- Configure here for UDP send/receive buffers (in bytes) -->
<property name="ucast_recv_buf_size">20000000</property>
<property name="ucast_send_buf_size">640000</property>
<property name="mcast_recv_buf_size">25000000</property>
<property name="mcast_send_buf_size">640000</property>
</transport>
.....
.....
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp">
<!-- Configure here for TCP send/receive buffers (in bytes) -->
<property name="recv_buf_size">20000000</property>
<property name="send_buf_size">640000</property>
</transport>
.....
.....
</stack>