Re: Realserver failover problem using ssl and tomcat

To: "Horms" <horms@xxxxxxxxxxxx>
Subject: Re: Realserver failover problem using ssl and tomcat
Cc: lvs-users <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: "Jason Downing" <jasondowning@xxxxxxxxxxxxxxxx>
Date: Thu, 29 Jun 2006 16:49:52 +1000
Thanks Horms, that fixes the problem. However, the system resets this value to 0 whenever Debian starts. I've put the value into sysctl.conf by adding:

net.ipv4.vs.expire_nodest_conn = 1

and if I run sysctl -p it updates the variable to 1. However, on restart the variable is back to 0. I have worked out that this is because the vs directory is deleted on boot (or maybe shutdown) and not re-created until an ipvsadm command is issued. This means the entry in sysctl.conf has no effect, because the directory where the file lives does not exist at the time sysctl.conf is applied.

I have written an init.d script (yes, I know it's a complete hack) which runs along with the other init.d startup scripts. It causes the directory to be created (by issuing /sbin/ipvsadm -L -n) and then runs sysctl -p to put the variable in place. Here is the hack:

#!/bin/sh
# Listing the virtual server table loads ip_vs, which creates /proc/sys/net/ipv4/vs
/sbin/ipvsadm -L -n
sleep 1
# Re-apply /etc/sysctl.conf now that the vs directory exists
sysctl -p

Then I used:

/usr/sbin/update-rc.d expire_nodest_conn start 75 2 3 4 5 . stop 05 0 1 6 .

to make it run on boot. I also put the line:

net.ipv4.vs.expire_nodest_conn = 1

into /etc/sysctl.conf
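Putting the pieces above together, the hack can be written as a minimal init script. This is only a sketch of the approach described in this message (the case/exit wrapper is my addition, and /tmp is used here just for illustration; the real script would live in /etc/init.d/expire_nodest_conn):

```shell
# Write the init script (hypothetical full version of the hack above)
cat > /tmp/expire_nodest_conn <<'EOF'
#!/bin/sh
# Hack: load ip_vs so /proc/sys/net/ipv4/vs exists, then re-apply sysctl.conf
case "$1" in
  start)
    /sbin/ipvsadm -L -n   # side effect: loads ip_vs, creating the vs directory
    sleep 1
    sysctl -p             # now net.ipv4.vs.expire_nodest_conn = 1 takes effect
    ;;
  stop)
    ;;                    # nothing to undo on shutdown
esac
exit 0
EOF
sh -n /tmp/expire_nodest_conn && echo "syntax ok"
```

With the file installed as /etc/init.d/expire_nodest_conn and made executable, the update-rc.d command below wires it into the runlevels.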

Thanks for the help, Jason

----- Original Message -----
From: "Horms" <horms@xxxxxxxxxxxx>
To: "Jason Downing" <jasondowning@xxxxxxxxxxxxxxxx>
Cc: "lvs-users" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Sent: Wednesday, June 28, 2006 4:52 PM
Subject: Re: Realserver failover problem using ssl and tomcat

On Wed, Jun 28, 2006 at 04:41:29PM +1000, Jason Downing wrote:
I'm talking about existing connections. I'm pretty sure new connections are
absolutely fine, but I will test this to make sure, thanks for reminding
me. I am not using persistence (although I have tried it and results were
the same).

Ok, if it is existing connections that are the problem, then this is
expected behaviour, which can be changed using

   expire_nodest_conn - BOOLEAN

       0 - disabled (default)
       not 0 - enabled

       With the default value of 0, the load balancer will silently
       drop packets when the destination server is not available. This
       can be useful when a user-space monitoring program deletes the
       destination server (because of server overload or wrong
       detection) and adds the server back later, since existing
       connections to the server can then continue.

       If this feature is enabled, the load balancer will expire the
       connection immediately when a packet arrives and its
       destination server is not available, and the client program
       will be notified that the connection is closed. This is
       equivalent to the feature some people have requested: flushing
       connections when their destination is not available.

I will try a 2.6 kernel and let you know the results. It will take me a
while to do because my previous limited experience of changing kernels has
always resulted in considerable head scratching....

I'm pretty sure you will get the same result, now that I think
it's related to expire_nodest_conn.

I also have found out that there is a 60 second timeout in tomcat cluster
to declare a node dead. I am currently checking to see how to change this
to 2 seconds.
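For anyone tuning the same thing: in Tomcat 5.x clustering the dead-node window is governed by the multicast membership heartbeat settings in server.xml. A hedged sketch (attribute names taken from Tomcat 5.5's McastService; mcastDropTime is in milliseconds, so roughly 2000 for a 2-second window, and the exact names should be checked against your Tomcat version):

```
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
  <!-- mcastFrequency: heartbeat interval; mcastDropTime: silence before a node is declared dead -->
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="228.0.0.4"
              mcastPort="45564"
              mcastFrequency="500"
              mcastDropTime="2000"/>
</Cluster>
```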

