Re: load balancing trouble at a high load

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: load balancing trouble at a high load
From: malcolm <lists@xxxxxxxxxx>
Date: Thu, 25 May 2006 20:25:23 +0100
I think I'm right in saying (and I'm only wrong 72.3% of the time)

Hideaki Kondo wrote:
(4) Then recover the NIC (eth0) of RS2 intentionally by running
    "/etc/init.d/network restart" by hand.
    After a while, LB1 starts sending http packets to both RS1 and RS2, even
    though RS2 still has weight 0. Moreover, LB1 sends far fewer packets to
    RS2 than to RS1.
    (This strange behavior continues indefinitely, so I don't think the cause
    lies only in TCP-layer retransmission. In fact, the strange behavior stops
    when I stop the high load from CL1.)
That's the default behavior of LVS:

expire_nodest_conn - BOOLEAN

        0 - disabled (default)
        not 0 - enabled

        The default value is 0: the load balancer will silently drop
        packets when their destination server is not available. This can
        be useful when a user-space monitoring program deletes the
        destination server (because of server overload or wrong
        detection) and adds the server back later, so that connections
        to the server can continue.

        If this feature is enabled, the load balancer will expire the
        connection immediately when a packet arrives and its
        destination server is not available; the client program will
        then be notified that the connection is closed. This is
        equivalent to the feature some people require to flush
        connections when their destination is not available.
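
If you do want those stale connections torn down as soon as the real server
disappears, you can enable the sysctl on the director. A minimal sketch,
assuming a 2.6-era kernel with the IPVS sysctls under net.ipv4.vs:

        # expire connections immediately when their destination is gone
        sysctl -w net.ipv4.vs.expire_nodest_conn=1

        # or, equivalently, through /proc
        echo 1 > /proc/sys/net/ipv4/vs/expire_nodest_conn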

(6) Then stop all of the high load (while_wget & while_ab) from CL1 and wait a
    few minutes until ActiveConns + InActiveConns drop close to 0.
    Then start a new high load from CL1 with while_wget & while_ab; LB1 now
    load-balances correctly and evenly to RS1 and RS2, just as in (1).

Because the connection template is now clean.
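
One way to check that the table really has drained before re-testing is to
watch ipvsadm on the director; a minimal sketch (addresses and ports are of
course whatever your own virtual service uses):

        # per-real-server ActiveConn / InActConn counters
        ipvsadm -L -n

        # the full connection (and template) table; wait for it to empty
        ipvsadm -L -c -n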
