Re: HA-LVS DR ip_finish_output: bad unowned skb

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: HA-LVS DR ip_finish_output: bad unowned skb
Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxx>
Cc: OpenSSI Developers <ssic-linux-devel@xxxxxxxxxxxxxxxxxxxxx>
From: Horms <horms@xxxxxxxxxxxx>
Date: Mon, 5 Sep 2005 12:50:17 +0900

On Sun, Sep 04, 2005 at 10:13:16PM -0400, Roger Tsang wrote:
> Hey guys,
> 
> I'm running a streamed inline LVS-DR setup with the "sed" scheduler,
> where the directors are also realservers themselves. Incoming traffic
> goes only to the director that holds the VIP, so the other director is
> passive (for failover). This has worked wonderfully in kernel-2.4.
> However, with kernel-2.6's new ipvs code, I see that the passive
> director is also trying to LVS-DR route already-loadbalanced packets
> received on its internal (eth1) interface.
> 
> This happens if its own weight is lower than that of the other
> realservers. If you look at the passive director's stack (below), it is
> trying to loadbalance packets "back to" the active director with the
> higher weight.
> 
> Depending on how the packets are assigned by weight, some of them then
> bounce back and forth between the active and passive directors in a
> loop, essentially causing a DoS of the entire cluster.
> 
> A very brief code comparison between kernel-2.6.13's and kernel-2.4.22's
> ip_vs_in() functions shows that you guys removed the
> !sysctl_ip_vs_loadbalance if-test that immediately accepts packets. How
> would streamed inline loadbalancing work without that test?
> 
> /*
>  * Accept the packet if /proc/sys/net/ipv4/vs/loadbalancing
>  * is 1
>  */
> if (!sysctl_ip_vs_loadbalance) {
>         return NF_ACCEPT;
> }
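
For context, a minimal ipvsadm sketch of the kind of inline LVS-DR setup
described above might look like this on either director (each lists
itself and its peer as realservers; the VIP, addresses, ports and weights
are purely illustrative):

    # Inline LVS-DR service with the "sed" scheduler; addresses hypothetical.
    # 192.168.0.100 is the VIP, 192.168.0.1 this director, 192.168.0.2 its peer.
    ipvsadm -A -t 192.168.0.100:80 -s sed
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.1:80 -g -w 1   # local realserver
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.2:80 -g -w 2   # peer director

On the passive director, whose own weight is the lower one, entries like
these are what 2.6 uses to re-schedule the traffic arriving on eth1.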

I would have thought the best approach would be to
clear the LVS table on the stand-by director. Wouldn't
the proc value above need to be twiddled when the
stand-by director becomes the active director anyway?
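
A rough sketch of what that suggestion could look like in a
heartbeat/keepalived-style failover hook, reusing the hypothetical
addresses from the sketch above (ipvsadm -C flushes the entire virtual
server table; the script interface here is invented for illustration):

    #!/bin/sh
    # Hypothetical failover hook; called with "active" or "standby".
    VIP=192.168.0.100
    SELF=192.168.0.1
    PEER=192.168.0.2

    case "$1" in
    standby)
        # Flush the IPVS table so the stand-by director has no rules left
        # with which to re-route already-loadbalanced packets from eth1.
        ipvsadm -C
        ;;
    active)
        # Rebuild the inline LVS-DR service with the "sed" scheduler.
        ipvsadm -A -t $VIP:80 -s sed
        ipvsadm -a -t $VIP:80 -r $SELF:80 -g -w 1
        ipvsadm -a -t $VIP:80 -r $PEER:80 -g -w 2
        # On a kernel carrying the 2.4-era patch, the loadbalance sysctl
        # quoted above would also be set here for the active role.
        ;;
    esac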

-- 
Horms
