Re: [lvs-users] Problem with ghost connections

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: [lvs-users] Problem with ghost connections
From: Ruben Laban <r.laban@xxxxxx>
Date: Fri, 9 Nov 2007 14:35:32 +0100
Hello list,

No ideas yet on the issue stated below?

I'll try to reduce my question to a rather simple one:

What could cause entries in the IPVS connection table to remain in the 
ESTABLISHED (or any other) state even though no traffic is flowing between 
the CIP, VIP and RIP?
We're using persistence, by the way. Though from what I've read, the 
persistence timeout is only applied once, when a new connection arrives.
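For reference, this is how the relevant timeouts can be inspected and changed with ipvsadm; the service address is taken from the output quoted below, and the timeout values are purely illustrative, not a recommendation:

```shell
# Show the global IPVS connection timeouts (TCP, TCP-after-FIN, UDP):
ipvsadm -L --timeout

# Show the configured services; a persistence timeout appears per service:
ipvsadm -L -n

# Example only: edit a virtual service to use a 300 s persistence timeout
# (213.247.48.203:80 is from the output below; scheduler and -p value are
# illustrative assumptions):
ipvsadm -E -t 213.247.48.203:80 -s wlc -p 300
```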

Regards,
Ruben

On Tuesday 06 November 2007, Ruben Laban wrote:
> Hello list,
>
> I'm having some odd issues with our loadbalancers. After a longer (I don't
> have exact numbers) period of uninterrupted uptime, the connection-expiry
> timers for our various real/virtual servers seem to get 'stuck'. The
> loadbalancer is running SUSE Linux Enterprise Server 9 SP3:
> # uname -a
> Linux ismlnx-lb06 2.6.5-7.286-smp #1 SMP Thu May 31 10:12:58 UTC 2007
> x86_64 x86_64 x86_64 GNU/Linux
> # ipvsadm -v
> ipvsadm v1.24 2003/06/07 (compiled with getopt_long and IPVS v1.2.0)
>
> When I just checked the output of ipvsadm -L -n, I noticed that my
> workstation's IP address was listed several times, even though I hadn't
> accessed any of our loadbalanced sites in a while. Here are some details on
> what's happening. I verified that there was no traffic flowing between my
> workstation and the loadbalancer. The expiry timers seem to get reset when
> they hit 0, which shouldn't be the case, I'd say. Is this a known
> problem/bug/configuration issue? I'm using heartbeat+ldirectord (version
> 1.2.3) to control our loadbalancer configurations.
>
> # while true ; do ipvsadm -L -nc | grep 10.0.0.66 ; echo ; echo === ; echo ; sleep 10 ; done
> TCP 00:36  FIN_WAIT    10.0.0.66:1929     213.247.48.203:80  127.0.0.1:80
> TCP 00:50  FIN_WAIT    10.0.0.66:1991     213.247.48.203:80  127.0.0.1:80
> TCP 00:05  FIN_WAIT    10.0.0.66:1964     213.247.48.203:80  127.0.0.1:80
> IP  00:46  ERR!        10.0.0.66:0        0.0.0.203:0        213.247.48.25:0
> IP  00:33  ERR!        10.0.0.66:0        0.0.0.203:0        127.0.0.1:0
> TCP 00:05  ESTABLISHED 10.0.0.66:2443     213.247.48.203:80  213.247.48.25:80
>
> ===
>
> TCP 00:23  FIN_WAIT    10.0.0.66:1929     213.247.48.203:80  127.0.0.1:80
> TCP 00:38  FIN_WAIT    10.0.0.66:1991     213.247.48.203:80  127.0.0.1:80
> TCP 00:52  FIN_WAIT    10.0.0.66:1964     213.247.48.203:80  127.0.0.1:80
> IP  00:33  ERR!        10.0.0.66:0        0.0.0.203:0        213.247.48.25:0
> IP  00:20  ERR!        10.0.0.66:0        0.0.0.203:0        127.0.0.1:0
> TCP 00:52  ESTABLISHED 10.0.0.66:2443     213.247.48.203:80  213.247.48.25:80
>
> ===
>
> TCP 00:10  FIN_WAIT    10.0.0.66:1929     213.247.48.203:80  127.0.0.1:80
> TCP 00:25  FIN_WAIT    10.0.0.66:1991     213.247.48.203:80  127.0.0.1:80
> TCP 00:39  FIN_WAIT    10.0.0.66:1964     213.247.48.203:80  127.0.0.1:80
> IP  00:20  ERR!        10.0.0.66:0        0.0.0.203:0        213.247.48.25:0
> IP  00:07  ERR!        10.0.0.66:0        0.0.0.203:0        127.0.0.1:0
> TCP 00:40  ESTABLISHED 10.0.0.66:2443     213.247.48.203:80  213.247.48.25:80
>
> ===
>
> TCP 00:57  FIN_WAIT    10.0.0.66:1929     213.247.48.203:80  127.0.0.1:80
> TCP 00:12  FIN_WAIT    10.0.0.66:1991     213.247.48.203:80  127.0.0.1:80
> TCP 00:26  FIN_WAIT    10.0.0.66:1964     213.247.48.203:80  127.0.0.1:80
> IP  00:07  ERR!        10.0.0.66:0        0.0.0.203:0        213.247.48.25:0
> IP  00:54  ERR!        10.0.0.66:0        0.0.0.203:0        127.0.0.1:0
> TCP 00:27  ESTABLISHED 10.0.0.66:2443     213.247.48.203:80  213.247.48.25:80
>
> Extra info: we do have FWM 203 configured (pointing to 213.247.48.24 and
> 213.247.48.25), but (for quite a while now) not for 213.247.48.203.
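To spot candidate ghost entries programmatically, one could parse the `ipvsadm -L -nc` output quoted above. A minimal sketch; the six-column layout (proto, expire, state, client, virtual, destination) is assumed from that output, and "suspicious" here simply means "still ESTABLISHED for a client you believe is idle" -- the actual traffic check is up to you:

```python
# Parse `ipvsadm -L -nc`-style connection entries and flag ones that are
# still ESTABLISHED for a given (supposedly idle) client address.
from collections import namedtuple

Conn = namedtuple("Conn", "proto expire state client vip dest")

def parse_ipvs_conns(text):
    """Parse connection-entry lines; skip headers and malformed lines."""
    conns = []
    for line in text.splitlines():
        parts = line.split()
        # An entry has 6 columns and an MM:SS expiry in the second one.
        if len(parts) == 6 and ":" in parts[1]:
            conns.append(Conn(*parts))
    return conns

def suspicious(conns, client_ip):
    """Entries for one client still marked ESTABLISHED (candidate ghosts)."""
    return [c for c in conns
            if c.client.startswith(client_ip + ":") and c.state == "ESTABLISHED"]

# Sample lines taken from the output quoted above:
sample = """\
TCP 00:36  FIN_WAIT    10.0.0.66:1929     213.247.48.203:80  127.0.0.1:80
TCP 00:05  ESTABLISHED 10.0.0.66:2443     213.247.48.203:80  213.247.48.25:80
"""

conns = parse_ipvs_conns(sample)
print(len(conns))                              # -> 2
print(suspicious(conns, "10.0.0.66")[0].dest)  # -> 213.247.48.25:80
```

In a real monitoring loop one would feed this the live output of `ipvsadm -L -nc` every few seconds and alert when the same ESTABLISHED entry survives several successive timer wrap-arounds.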



