To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: RE: connections not expiring, kernel using over 400Megs
From: Mark de Vries <markdv.lvsuser@xxxxxxxxxx>
Date: Tue, 3 May 2005 22:32:41 +0200 (CEST)
On Tue, 3 May 2005, Graeme Fowler wrote:

> On Tue 03 May 2005 18:56:04 BST , Mark de Vries
> <markdv.lvsuser@xxxxxxxxxx> wrote:
> > ANY help welcome... Right now I'm rebooting boxes every 24-48 hours... not
> > exactly what I had in mind when I thought LVS would help in creating a
> > highly available service... :(
>
> Hrm... as Joe said, maybe this is UDP related; that would be rather
> unusual though.
>
> I have a three-node DNS cluster using DR running a custom-compiled
> 2.4.23 kernel, but it isn't getting the query rate you're stating. I
> have no problems with it whatsoever. For comparison can you do:

I'm on 2.6.10 (kernel.org)

> ipvsadm -Ln
> ipvsadm -Ln --rate
>
> sanitising the output to protect the innocent (or guilty!), obviously ;-)

I have 9 VIPs, each with the same 4 realservers, so that's a lot of lines...
I'll leave some of it out...

> [root@dns02 root]# ipvsadm -Ln
> IP Virtual Server version 1.0.10 (size=65536)
> Prot LocalAddress:Port Scheduler Flags
>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
> FWM  5 wlc
>   -> 192.168.80.4:53              Route   10     0          17287
>   -> 192.168.80.3:53              Route   10     0          17285
>   -> 192.168.80.2:53              Local   10     0          17286

Only the services with the highest InActConn counts:
  -> 10.31.66.181:53              Route   10     0          69327
  -> 10.31.66.180:53              Route   10     0          67688
  -> 10.31.66.179:53              Route   10     0          70264
  -> 10.31.66.178:53              Route   10     0          70084

  -> 10.31.66.181:53              Route   10     0          16394
  -> 10.31.66.180:53              Route   10     0          15003
  -> 10.31.66.179:53              Route   10     0          17054
  -> 10.31.66.178:53              Route   10     0          16129

  -> 10.31.66.181:53              Route   10     0          22597
  -> 10.31.66.180:53              Route   10     0          22259
  -> 10.31.66.179:53              Route   10     0          22664
  -> 10.31.66.178:53              Route   10     0          22653

  -> 10.31.66.181:53              Route   10     0          15913
  -> 10.31.66.180:53              Route   10     0          15967
  -> 10.31.66.179:53              Route   10     0          16100
  -> 10.31.66.178:53              Route   10     0          16228
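
For anyone comparing memory use rather than just entry counts, here's roughly
how I total these up and check where the entries end up. (I'm assuming the 2.6
IPVS code exposes the ip_vs_conn slab cache and /proc/net/ip_vs_conn; adjust if
your kernel differs.)

 # Sum the InActConn column across all the realserver lines
 ipvsadm -Ln | awk '/->/ { total += $6 } END { print total, "inactive entries" }'
 # One line per connection entry (plus a header line)
 wc -l /proc/net/ip_vs_conn
 # Object size and pages in use for the connection entries
 grep ip_vs_conn /proc/slabinfo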

> [root@dns02 root]# ipvsadm -Ln --rate
> IP Virtual Server version 1.0.10 (size=65536)
> Prot LocalAddress:Port                 CPS    InPPS   OutPPS    InBPS   OutBPS
>   -> RemoteAddress:Port
> FWM  5                                 127      312        0    22546        0
>   -> 192.168.80.4:53                    42      107        0     7811        0
>   -> 192.168.80.3:53                    40      101        0     7237        0
>   -> 192.168.80.2:53                    45      104        0     7498        0

Only the VIPs:

 ipvsadm -Ln --rate | grep UDP
UDP  :53                 98      244        0    16069        0
UDP  :53                266      724        0    46439        0
UDP  :53                  4       13        0      861        0
UDP  :53                309      705        0    45634        0
UDP  :53                460     1015        0    65995        0
UDP  :53                 24       93        0     6094        0
UDP  :53                  6       31        0     2090        0
UDP  :53                 15       25        0     1579        0
UDP  :53                 34       80        0     5458        0

I would say my rates are significantly higher than yours... and at peak hour
the total is another ~800 InPPS on top of that...
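
As a back-of-the-envelope check (assuming the stock 300 second UDP timeout,
which is what ipvsadm --set controls), the table should level off at roughly
total CPS times the UDP timeout:

 # ~1200 CPS total from the UDP lines above * 300s UDP timeout
 # => roughly 360k entries expected in the table at any one time
 ipvsadm -Ln --rate | grep UDP | awk '{ cps += $3 } END { print cps * 300, "entries expected" }'

If the entry count from ipvsadm -Ln sits far above that, the entries really
aren't being expired.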

> Have you fiddled with the bucket size, BTW?

I'm going to compile a new kernel tomorrow and raise the table size... Someone
else compiled this one and I see it's still at the default 4k buckets. I hope
that helps, but if it does I'd still call it a bug: a larger bucket size should
only make connection lookups more efficient (fewer CPU cycles), not change how
many entries are kept or when they expire.
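
For reference, this is the compile-time option I mean (the symbol name is from
the 2.6 IPVS Kconfig; 20 is just an example of a larger value, not a
recommendation):

 # .config fragment: 2^20 = ~1M hash buckets instead of the default 2^12 = 4096.
 # This only shortens the hash chains; it doesn't change how many connection
 # entries exist or when they expire.
 CONFIG_IP_VS_TAB_BITS=20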

Rgds,
Mark.
