Re: Suspecting bug when there is a lot of connections..

To: <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Suspecting bug when there is a lot of connections..
Cc: <herve@xxxxxxxxxxxx>
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Thu, 11 Oct 2001 20:52:21 +0800 (CST)
Hi,

I just ran some SYN-flood tests against the IPVS box. The number of inactive
connections quickly climbed over 200,000, but the system stayed almost idle.
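(For reference, a flood like this can be generated with a tool such as
hping2; this is a sketch only, assuming its -S/--flood/--rand-source
options, and not necessarily the exact command used here. Spoofed SYNs
never complete the handshake, so the entries pile up as InActConn:)

  hping2 -S -p 80 --flood --rand-source 172.26.20.118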

[root@koala ipvs]# ipvsadm -ln
IP Virtual Server version 0.9.4 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.26.20.118:80 wlc
  -> 172.26.20.114:80             Route   2      0          215767
TCP  172.26.20.118:10000 wlc
  -> 172.26.20.114:10000          Route   1      0          0
TCP  172.26.20.118:21 wlc persistent 360
  -> 172.26.20.114:21             Route   1      0          0
TCP  172.26.20.118:23 wlc
  -> 172.26.20.114:23             Route   1      0          0
[root@koala ipvs]# ipvsadm -ln
IP Virtual Server version 0.9.4 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.26.20.118:80 wlc
  -> 172.26.20.114:80             Route   2      0          221145
TCP  172.26.20.118:10000 wlc
  -> 172.26.20.114:10000          Route   1      0          0
TCP  172.26.20.118:21 wlc persistent 360
  -> 172.26.20.114:21             Route   1      0          0
TCP  172.26.20.118:23 wlc
  -> 172.26.20.114:23             Route   1      0          0
[root@koala ipvs]# ipvsadm -ln
IP Virtual Server version 0.9.4 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.26.20.118:80 wlc
  -> 172.26.20.114:80             Route   2      0          232129
TCP  172.26.20.118:10000 wlc
  -> 172.26.20.114:10000          Route   1      0          0
TCP  172.26.20.118:21 wlc persistent 360
  -> 172.26.20.114:21             Route   1      0          0
TCP  172.26.20.118:23 wlc
  -> 172.26.20.114:23             Route   1      0          0

[root@koala ipvs]# vmstat
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  0  0      0  49768   1760  31736   0   0    12     5 3044    16   2  95   3


31 processes: 29 sleeping, 2 running, 0 zombie, 0 stopped
CPU states:  0.0% user, 96.4% system,  0.0% nice,  3.6% idle
Mem:   127268K av,   77636K used,   49632K free,       0K shrd,    1760K buff
Swap:  136544K av,       0K used,  136544K free                   31736K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    3 root      20   0     0    0     0 SW   95.1  0.0  38:42 kapm-idled

(kapm-idled is the APM idle thread, so the "system" time it accumulates is
really idle time.)


The number of connections doesn't matter much, because the overhead of
hash-table collisions is quite low. I suspect there is something wrong
with your configuration or system; please check it.
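(Rough arithmetic from the first snapshot above: 215,767 connections spread
over a size=4096 table is 215767 / 4096, i.e. about 53 entries per hash
bucket on average, and walking a 53-entry chain per packet is cheap next to
the rest of the packet handling.)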

Regards,

Wensong



On 11 Oct 2001, Hervé Guehl wrote:

>
> Hi, I set up a small LVS test that simply forwards the HTTP port with an
> LVS-NAT entry. Then I used hammerhead (http://hammerhead.sourceforge.net/)
> to test how it behaves under heavy load.
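>
> (Roughly, the NAT entry was of this form; a sketch with an illustrative
> <VIP> placeholder and the real-server address from the log below, not my
> exact commands:)
>
>   ipvsadm -A -t <VIP>:80
>   ipvsadm -a -t <VIP>:80 -r 192.168.39.199:80 -m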
>
> My Linux LVS box hung (no kernel panic, but the machine froze; there were
> around 28000 inactive connections at the time).
> After rebooting, I could see in the log something that looks like a kind
> of "memory overflow"..
>
> Here's what I could see:
>
> Oct 11 00:08:51 tds101 keepalived[1428]: TCP connection to
> [192.168.39.199:80] success.
> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> ^@^@^@^@(3)  - end the application or thread (and invoke exit handlers)
> Tcl_DeleteExitHandler [Tcl_Finalize] (3)  - end the application or
> thread (and invoke exit handlers)
> Tcl_DeleteExitHandler [Tcl_FinalizeThread] (3)  - end the application or
> thread (and invoke exit handlers)
> Tcl_DeleteFileHandler (3)  - associate procedure callbacks with files or
> devices (Unix only)
> Tcl_DeleteFileHandler [CrtFileHdlr] (3)  - associate procedure callbacks
> with files or devices (Unix only)
> Tcl_DeleteFileHandler [Tcl_CreateFileHandler] (3)  - associate procedure
> callbacks with files or devices (Unix only)
> Tcl_DeleteHashEntry  (3)  - procedures to manage hash tables
> Tcl_DeleteHashEntry [Hash] (3)  - procedures to manage hash tables
>
> I tried this with LVS 0.8.1 and LVS 0.9.4; the kernel is 2.4.10.
>
> After recompiling the ip_vs modules with "#define CONFIG_IP_VS_TAB_BITS
> 16", this behaviour was harder to reproduce (I have only one
> "hammer" machine)..
>
> I don't know how LVS behaves when it reaches its maximum number of
> connections, but would it be possible to drop new connections in that case
> and log something to syslog (as iptables does)?
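>
> (For comparison, this is the kind of iptables behaviour I mean; an
> illustrative rule, not one from my setup:)
>
>   iptables -A INPUT -p tcp --syn -m limit --limit 5/s -j LOG --log-prefix "lvs-overflow: "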
>
> Another thing: it seems that packets unrelated to the IPVS connection
> tracking are dropped without any message. Would it be possible to log a
> message, or to let those packets pass through the machine?
>
> Regards
> Hervé
>
>
> _______________________________________________
> LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
> or go to http://www.in-addr.de/mailman/listinfo/lvs-users
>


