To: Jean-francois Nadeau <jf.nadeau@xxxxxxxxxxxx>
Subject: Re: [lvs-users] activeconns & inactconns variables
Cc: Ty Beede <tybeede@xxxxxxxxxxxxx>, Wensong Zhang <wensong@xxxxxxxxxxxx>, lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 18 Feb 2000 08:13:49 +0200 (EET)
        Hello,

On Thu, 17 Feb 2000, Jean-francois Nadeau wrote:

> Done some testing (netmon) on this and here are my observations:
>
> 1. A connection becomes active when LVS sees the ACK flag in the TCP header
> coming into the cluster, i.e. when the socket gets established on the real
> server.
>
> 2. A connection becomes inactive when LVS sees the ACK-FIN flag in the TCP
> header coming into the cluster. This does NOT correspond to the socket
> closing on the real server.
>
> Example with my Apache Web server.
>
> Client   <--> Server
>
> A client requests an object from the web server on port 80:
>
> SYN REQUEST     ----->
> SYN ACK         <-----
> ACK             ----->  *** ActiveConn=1 and 1 ESTABLISHED socket
> on the real server.
> HTTP GET        ----->  *** The client requests the object
> HTTP response   <-----  *** The server sends the object
> APACHE closes the socket: *** ActiveConn=1 and 0 ESTABLISHED sockets on the
> real server
> The CLIENT receives the object. (took 15 seconds in my test)
> ACK-FIN         ----->  *** ActiveConn=0 and 0 ESTABLISHED sockets on the
> real server
>
>
> Conclusion: ActiveConn is the number of active CLIENT connections, not
> connections on the server, in the case of short transmissions like objects
> on a web page. It's hard to calculate a server's capacity from this,
> because slower clients make ActiveConn greater than what the server is
> really processing.

        In the LVS mailing list many people have explained that the
correct way to balance the connections is to use monitoring
software. The weights must be evaluated using values from the real
server. In VS/DR and VS/TUN the Director can easily be fooled with
invalid packets for some period, and this can be enough to unbalance
the cluster when using the "*lc" schedulers.
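
        A minimal sketch of that idea, assuming shell access from the
Director to the real servers (the addresses, the rsh probe and the
load-to-weight formula below are invented for illustration, not part
of LVS):

    #!/bin/sh
    # Re-evaluate the LVS weights from the real servers' load averages.
    VIP=192.168.1.100:80
    while true; do
        for RIP in 10.0.0.2 10.0.0.3; do
            # first field of /proc/loadavg = 1-minute load average
            LOAD=`rsh $RIP cat /proc/loadavg | cut -d' ' -f1`
            # crude mapping: weight 10 when idle, never below 1
            WEIGHT=`echo $LOAD | \
                awk '{w = int(10 - 2 * $1); if (w < 1) w = 1; print w}'`
            ipvsadm -e -t $VIP -r $RIP:80 -w $WEIGHT
        done
        sleep 10
    done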

> You won't be able to reproduce that effect on a LAN, because the client
> receives the segments too fast.
>
> I reproduced the effect by connecting at 9600 bps and getting a 100k GIF
> from Apache, while monitoring established sockets on port 80 on the real
> server and ipvsadm on the cluster.
>
>
> Hope this clarifies ActiveConn and InactiveConn a bit....
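
        For reference, the effect described above can be watched
side by side with standard tools (the port and addresses involved
are just an example):

    # on the Director: watch the ActiveConn/InActConn columns
    watch -n 1 'ipvsadm -L -n'

    # on the real server: count ESTABLISHED sockets on port 80
    netstat -tn | grep ':80 ' | grep -c ESTABLISHED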

        You are probably using VS/DR or VS/TUN in your test.
Right? With these methods the LVS changes the TCP state
based on the incoming packets only, i.e. those from the clients.
This is the reason the Director can't see the FIN packet from
the real server, and the reason the LVS can easily be SYN
flooded, or even flooded with an ACK following the SYN packet.
The LVS can't change the TCP state according to the state
in the real server; that is possible only in VS/NAT mode.
So, in some situations you can have invalid entries in
ESTABLISHED state that don't correspond to connections in
the real server, which effectively ignores such SYN packets
by using cookies. VS/NAT looks like the better solution
against SYN flood attacks. Of course, the ESTABLISHED timeout
can be changed, to 5 minutes for example. Currently, the
max timeout interval (excluding the ESTABLISHED state) is
2 minutes. If you think you can serve the clients with a
smaller timeout for the ESTABLISHED state while under an
"ACK after SYN" attack, you can change it with ipchains.
You don't need to change it below 2 minutes in LVS 0.9.7.
In the latest LVS version SYN+FIN switches the state to TW,
which can't be controlled using ipchains. In other cases you
can change the timeouts for the ESTABLISHED and FIN-WAIT
states, but only down to 1 minute. If that doesn't help,
buy 2GB of RAM or more for the Director.
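
        For example, with the 2.2 ipchains masquerading interface the
three values are the tcp, tcpfin and udp timeouts in seconds, and a
value of 0 leaves that timeout unchanged (the 5-minute figure is just
the example from above):

    # ESTABLISHED to 5 minutes; FIN-WAIT and UDP unchanged
    ipchains -M -S 300 0 0

    # list the connection entries and their remaining timeouts
    ipchains -M -L -n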

        One thing that can be done, though it may be paranoia:

change the INPUT_ONLY table:

from:

           FIN
        SR ---> TW

to:

           FIN
        SR ---> FW


        OK, this is an incorrect interpretation of the TCP states,
but it is a hack that allows the minimum state timeout to be
1 minute. Using ipchains we can then set the timeout for all
TCP states to 1 minute.
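
        With the table changed this way, a setting like:

    # ESTABLISHED and FIN-WAIT both down to 1 minute, UDP unchanged
    ipchains -M -S 60 60 0

        then covers every TCP state with a 1-minute timeout.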


        If this is changed, you can now set the ESTABLISHED and
FIN-WAIT timeouts down to 1 minute. In the current LVS version
the minimum effective timeout for the ESTABLISHED and FIN-WAIT
states is 2 minutes.

Regards

--
Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>


----------------------------------------------------------------------
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx
