Re: random SYN-drop function

To: Wensong Zhang <wensong@xxxxxxxxxxxx>
Subject: Re: random SYN-drop function
Cc: Ratz <ratz@xxxxxx>, lvs-users@xxxxxxxxxxxxxxxxxxxxxx, cluster-list@xxxxxxxxxx
From: Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 16 Mar 2000 08:09:39 +0200 (EET)
        Hello,

On Thu, 16 Mar 2000, Wensong Zhang wrote:

> On Wed, 15 Mar 2000, Ratz wrote:
> 
> > I can't see the point of your new ip_vs_random_drop_syn function.
> > In which situation would such a function be important? What exactly
> > has to happen for you to want to drop a connection?
> > I mean, standard SYN flooding (without ISN prediction) is IMHO not
> > possible against a 2.2.14 kernel unless you set
> > /proc/sys/net/ipv4/tcp_max_syn_backlog to too high a value.
> > 
> > Please, could you enlighten me once more?
> > 
> 
> Yeah, syncookies in the 2.2.x kernel can help TCP connections survive
> a SYN flooding attack, but they work at the TCP layer. IPVS, however,
> works at the IP layer, where each entry (marking connection state)
> needs 128 bytes of effective memory. Random SYN-drop randomly drops
> some SYN entries before the box runs out of memory. It may help the
> IPVS box survive even a big distributed SYN-flooding attack, but the
> real servers still need syncookies set up to protect themselves from
> SYN flooding.
> 
> > 
> > BTW: What are the plans for transferring the ip_vs_masq_table from
> > one kernel to another in case of a failover of the load balancer?
> > Is there already some idea or whatever?
> > 
> 
> I just thought of an idea for transferring the state table; it might
> be good. We run a SendingState and a ReceivingState kernel_thread
> (daemons inside the kernel, like kflushd and kswapd) on the primary
> IPVS box and the backup respectively. Every time the primary handles
> packets, it puts the state changes into a sending queue. The
> SendingState kernel_thread wakes up every HZ or HZ/2 ticks, sends the
> changes in the queue to the ReceivingState kernel_thread in UDP
> packets, and finally clears the queue. The ReceivingState thread
> receives the packets and updates its own state table.
> 
> Since it is all inside the kernel, it should be efficient, because
> the switching overhead between kernel and user space (both for the
> UDP communication and for reading and writing those entries) is
> avoided.
> 
> Any comments?
> 
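        A rough sketch of what such a sync message and sender loop
might look like (everything here, from the struct layout to helper
names like sync_queue_get() and sync_udp_send(), is an illustrative
assumption, not code from any patch):

    /* One connection state change, as it might travel inside the
     * UDP sync packet from the primary to the backup. */
    struct sync_msg {
            __u32 caddr;            /* client address */
            __u32 vaddr;            /* virtual service address */
            __u32 daddr;            /* real server address */
            __u16 cport, vport, dport;
            __u16 state;            /* new connection state */
    };

    /* SendingState kernel thread body: wake up every HZ/2 ticks,
     * flush the queued state changes to the backup, sleep again. */
    static int sync_send_thread(void *unused)
    {
            struct sync_msg *m;

            for (;;) {
                    while ((m = sync_queue_get()) != NULL) {
                            sync_udp_send(m);       /* assumed helper */
                            kfree(m);
                    }
                    current->state = TASK_INTERRUPTIBLE;
                    schedule_timeout(HZ / 2);
            }
            return 0;
    }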

        I see another problem. The OUTPUT state table looks very
wrong to me. The SR state check looks incorrect in the context of
VS/NAT mode. If the SYN packet is forwarded from the Director to
the Real server, the Real server answers immediately with a SYN
cookie (SYN+ACK) and the state is changed to ES. So, under a SYN
flood with SYN cookies enabled we have ES states, not SR states.
Maybe the state table is wrong, but it is not patched by LVS. Is
the OUTPUT table correct? The change:

OUTPUT     SYN
        SR -> ES
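
        Written as a transition array (a sketch only; the state
names and the indexing are assumptions, not the actual table
layout):

    enum vs_state { VS_NONE, VS_SR, VS_ES };

    /* next = output_syn[current]: on an outgoing SYN (the real
     * server's SYN+ACK cookie) the entry jumps from SR to ES. */
    static const enum vs_state output_syn[] = {
            [VS_NONE] = VS_NONE,
            [VS_SR]   = VS_ES,      /* the SR -> ES change above */
    };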

        When the SYN cookie (SYN+ACK) is sent we switch to ES, so
an entry stays in SR only for the very short interval after the
initial SYN, and ip_vs_random_drop_syn() can't find many SR
entries to drop.

        This can be avoided if the SYN packets are really dropped
before being forwarded to the Real server, instead of dropping the
entries after they have already been created.

        Maybe we can make sltimer_handler() set the drop rate this
way:

0 - don't drop (normal operation)
1 - drop every SYN (maybe until the next second)
2 - drop 1 of every 2
3 - drop 1 of every 3
n - drop 1 of every n

        We can use a simple counter: each time it reaches zero we
drop a SYN frame and reload the counter.
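
        A rough sketch of that counter (the names todrop_rate and
todrop_counter are illustrative assumptions):

    static int todrop_rate;         /* set by sltimer_handler() */
    static int todrop_counter;

    /* Return 1 when this packet should be dropped: with rate n we
     * drop one of every n packets; rate 0 means never drop. */
    static inline int todrop_entry(void)
    {
            if (todrop_rate == 0)
                    return 0;               /* normal operation */
            if (--todrop_counter > 0)
                    return 0;               /* keep this one */
            todrop_counter = todrop_rate;   /* zeroed: reload... */
            return 1;                       /* ...and drop */
    }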

        What about also dropping UDP packets at this rate, before
the entry is created?

        Something like the sketch below, where the drop is
triggered by a non-zero rate. I haven't tested how the current
implementation behaves under load.
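
        For example, at the point where a new entry would otherwise
be created for an incoming SYN or a first UDP packet (the call site
and names are assumptions):

    if (todrop_entry()) {
            kfree_skb(skb);         /* drop before any entry is made */
            return 0;
    }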


Regards

--
Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
