Re: random SYN-drop function

To: Ratz <ratz@xxxxxx>
Subject: Re: random SYN-drop function
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx, cluster-list@xxxxxxxxxx
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Thu, 16 Mar 2000 10:07:47 +0800 (CST)


On Wed, 15 Mar 2000, Ratz wrote:

> I cannot see the point of your new ip_vs_random_drop_syn function.
> At which point could such a function be important? Or what exactly has
> to happen for you to want to drop a connection?
> I mean, standard (without ISN prediction) SYN flooding is IMHO not
> possible against a 2.2.14 kernel unless you set
> /proc/sys/net/ipv4/tcp_max_syn_backlog to too high a value.
> 
> Please, could you enlighten me once more?
> 

Yes, syncookies in the 2.2.x kernel can help protect TCP connections from
SYN-flooding attacks, but they work at the TCP layer. IPVS, however, works
at the IP layer, and each entry (marking connection state) needs 128 bytes
of memory. Random SYN-drop randomly drops some SYN entries before the box
runs out of memory. It may help the IPVS box survive even a large
distributed SYN-flooding attack, but the real servers still need to enable
syncookies to protect themselves from SYN flooding.
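
To illustrate the idea (this is only a rough sketch, not the actual
ip_vs_random_drop_syn code; the table limit, the threshold and the use of
rand() are invented for the example), the drop decision could look
something like this:

    #include <stdlib.h>   /* rand() stands in for the kernel's PRNG here */

    #define CONN_TAB_MAX          65536   /* assumed entry limit, for illustration */
    #define DROP_START_THRESHOLD  (CONN_TAB_MAX * 3 / 4)

    static unsigned long conn_count;      /* current number of 128-byte entries */

    /* Return 1 if this SYN should be dropped instead of creating an entry. */
    static int syn_should_drop(void)
    {
            unsigned long n = conn_count;

            if (n < DROP_START_THRESHOLD)
                    return 0;             /* plenty of room: never drop */

            /* Drop probability rises linearly from 0 to 1 as the table fills. */
            return (unsigned long)(rand() % (CONN_TAB_MAX - DROP_START_THRESHOLD))
                   < (n - DROP_START_THRESHOLD);
    }

On the real servers, syncookies are turned on with
"echo 1 > /proc/sys/net/ipv4/tcp_syncookies" (if the kernel was built with
CONFIG_SYN_COOKIES).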

> 
> BTW: What are the plans for transferring the ip_vs_masq_table from one
> kernel to another in case of a failover of the load balancer? Is
> there already some idea about this?
> 

I just thought of an idea for transferring the state table; it might be
good. We run a SendingState and a ReceivingState kernel_thread (daemons
inside the kernel, like kflushd and kswapd) on the primary IPVS box and on
the backup respectively. Every time the primary handles packets, it puts
the resulting state changes into a sending queue. The SendingState
kernel_thread wakes up every HZ or HZ/2 ticks, sends the changes queued so
far to the ReceivingState kernel_thread in UDP packets, and finally clears
the queue. The ReceivingState thread receives the packets and applies the
changes to its own state table.

Since everything is inside the kernel, it should be efficient, because the
switching overhead between the kernel and user space (both for the UDP
communication and for reading and writing the entries) is avoided.
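
To make the flow concrete, here is a very rough user-space sketch of the
SendingState side (the real thing would be a kernel_thread using in-kernel
sockets and proper locking; the struct layout, the queue size and the port
number below are just placeholders):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    struct state_change {                 /* one queued connection-state change */
            unsigned int   caddr, vaddr, daddr;   /* client, virtual, real server */
            unsigned short cport, vport, dport;
            unsigned char  state;                 /* new state of the entry */
    };

    #define QUEUE_MAX 128
    static struct state_change queue[QUEUE_MAX];
    static int queue_len;

    /* Flush the queued changes to the backup in one UDP datagram per wakeup. */
    static void sending_state_loop(const char *backup_ip)
    {
            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            struct sockaddr_in to;

            memset(&to, 0, sizeof(to));
            to.sin_family = AF_INET;
            to.sin_port   = htons(8848);          /* arbitrary example port */
            inet_pton(AF_INET, backup_ip, &to.sin_addr);

            for (;;) {
                    sleep(1);                     /* stands in for "every HZ" */
                    if (queue_len > 0) {
                            sendto(sock, queue, queue_len * sizeof(queue[0]), 0,
                                   (struct sockaddr *)&to, sizeof(to));
                            queue_len = 0;        /* clear the queue */
                    }
            }
    }

The ReceivingState side would simply recvfrom() the same datagrams and
update the corresponding entries in its own table.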

Any comments?

Thanks,

Wensong


