Re: maxconns per real server

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: maxconns per real server
Cc: Benoit Gaussen <ben@xxxxxxxxxx>
From: Roberto Nibali <ratz@xxxxxx>
Date: Tue, 02 Jul 2002 13:26:49 +0200
Hi,

>         The word "little" does not make the attackers happy.
> They waste their time for "big problems" :))) It seems you see
> where is the flaw in such limits applied. You are going to die
> on the first attack.

True, but I'm also not exposing an LVS directly to the Internet ;)
Most of our LVS deployments sit behind application-level firewalls
or at least behind a QoS packet filter. I've _never_ experienced
problems using my patch, only in the lab with your testlvs, and
that is not real life.

> > Is that a feature that may interest LVS people, and that may be included in
> > next LVS releases (in this form or other) ?
> 
>         Ratz has patch for this, from long time:
> 
> http://www.linuxvirtualserver.org/~ratz/

Exactly. And if I get 1-2 days of spare time I will forward-port
it to 2.4.x. It seems that there is little interest in this
feature, although all commercial load balancers have it.
 
>         I'm still not sure whether he persuaded himself about
> how efficiently can be applied such policy. It makes only the
> ipvsadm output happy about limiting the real servers. I'm still

Again, Julian, it is not compulsory. You can choose to have or
not have this feature at compilation time. I can also write a
text about the security implications so people selecting this
feature are aware of the problems. Since I use LVS in a somewhat
different framework than other people, this patch has helped me
fulfill highly important SLAs with customers who, had the
feature not been present, would have chosen a commercial,
proprietary solution.
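
(Just to illustrate what "at compilation time" means here; this
is a mock-up, not the real patch, and the config symbol is
invented:)

/* Mock-up only: CONFIG_IP_VS_SRV_THRESHOLDS is an invented
 * symbol, not necessarily what the real patch uses. The point
 * is that if you don't select the option, the whole check
 * compiles away and the fast path is untouched. */
#ifdef CONFIG_IP_VS_SRV_THRESHOLDS
static inline int rs_over_limit(unsigned int conns, unsigned int hi)
{
        return conns >= hi;     /* enforce the upper threshold */
}
#else
static inline int rs_over_limit(unsigned int conns, unsigned int hi)
{
        return 0;               /* feature compiled out: no limit */
}
#endif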

> not sure one can run it safely without worrying about problems.

No, you still have the problem of the service being flooded,
but while such an attack hits your LVS cluster badly, I'm sure
other services along the chain of events suffer serious
network-related damage too.

> Enter the QoS world, there are solutions there for such attacks.

That's what I do. A carefully selected queueing discipline with
a big enough bucket reduces the problem immensely.
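
(For the archives: the point of the "big enough bucket" is that
short bursts are absorbed instead of dropped, while sustained
floods still get policed. A toy token bucket in user-space C,
purely illustrative and not tied to any particular qdisc:)

/* Toy token-bucket rate limiter, purely illustrative (this is
 * not tc/TBF code). rate = tokens refilled per second,
 * burst = bucket depth. Initialize 'last' with clock_gettime()
 * before the first call. */
#include <stdbool.h>
#include <time.h>

struct tbucket {
    double tokens;          /* current fill level        */
    double rate;            /* tokens (packets) per sec  */
    double burst;           /* bucket depth (max tokens) */
    struct timespec last;   /* time of the last refill   */
};

static bool tb_allow(struct tbucket *tb)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double dt = (now.tv_sec - tb->last.tv_sec)
              + (now.tv_nsec - tb->last.tv_nsec) / 1e9;
    tb->last = now;

    tb->tokens += dt * tb->rate;    /* refill since last packet */
    if (tb->tokens > tb->burst)
        tb->tokens = tb->burst;     /* cap at bucket depth */

    if (tb->tokens < 1.0)
        return false;               /* police: drop or delay */
    tb->tokens -= 1.0;
    return true;                    /* forward the packet */
}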

> We can search the solution also in running agents in the real
> servers that can really tell us how hurt these connections, may

You're dead even before they get to send you that information :)

> be web1 does not care about 100conns/sec while web2 is full
> with 10conns/s. For this, you have to enter the cluster software
> world :)

Or you enter ratz' world. It works perfectly for me. And the
patch is about as non-intrusive as it can possibly be.

A last word to the author of the patch:
As you can see the difference in our approaches to address this
problem lies in the fact that you only specify a maxconns. This
however opens a new problem: The so called threshold bouncing
problem. Imagine having 4 RS with maxconns=1000 and this is just
what you finetuned after some tests. Now it can occur that the
connection rate will be around 4000 and that you will encounter
RS being quiesced, being put back in, being quiesced ... and so
on. This creates a very ugly load balance graph over time. My
approach indirectly includes yours. As you can see I've done a
high/low threshold limitation. This flattens the threshold
bouncing I was experiencing on most setups. Think about it ;)
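
(For readers following the thread, a minimal sketch of the idea
in plain C. This is not the actual kernel patch; all names here
are made up for illustration:)

/* With a single maxconns limit an RS flips between "quiesced"
 * and "active" every time the count crosses that one threshold.
 * With two thresholds it is quiesced at thresh_hi but only
 * reactivated once the count has drained below thresh_lo, so it
 * cannot bounce on every single connection. */
#include <stdbool.h>

struct rs_state {
    unsigned int conns;      /* current active connections      */
    unsigned int thresh_hi;  /* quiesce when conns reach this   */
    unsigned int thresh_lo;  /* reactivate when conns drop here */
    bool quiesced;           /* weight set to 0 in the scheduler */
};

static void rs_update(struct rs_state *rs)
{
    if (!rs->quiesced && rs->conns >= rs->thresh_hi)
        rs->quiesced = true;    /* stop handing out new conns   */
    else if (rs->quiesced && rs->conns <= rs->thresh_lo)
        rs->quiesced = false;   /* drained, take traffic again  */
    /* Between thresh_lo and thresh_hi nothing changes: that gap
     * is exactly what absorbs the oscillation you would get
     * around a single maxconns value. */
}

With thresh_lo well below thresh_hi, an RS that was just
quiesced stays out until it has actually drained, instead of
toggling at exactly 1000 connections.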

Best regards,
Roberto Nibali, ratz
-- 
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc

