
To: Roberto Nibali <ratz@xxxxxx>
Subject: Re: maxconns per real server
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx, Benoit Gaussen <ben@xxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Tue, 2 Jul 2002 16:37:16 +0300 (EEST)
        Hello,

On Tue, 2 Jul 2002, Roberto Nibali wrote:

> Again, Julian, it is not compulsory. You can choose to have or
> not have this feature at compilation time. I can also write a
> text about the security implications so people selecting this
> feature are aware of the problems. Since I use LVS in a bit of a
> different framework than other people, this patch has helped me
> fulfill highly important SLAs with customers who in turn
> (had the feature not been present) would have chosen
> a commercial, proprietary solution.

        I believe it is no more than a marketing trick, at least
for Layer 4 balancers. You know, we should always build setups
without ignoring security :)

> > We can also look for the solution in running agents on the real
> > servers that can really tell us how much these connections hurt; may
>
> You're dead even before they get to send you that information :)

        If such a big attack reaches LVS, then the threshold patch will
hold the servers down for a long time. We hope QoS ingress policing
will avoid such bad things, but by then it is too late: the counters
are already increased.
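
        To make the failure mode concrete, here is a minimal userspace
model of such a per-RS threshold check (not the actual patch code; the
field names conns and u_threshold are made up for illustration):

#include <stdio.h>

struct real_server {
    const char *name;
    unsigned int conns;        /* current connection counter */
    unsigned int u_threshold;  /* upper limit; 0 means unlimited */
};

/* May this server accept a new connection? */
static int server_available(const struct real_server *rs)
{
    return rs->u_threshold == 0 || rs->conns < rs->u_threshold;
}

int main(void)
{
    struct real_server web1 = { "web1", 0, 1000 };

    web1.conns = 50000;     /* a SYN flood inflates the counter */
    printf("%s available: %d\n", web1.name, server_available(&web1));
    return 0;
}

Until the bogus entries time out, server_available() keeps returning
0, so the server stays excluded long after the flood itself.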

> > be web1 does not care about 100 conns/s while web2 is full
> > with 10 conns/s. For this, you have to enter the cluster software
> > world :)
>
> Or you enter ratz' world. It works perfectly for me. And the
> patch is as non-intrusive as it can possibly be.

        Hm. As we know, it is hard to define specific QoS rules
for the real servers. Why not create this threshold for the
whole virtual service, something like drop_packet but per-service
and controlled dynamically (via setsockopt, as always)? In theory,
WLC should keep the connections equally loaded (according to the
weights). WLC guarantees that an overloaded server is not used (much).
It looks too pedantic to define hard limits for the real servers.
If we have too many connections to one virtual service, we apply
a drop rate (or maybe something like RED in the QoS world). If
we want the effect of the threshold patch, we stop accepting new
connections once they exceed the threshold. IMO, instead of
maintaining permanent thresholds for the real servers, we can
allow user space to maintain dynamic thresholds for the
virtual service. We can even bring the estimators into the game;
they already know the history of each service over the last seconds.
But we need something simple, useful for different cases.
If we play with the virtual service, then we don't need to touch
the schedulers regarding thresholds. The only case where one
real server can accept a relatively huge number of connections
is when persistence is used. That is the only place where I
see per-RS thresholds being useful. The only danger is dropping
traffic which could in fact be served. It looks like we are trying
to implement something which is missing in QoS Ingress (drop_packet
is another example, so maybe it is going to die). And this is only
the director's point of view: we see the impact only by analyzing
the packet rates, while the real servers have a different view.
Of course, it is nearly the same for plain web traffic.
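
        As a rough sketch of what I mean (a userspace model only; the
struct fields and their names are assumptions, not existing IPVS code;
the real values would be set from user space via setsockopt):

#include <stdio.h>
#include <stdlib.h>

struct virtual_service {
    unsigned int conns;      /* connections tracked for the service */
    unsigned int threshold;  /* dynamic per-service limit; 0 = none */
    unsigned int drop_rate;  /* drop 1 of every drop_rate new conns */
};

/* Decide whether a new connection to this service is accepted. */
static int accept_new_conn(struct virtual_service *svc)
{
    if (svc->threshold && svc->conns >= svc->threshold)
        return 0;   /* hard stop, the effect of the threshold patch */
    if (svc->drop_rate && rand() % svc->drop_rate == 0)
        return 0;   /* probabilistic drop, like drop_packet (or RED) */
    svc->conns++;
    return 1;
}

int main(void)
{
    struct virtual_service svc = { 0, 1000, 10 };
    unsigned int accepted = 0;

    for (int i = 0; i < 2000; i++)
        accepted += accept_new_conn(&svc);
    printf("accepted %u of 2000 attempts\n", accepted);
    return 0;
}

A user space daemon could then adjust threshold and drop_rate on the
fly, instead of pinning permanent limits on each real server.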

        Also, a long time ago we talked about a new scheduler playing
with the estimators. Can we implement something along those lines? It
should work for traffic-sensitive applications, e.g. FTP.
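
        One possible shape for it, again only a userspace sketch with
made-up fields (the real estimators track conns/s, packets/s and
bytes/s per service and destination):

#include <stddef.h>
#include <stdio.h>

struct dest {
    const char *name;
    unsigned long long inbps;  /* estimated inbound bytes/s */
    unsigned int weight;       /* 0 = do not use this server */
};

/* Pick the destination minimizing inbps/weight. */
static struct dest *estimator_schedule(struct dest *d, size_t n)
{
    struct dest *best = NULL;

    for (size_t i = 0; i < n; i++) {
        if (d[i].weight == 0)
            continue;
        /* compare without division: a/wa < b/wb  <=>  a*wb < b*wa */
        if (!best || d[i].inbps * best->weight <
                     best->inbps * (unsigned long long)d[i].weight)
            best = &d[i];
    }
    return best;
}

int main(void)
{
    struct dest servers[] = {
        { "web1", 900000, 1 },  /* busy with a large transfer */
        { "web2",  80000, 1 },  /* mostly idle: should be chosen */
    };

    printf("chosen: %s\n", estimator_schedule(servers, 2)->name);
    return 0;
}

The division-free comparison is the same trick WLC uses when it
weighs connection counts against the weights.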

> Best regards,
> Roberto Nibali, ratz

Regards

--
Julian Anastasov <ja@xxxxxx>


