Re: [lvs-users] New to LVS

To: Wensong Zhang <wensong@xxxxxxxxxxxx>
Subject: Re: [lvs-users] New to LVS
Cc: Doug Bagley <doug@xxxxxxxx>, lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
Date: Wed, 16 Feb 2000 08:44:50 +0200 (EET)
        Hello,

On Wed, 16 Feb 2000, Wensong Zhang wrote:

> > It seems to me it would be useful in some cases to use the total number
> > of connections to a real server in the load balancing calculation, in
> > the case where the real server participates in servicing a number of
> > different VIPs.
> >
>
> Yeah, it is true. Sometimes we need a tradeoff between
> simplicity/performance and functionality. Let me think more about
> this, and probably about maximum-connection scheduling too. For a
> rather big server cluster, there may be a dedicated load balancer
> for web traffic and another load balancer for mail traffic; then the
> two load balancers may need to exchange status periodically, which
> is rather complicated.

        Yes, if a real server is used by two or more directors,
the "lc" method is useless.

>
> Actually, I just thought that dynamic weight adaption according to
> periodical load feedback of each server might solve all the above
> problems.

        From my experience with real servers serving web traffic,
the only useful parameters for the real server load are:

        - cpu idle time

                If you use real servers with equal CPUs (MHz),
                the cpu idle time in percent can be used.
                Otherwise the MHz must be included in
                an expression for the weight.

        - free ram

                Depending on the web load, the right expression
                must be used, combining the cpu idle time
                and the free ram.

        - free swap

                It is very bad if the web server is swapping.
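        To make the idea concrete, here is a minimal sketch of
a weight expression combining the three parameters above. The
formula, the coefficients, and the thresholds are all my own
assumptions for illustration; they are not anything prescribed
by LVS:

```python
# Illustrative only: a hypothetical weight expression built from
# cpu idle time, CPU MHz, free ram and free swap, as discussed above.
# All coefficients/thresholds are assumptions, not LVS requirements.

def real_server_weight(idle_pct, cpu_mhz, free_ram_mb, free_swap_mb):
    """Return an integer weight suitable for a director (e.g. ipvsadm -w)."""
    # Scale idle time by CPU speed so faster boxes attract more work.
    cpu_capacity = idle_pct / 100.0 * cpu_mhz
    # Penalize low free RAM; 256 MB as "enough" is an arbitrary assumption.
    ram_factor = min(free_ram_mb / 256.0, 1.0)
    # A server with no free swap left should get almost nothing.
    swap_factor = 0.1 if free_swap_mb == 0 else 1.0
    weight = int(cpu_capacity * ram_factor * swap_factor / 10)
    return max(weight, 1)   # keep at least 1 so the server stays usable
```

        For servers with equal CPUs the MHz term cancels out and
only the idle percentage matters, as noted above.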

        The easiest parameter to get, the load average, is
always < 5 here. So it can't be used for weights in this case.
Maybe for SMTP? The sendmail guys use only the load average
when evaluating the load :)

        So, the monitoring software must send these parameters
to all directors. Even now, each director uses these
weights to create connections proportionally. So it is
useful for these load parameters to be updated at short
intervals, and they must be averaged over that period.
It is very bad to use the current value of a parameter
to evaluate the weight in the director. For example, it
is much better to use something like "the average cpu
idle time over the last 10 seconds" and to broadcast
this value to the directors every 10 seconds. If the
cpu idle time is 0, the free ram must be used; it depends
on which resource hits zero first: the cpu idle time or
the free ram. The weight must be changed only slightly :)
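        The averaging and the "change the weight only slightly"
rule above can be sketched as follows. The window size and the
smoothing factor are assumptions picked for illustration:

```python
# Sketch of the agent-side averaging described above: keep the last
# N one-second cpu idle samples, report their average each interval,
# and move the weight only a fraction of the way toward the new
# target so it changes slightly each time. Window and alpha are
# arbitrary assumptions.

from collections import deque

class IdleAverager:
    def __init__(self, window=10):
        # Bounded deque: holds only the last `window` samples.
        self.samples = deque(maxlen=window)

    def add_sample(self, idle_pct):
        self.samples.append(idle_pct)

    def average(self):
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)

def smooth_weight(current, target, alpha=0.3):
    """Move the current weight a fraction `alpha` toward the target."""
    return int(round(current + alpha * (target - current)))
```

        The agent would call add_sample() once per second and
broadcast average() to the directors every 10 seconds; each
director then applies smooth_weight() instead of jumping straight
to the new value.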

        The "*lc" algorithms help for simple setups, e.g.
with one director and for some of the services, e.g. http
and https. It is difficult to use these schedulers even
for ftp and smtp. When the requests are very different, the
only valid information is the load on the real server.

        Another useful parameter is the network traffic (ftp).
But again, all these parameters must be used by the director
to build the weight, using a complex expression.
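        Once the director has built such a weight, it can be
applied with ipvsadm's edit command. A minimal sketch, with
placeholder VIP/RIP addresses; building the command as a list
keeps it inspectable without actually running ipvsadm:

```python
# Director-side sketch: apply a computed weight to a real server
# via `ipvsadm -e` (edit real server). The addresses below are
# placeholders, not anything from a real setup.

def ipvsadm_set_weight(vip, rip, weight):
    """Build the ipvsadm command that edits a real server's weight."""
    return ["ipvsadm", "-e", "-t", vip, "-r", rip, "-w", str(weight)]

# To actually apply it (requires root and a configured virtual service):
#   import subprocess
#   subprocess.run(ipvsadm_set_weight("10.0.0.1:80", "192.168.1.2:80", 12))
```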

        I think a complex weight for the real server
based on connection number (lc) is not useful, due to the
different load from each of the services. Maybe for
the "wlc" scheduling method? I know that users want
LVS to do everything, but load balancing is a very
complex job. If you handle web traffic you can be happy
with any of the current scheduling methods. I haven't tried
to balance ftp traffic, but I don't expect much help from the
*lc methods. The real server can be loaded, for example, if you
build a new Linux kernel while the server is in the cluster :)
It is very easy to switch to swap mode if your load is near 100%.

        But all this is theory and must be tried in
production :) We must be ready for surprises :)

Regards

--
Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
