Re: LVS talk at LinuxExpo

To: Lars Marowsky-Bree <lmb@xxxxxxxxx>
Subject: Re: LVS talk at LinuxExpo
Cc: Joseph Mack <mack@xxxxxxxxxxx>, linux-virtualserver@xxxxxxxxxxxx
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Thu, 27 May 1999 09:08:09 +0800

Lars Marowsky-Bree wrote:

> I just repeat my comment on the geographic director, maybe we can get some
> discussion here: It was proposed that the "tunneling" loadbalancing can be 
> used
> for this.
>
> You _really_ do not want to do that via tunneling, since the packets to the
> server would still have to go through your loadbalancer at all times, even
> though the replies go direct.
>

Yes, the request packets must go through the load balancer. However,
for Internet services such as http, the request packets are usually small
and the response packets are big, so the director only has to carry a small
fraction of the total traffic. Another advantage is that the servers can be
load balanced/shared and a server failure can be masked quickly.
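
As a rough illustration of how little of the traffic the director carries
(the request and response sizes below are only assumed, illustrative
figures, not measurements):

    # Illustrative figures only: a small HTTP request vs. a large response.
    request_bytes  = 500          # assumed size of a typical HTTP request
    response_bytes = 10 * 1024    # assumed size of a typical response

    director_share = request_bytes / float(request_bytes + response_bytes)
    print("director forwards about %.1f%% of the bytes"
          % (100 * director_share))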

However, the biggest problem is that it is hard for the director to select
which server responds fastest and is closest to the client when the servers
are geographically distributed.

>
> You want to extend bind to not only do round robin DNS, but to answer a query
> based on the load of the respective clusters and, even though this is slightly
> more interesting, proximity to the client. (The F5 3/DNS solution does this)
>
> As soon as we get load-informed load balancing (which is actually easy to do,
> as soon as someone implements a command to change the weights on the fly, and
> then you just poll for load-average (or some other sensible metric) every x
> seconds and set the weight to 100 x loadavg, which should actually work
> reasonably well), this is simple: Just do weighted round-robin in the DNS
> server and poll the clusters for avg(loadavg) every minute or so.
>
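
For reference, a minimal sketch of the polling loop described above, run on
each real server. The set_director_weight() function is a hypothetical hook
(no command to change weights on the fly exists yet), and here the weight is
made inversely proportional to the one-minute load average so that lightly
loaded servers receive more connections:

    import time

    def read_loadavg():
        # first field of /proc/loadavg is the 1-minute load average
        with open("/proc/loadavg") as f:
            return float(f.read().split()[0])

    def set_director_weight(server, weight):
        # hypothetical hook: would tell the director to use this weight
        print("server %s -> weight %d" % (server, weight))

    while True:
        load = read_loadavg()
        weight = int(100 / (load + 1))   # +1 avoids division by zero
        set_director_weight("real-server-1", weight)
        time.sleep(60)                   # poll every minute or so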

My design of load-informed load balancing goes further than that, but I
haven't had time to implement it yet. :(

The load average, memory and swap usage, the number of processes,
and the response time of the service are collected on each server.
That information is combined with the previous weight to compute a new
weight (an estimate of the server's processing capacity), which is
passed to the LinuxDirector over the heartbeat channel periodically.
The LinuxDirector checks whether the difference between the previous
and current weights exceeds a threshold; if so, it passes the current
weights to the kernel, otherwise it leaves the scheduling alone,
because passing weights incurs some overhead.
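
A rough sketch of that weight estimation follows; the particular formula,
the metric scaling and the threshold value are assumptions made for
illustration, not the actual numbers the LinuxDirector will use:

    THRESHOLD = 0.10    # assumed: only push a new weight if it moved >10%

    def estimate_weight(prev_weight, loadavg, mem_used, swap_used,
                        nprocs, response_time):
        # mem_used/swap_used as fractions of total, response_time in
        # seconds (assumed units); combine into a crude "busyness" figure
        busy = loadavg + mem_used + swap_used + nprocs / 100.0 + response_time
        capacity = 100.0 / (1.0 + busy)
        # smooth against the previous weight so it does not oscillate
        return 0.5 * prev_weight + 0.5 * capacity

    def maybe_update_kernel(prev_weight, new_weight, push_to_kernel):
        # director side: skip the kernel update for small changes, since
        # passing weights to the kernel has some overhead
        if abs(new_weight - prev_weight) > THRESHOLD * prev_weight:
            push_to_kernel(new_weight)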

By the way, I will add weighted round-robin scheduling to the VS patch
for kernel 2.2 soon.
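
For what it is worth, one common way to interleave weighted round-robin
looks like the sketch below (whether the VS patch will use exactly this
interleaving is not implied); with weights 3 and 1 it yields A A A B ...:

    from math import gcd
    from functools import reduce

    def wrr(servers):
        # servers: list of (name, weight) pairs with integer weights > 0
        weights = [w for _, w in servers]
        step = reduce(gcd, weights)     # step down by the gcd of the weights
        max_w = max(weights)
        cw, i = 0, -1
        while True:
            i = (i + 1) % len(servers)
            if i == 0:
                cw -= step
                if cw <= 0:
                    cw = max_w
            if weights[i] >= cw:
                yield servers[i][0]

    gen = wrr([("A", 3), ("B", 1)])
    print([next(gen) for _ in range(8)])   # ['A', 'A', 'A', 'B', ...]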

Wensong



