Re: LVS talk at LinuxExpo

To: Wensong Zhang <wensong@xxxxxxxxxxxx>
Subject: Re: LVS talk at LinuxExpo
Cc: linux-virtualserver@xxxxxxxxxxxx
From: Lars Marowsky-Bree <lmb@xxxxxxxxx>
Date: Thu, 27 May 1999 07:52:10 +0200
On 1999-05-27T09:08:09,
   Wensong Zhang <wensong@xxxxxxxxxxxx> said:

> Yes, the request packet must go through the load balancer. However,
> for Internet services such as http, the request packets are usually small
> and the response packets are big. So there is a slight advantage there;
> another advantage is that servers can be load balanced/shared and server
> failure can be masked quickly.

They are small, but the RTT is usually prohibitive. And something about this
design gives me the yikes - this model only makes sense if both the client and
the servers have good (at least in terms of delay) connections to the
director, which is usually not the case.

Really, 3/DNS is very cool. I keep forgetting, but Cisco Distributed Director
does something like this too.

> The load average, memory and swap usage, the number of processes,
> and the response time of the service are collected on each server; that
> information and the previous weight are then combined into a new weight
> (an estimate of the server's processing capacity), which is passed to the
> LinuxDirector over the heartbeat channel periodically.

Yeah, I decided to go with the simpler "just the loadavg" approach, since I
figured that a low mem situation would almost certainly yield a higher loadavg
and thus a lower weight.

I made a mistake in my previous statement anyway: of course I don't mean
"loadavg x 100" as the weight, since that would actually favor the busier
servers; "1000 / loadavg" makes more sense ;-)
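
To make that concrete, here is a quick userspace sketch of the
"1000 / loadavg" idea (not real LVS or monitoring code; the error handling
and clamping values are just made up):

    /* Sketch only: derive a server weight from the 1-minute load
     * average as weight = 1000 / loadavg, so busier servers end up
     * with lower weights. */
    #include <stdio.h>

    static int weight_from_loadavg(void)
    {
        double loadavg = 1.0;               /* neutral default on error */
        FILE *fp = fopen("/proc/loadavg", "r");

        if (fp) {
            if (fscanf(fp, "%lf", &loadavg) != 1)
                loadavg = 1.0;
            fclose(fp);
        }
        if (loadavg < 0.01)
            loadavg = 0.01;                 /* avoid divide-by-zero / huge weights */
        return (int)(1000.0 / loadavg);
    }

    int main(void)
    {
        printf("weight = %d\n", weight_from_loadavg());
        return 0;
    }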

And, since I am building a more general load balancer, I would try to poll
this information from the clients via SNMP instead of having them actively
report to the server - but that's a minor detail.

> check whether the difference between the previous and current weights
> exceeds a threshold; if so, pass the current weights to the kernel,
> otherwise we don't interrupt its scheduling, because passing weights
> incurs some overhead.

That's a good idea, and not difficult to implement since it is userspace code
;-)
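
Something along these lines, I suppose (set_server_weight() is only a
placeholder for however the weights actually get handed to the kernel, and
the threshold is an arbitrary example value):

    /* Sketch of the threshold idea in userspace: only push a new weight
     * down when it differs enough from the one the kernel already has. */
    #include <stdlib.h>

    #define WEIGHT_THRESHOLD 50     /* example value, would need tuning */

    /* Placeholder for the real mechanism (ipvsadm, /proc, ...). */
    extern void set_server_weight(int server, int weight);

    void maybe_update_weight(int server, int *last_weight, int new_weight)
    {
        if (abs(new_weight - *last_weight) > WEIGHT_THRESHOLD) {
            set_server_weight(server, new_weight);
            *last_weight = new_weight;
        }
        /* otherwise skip the update and save the overhead of
         * interrupting the kernel's scheduling */
    }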

> By the way, I will add the weighted round-robin scheduling into
> the VS patch for kernel 2.2 soon.

Good! Looking forward to it. Can I test it like, tomorrow? ;-)
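
Just so we are talking about the same thing, this is the kind of weighted
round-robin selection I have in mind (a textbook sketch, not the actual VS
patch code):

    /* Textbook weighted round-robin pick: servers with higher weights are
     * selected proportionally more often. Call with *cur initialised to -1
     * and *cw to 0; max_weight is the largest weight in the list. */
    struct server {
        int weight;     /* e.g. the 1000/loadavg value from above */
    };

    int wrr_next(const struct server *srv, int n, int *cur, int *cw,
                 int max_weight)
    {
        if (max_weight <= 0)
            return -1;              /* no usable server */

        for (;;) {
            *cur = (*cur + 1) % n;
            if (*cur == 0) {
                *cw -= 1;           /* stepping by the gcd of the weights
                                       would be faster, 1 keeps it simple */
                if (*cw <= 0)
                    *cw = max_weight;
            }
            if (srv[*cur].weight >= *cw)
                return *cur;
        }
    }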


Sincerely,
    Lars Marowsky-Brée
        
--
Lars Marowsky-Brée
Network Management

teuto.net Netzdienste GmbH - DPN Verbund-Partner
