Re: [lvs-users] Weighting by request address?

To: " users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] Weighting by request address?
From: "David Dyer-Bennet" <dd-b@xxxxxxxx>
Date: Fri, 25 Jul 2008 11:18:02 -0500 (CDT)
On Fri, July 25, 2008 10:15, Joseph Mack NA3T wrote:
> On Fri, 25 Jul 2008, David Dyer-Bennet wrote:
>> I'm looking at an application for LVS that requires scheduling that I'm
>> not sure is available.
> what you want isn't easily done (at least in the few minutes
> since I started looking at your posting) by load balancing
> at the network level, which is where LVS does its balancing.

So far as I can see all the information can be easily made available
(through configuration; I don't expect to automatically figure out which
services use how many threads).  The general idea of distributing load
applies the same way; I just want more individuality in each service.

Here's a model that does what I want (but isn't supported by any current
scheduler):
Each real server has a "resource value" representing how much work it can
do (I'd use something that was roughly cores * MHz, to make it easy to
scale performance of add-on systems later when the cluster wasn't
homogeneous).  (I'd probably multiply this by Finagle's constant k=2,
overcommitting everything 100%, to allow for startup and shutdown time
when it isn't really using the full resources).

Each virtual service has a "resource cost" representing how much work it
represents, in the same units.

For each real server, the scheduler keeps track of the resource cost of
the currently active connections.

When considering where to assign a new connection, the LB can assign it to
any server whose current connections' costs add up to less than its
resource value.  Round robin, or picking the server with the biggest
headroom, might each be desirable for different reasons, just as they are
with the current schedulers.  If there's no available server, refuse the
connection.  (The Finagle constant above would need to be tuned through
experience, to avoid refusing connections unless we're actually maxed
out.)
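To make the model above concrete, here's a minimal sketch of the
bookkeeping in Python.  This is illustration only -- a real LVS scheduler
would be a kernel module in C -- and every class and method name here is
invented, not part of any existing scheduler:

```python
class RealServer:
    """A real server with a configured resource value."""

    def __init__(self, name, resource_value, k=2.0):
        # resource_value is roughly cores * MHz; k is the Finagle
        # overcommit constant from the text (k=2 = 100% overcommit).
        self.name = name
        self.capacity = resource_value * k
        self.active_cost = 0.0  # summed cost of live connections


class ResourceScheduler:
    """Assigns connections by configured per-service resource cost."""

    def __init__(self, servers):
        self.servers = servers

    def assign(self, service_cost):
        # Eligible servers are those that can absorb this service's
        # cost without exceeding their (overcommitted) capacity.
        eligible = [s for s in self.servers
                    if s.active_cost + service_cost <= s.capacity]
        if not eligible:
            return None  # refuse the connection
        # "Biggest difference" variant: pick the most headroom.
        best = max(eligible, key=lambda s: s.capacity - s.active_cost)
        best.active_cost += service_cost
        return best

    def release(self, server, service_cost):
        # Called when a connection closes, returning its cost.
        server.active_cost -= service_cost
```

The only configuration needed is the two numbers the text mentions: one
resource value per real server and one resource cost per virtual service.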

Seems like that accomplishes the load-balancing-across-varying-loads part
that I need; it doesn't touch the prioritization, but I think you gave me
an idea for that below.  With moderate configuration (one value per
virtual service, one value per real server) it seems to cover what I need.

>> The batch runs can be distinguished based on the IPs it's
>> requested from, or we could set things up so it passes its
>> requests to a different request address.
> weights are relative, so even if you had two VIPs, all the
> users of the batch VIP would be balanced with each other,
> and all the users of the interactive VIP would balance with
> each other. The two virtual services (here VIPs) would be
> independent of each other.

I was wondering if it worked that way.  That doesn't make much sense to me
-- in real servers all processes draw from the same resource pools, so it
doesn't make sense to firewall them that thoroughly from each other in
scheduling.  But I imagine most real-world uses have only one virtual
service.

>> An added complexity is that some of the services will use
>> multiple cores for a request, and others will not; raw
>> counts of actual connections are not an adequate way to
>> estimate the load currently being handled by a server.
> the kernel virtualises the hardware. LVS has no idea how
> many CPUs are underneath. LVS only knows about connections
> on the network.

Right, I don't expect it to magically know about the servers or the costs
of the virtual services, I had expected to configure those.

>> Do you guys see a good way to approach this short of writing my own
>> scheduler module?
> I'm not sure whether it's even possible to write a scheduler
> to do what you want, since an LVS scheduler is operating at
> the network level.

I don't see why "network level" is a constraint; the scheduler knows the
service being requested and the list of servers to be considered for
referring the request to.

> Your problem is at the application level. Can your batch
> scheduler nice its jobs?

Thank you!  I'm feeling stupid, I hadn't thought of using nice as the way
to get the priorities I needed.  I was focused on controlling whether the
requests were accepted in the first place, and hadn't thought about
accepting them but dealing with them at a lower priority.

The batch scheduler isn't local, but either by having separate virtual
services for the batch modes of the services, or including that in the
request header (I own both ends and it's all internal, so I'm not worried
about people stealing priority) I can get them to a separate place where I
can nice them.  These aren't memory-intensive, so having extra processes
sitting around for a little while at low priority won't be hurting me.
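As a sketch of the "accept the request, then renice it" idea: the
dispatcher on the real server could run batch jobs through nice(1) so
they yield the CPU to interactive work.  The `run_batch` helper below is
hypothetical, shown in Python only for illustration:

```python
import subprocess


def run_batch(cmd, niceness=10):
    # Prefix the command with nice(1) so batch work runs at lower
    # scheduling priority than interactive traffic on the same box.
    # (An unprivileged process can only raise niceness, not lower it.)
    return subprocess.run(
        ["nice", "-n", str(niceness)] + cmd,
        capture_output=True, text=True,
    )
```

Alternatively, a worker process could call os.nice() on itself after
seeing the batch marker in the request header.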

I think you've solved my prioritization problem.  Thank you!

David Dyer-Bennet, dd-b@xxxxxxxx;
