To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] LVS with Weighted Least-Connection Scheduling
From: Guy Waugh <gwaugh@xxxxxxxxxx>
Date: Thu, 21 Feb 2008 16:09:12 +1100
Sebastian Krueger wrote:
> Joe,
> 
> I checked out persistent=1 and from the man page of ldirectord I find this:
> 
> ----------
> persistent = n
> 
> Number of seconds for persistent client connections.
> ----------
> 
> So it wouldn't be that, I think.
> 
> fwmark sounds interesting; I'll Google that one in a moment to find
> out.
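> 
> For reference, a fwmark-based service seems to be built in two steps:
> mark the matching packets with iptables, then define the virtual
> service on the mark instead of on an address/port. A rough sketch, with
> mark value 1 and the wlc scheduler chosen arbitrarily:
> 
>         iptables -t mangle -A PREROUTING -d 10.32.30.125 \
>                 -p tcp --dport 18007 -j MARK --set-mark 1
>         ipvsadm -A -f 1 -s wlc
>         ipvsadm -a -f 1 -r 192.168.0.10:18007 -m -w 10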
> 
> Guy,
> 
>>> However, unfortunately traffic is assigned to an arbitrary real server for
>>> each virtual server if there are no other connections already established.
>> Yes - presumably the LVS director is load-balancing requests between the
>> three real servers. The weights probably only come into it once there
>> are a few connections happening.
> 
> I have a feeling you might be right. Maybe I'll end up having to change the
> scheduling algorithm myself to include the weights if no clients are
> connected.
> 
> What I'm trying to achieve is to load-balance Java Message Service (JMS)
> connections.
> 
> So we have multiple JMS clients and multiple JMS servers. LVS does the
> load-balancing based on IP address affinity, so in order to load-balance
> a single JMS client across many JMS virtual servers, the solution was to
> create multiple virtual servers and have the JMS client round-robin over
> all the available JMS virtual servers.
> 
> Currently these are set to 3, but the LVS configuration could easily be
> extended to
> 
> virtual=10.32.30.125:18007
>         real=192.168.0.10:18007 masq 10
>         real=192.168.0.20:18007 masq 5
>         real=192.168.0.30:18007 masq 5
>         real=192.168.0.40:18007 masq 5
>         real=192.168.0.50:18007 masq 5
>         real=192.168.0.60:18007 masq 5
> virtual=10.32.30.125:18008
>         real=192.168.0.10:18007 masq 5
>         real=192.168.0.20:18007 masq 10
>         real=192.168.0.30:18007 masq 5
>         real=192.168.0.40:18007 masq 5
>         real=192.168.0.50:18007 masq 5
>         real=192.168.0.60:18007 masq 5
> 
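> For the first virtual service above, the underlying ipvsadm rules would
> be roughly the following (wlc scheduler assumed; the remaining real
> servers follow the same pattern):
> 
>         ipvsadm -A -t 10.32.30.125:18007 -s wlc
>         ipvsadm -a -t 10.32.30.125:18007 -r 192.168.0.10:18007 -m -w 10
>         ipvsadm -a -t 10.32.30.125:18007 -r 192.168.0.20:18007 -m -w 5
> 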
> So now a single JMS client can load-balance itself across 2 virtual
> servers, and many JMS clients are load-balanced across all real servers,
> so the overall load is nicely spread across all real servers.
> 
> So let's say a single JMS client connects to both virtual JMS servers.
> If no other clients are connected, it will be scheduled to the real
> server on 192.168.0.10 on both virtual servers. This is because LVS
> chooses the first real server regardless of the weighting if there are
> no existing connections.
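> 
> A sketch of the comparison I believe the wlc scheduler makes (the names
> are illustrative, not the exact kernel code):
> 
>         /* an active connection counts 256 times an inactive one */
>         unsigned int overhead(int activeconns, int inactconns)
>         {
>                 return (activeconns << 8) + inactconns;
>         }
> 
>         /*
>          * Candidate b replaces the current pick a only if
>          *     overhead(a) * weight(b) > overhead(b) * weight(a)
>          * holds, so with zero connections everywhere 0 > 0 never
>          * holds and the first real server in the list wins, whatever
>          * the weights are.
>          */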
> 
> However, since each received JMS message means a huge amount of
> processing, it is important that each request goes to a different real
> server (as much as possible).
> 
> Is there anyone out there who has ever tried to use LVS for JMS
> connections?

Hi again,

I don't know anything about JMS, but if it is possible for one JMS 
client to be configured to use more than one JMS server at the same 
time, then I can't help thinking that you'll only really need the 
three-virtual-service-address model if you have around the same number 
of clients as real servers. With, say, twice the number of clients as 
real servers, all real servers are going to have an average of two 
clients anyway (assuming the weights for all of them are equal), so 
splitting parts of each client's request between real servers won't 
achieve anything.

I'm only familiar with using LVS in a way such that each client is 
load-balanced to one real server... perhaps you would need Layer 7 load 
balancing to load-balance according to certain attributes of the request.

Alternatively, using a persistence value of zero (if this is indeed 
possible), so that each request is load-balanced afresh, might be what 
you're wanting.
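
For what it's worth, with raw ipvsadm a virtual service is non-persistent
unless you give it the -p option, so "zero persistence" would just be
something like this (wlc scheduler assumed):

    ipvsadm -A -t 10.32.30.125:18007 -s wlc   # no -p: each new connection
                                              # is scheduled afresh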

Cheers,
Guy.

> 
> Thanks for your help!
> 
> Regards, Sebastian.

<snip>

