To: "users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] LVS to run as a fail over (2 servers, active-passive) how?
From: Janusz Krzysztofik <jkrzyszt@xxxxxxxxxxxx>
Date: Wed, 19 Dec 2007 11:24:55 +0100
Graeme Fowler wrote:
> On Tue, 2007-12-18 at 13:23 +1100, Adam Niedzwiedzki wrote:
>> I'm running (LVS via keepalived) and it's humming along nicely.
> Good stuff.
>> I have a couple of clients that are just using wrr and rr for the scheduling
>> across their boxes.
>> I have another client that has requested just a basic failover solution;
>> they don't want to load balance across their servers, they just want failover.
> Right...
>> How do I set up LVS (or the config in keepalived) to do this? Send all
>> requests to one server unless it disappears, then fail over to the other;
>> on its return, fail back.
> [see the keepalived.conf man page, or the SYNOPSIS included in the docs
> for terminology below if you haven't seen it before]
> There are several ways to achieve this. The most basic is to set up the
> virtual_server instance with only a single real_server, and then define
> a sorry_server which will take over when that single real_server fails.
> The issue with this is that the sorry_server isn't healthchecked, as it
> isn't part of the normal topology. You have to guarantee absolutely that
> it will be (a) up, and (b) able to handle your load upon failover.
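
For instance, a minimal keepalived.conf sketch of that approach (all
addresses and ports here are hypothetical placeholders, adjust to your
network):

```
virtual_server 192.168.0.100 80 {
    delay_loop 10
    lb_algo wrr
    lb_kind NAT
    protocol TCP

    # The primary box - the only healthchecked real_server
    real_server 192.168.0.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    # The backup box - takes over when the pool above is empty,
    # but is never healthchecked itself
    sorry_server 192.168.0.11 80
}
```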
> The second, slightly more complex method is to define two real_servers,
> but make one have a weight of (for example the maximum) 65535, the other
> a weight of 1. With WRR this means roughly 1 in every 65536 requests gets
> handled by your "spare" server; whether your users would accept that is
> up to them.
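
A sketch of that variant, again with hypothetical addresses; since both
boxes are ordinary real_servers, both are healthchecked, so failover and
failback happen automatically:

```
virtual_server 192.168.0.100 80 {
    delay_loop 10
    lb_algo wrr
    lb_kind NAT
    protocol TCP

    # The "active" box gets effectively all the traffic
    real_server 192.168.0.10 80 {
        weight 65535
        TCP_CHECK {
            connect_timeout 3
        }
    }

    # The "spare" box - healthchecked, but almost never selected
    real_server 192.168.0.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```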
> The third method would be to use a combination of weight, the wrr
> scheduler, persistence, and the sysctls controlling the behaviour of the
> persistence templates when a real_server is removed or becomes
> quiescent. These are:
> /proc/sys/net/ipv4/vs/expire_nodest_conn
> /proc/sys/net/ipv4/vs/expire_quiescent_template
> The first expires existing connections to a real_server when it is removed
> from the pool; the second expires the persistence templates for a
> real_server when it becomes quiescent (i.e. its weight becomes 0).
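
Both of these default to off and can be enabled at runtime on the
director, e.g.:

```
# Run as root on the director; both default to 0 (disabled)
echo 1 > /proc/sys/net/ipv4/vs/expire_nodest_conn
echo 1 > /proc/sys/net/ipv4/vs/expire_quiescent_template
```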

One more method: use the sed (shortest expected delay) scheduler. A real-world example:

FWM  134479872 sed
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
   -> DSL1:0                       Route   6      0          0
   -> Dialog:0                     Route   6000   52         83

If you set the weight of the active server ("Dialog" in my case) high 
enough, the passive one should only receive traffic when the active one 
is down.
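
For reference, a sketch of the ipvsadm commands behind output like the
above (the fwmark and weights are taken from it; DSL1 and Dialog are my
hostnames and would have to resolve, e.g. via /etc/hosts):

```
# Virtual service keyed on firewall mark 134479872, sed scheduler
ipvsadm -A -f 134479872 -s sed
# Passive server, low weight
ipvsadm -a -f 134479872 -r DSL1:0 -g -w 6
# Active server, high weight - sed prefers it while it is up
ipvsadm -a -f 134479872 -r Dialog:0 -g -w 6000
```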
