Re: [lvs-users] FW: Self Configuring High Availability LVS with Keepalived

To: " users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] FW: Self Configuring High Availability LVS with Keepalived
From: "David Hinkle" <hinkle@xxxxxxxxxxxxxx>
Date: Tue, 23 Oct 2007 22:53:37 -0500
Eventually I did get this working today.   I keep the VIP on every lo in the 
cluster.  Every machine in the cluster starts as a backup director.  One of 
them will get elected as master director by keepalived, and all the rest will 
receive synchronization broadcasts from the master director.

The trick was, I used netfilter marks (fwmarks) to determine what traffic to 
load balance, and I configured keepalived to add the marking rules to iptables 
when it becomes master and remove them when it's a backup.

This way, LVS on the backups does not try to load balance incoming connections 
a second time, and they can just act as real servers.
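
A minimal sketch of how such a setup might look in keepalived.conf (the 
interface name, VIP, fwmark value, real-server addresses, and script path 
here are illustrative assumptions, not the poster's actual config):

```
# keepalived.conf fragment -- a sketch only
vrrp_instance VI_1 {
    state BACKUP                 # every node starts as a backup
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100               # the VIP (also held on lo everywhere)
    }
    # Add the MARK rule on becoming master and remove it on
    # falling back to backup, so only the active director
    # feeds marked packets into LVS.
    notify_master "/etc/keepalived/fwmark.sh add"
    notify_backup "/etc/keepalived/fwmark.sh del"
}

virtual_server fwmark 1 {        # balance only traffic carrying mark 1
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 10.0.0.11 80 {
        weight 1
    }
    real_server 10.0.0.12 80 {
        weight 1
    }
}
```

The hypothetical fwmark.sh would wrap something like
`iptables -t mangle -A PREROUTING -d 10.0.0.100 -p tcp --dport 80 -j MARK --set-mark 1`
(and the matching -D rule on delete).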

This gives me a cluster where all machines are peers.   One of them just 
happens to be the director at any given time.

I'm going to use this to provide one-click clustering for our CIPAFilter 
products.   They already have the ability to detect each other via broadcasts, 
so after a little more glue tomorrow, a customer with load problems will be 
able to slap a few more CIPAFilters into their rack, click one checkbox, and 
have a fully load balanced high availability cluster.

Great work on this stuff guys.  Honestly, I had no idea it was so powerful or I 
would have implemented it years ago.


-----Original Message-----
From: lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx on behalf of Joseph Mack NA3T
Sent: Tue 10/23/2007 7:34 PM
To: users mailing list.
Subject: Re: [lvs-users] FW: Self Configuring High Availability LVS with Keepalived
On Tue, 23 Oct 2007, David Hinkle wrote:

> I'm trying to put together a self configuring high 
> availability LVS.  Each of my units is a proxy server.  
> Each of them is identical, except a little configuration, 
> and I intend them to come up, elect a director, and serve 
> content.

this was a highly hoped-for goal in the early days of LVS. 
The director isn't all that different from a realserver. When 
a director fails out, one of the realservers should be able 
to take over, presumably automatically. The service 
listening on the VIP would have to be shut down when going 
from realserver to director.

However, once we got two-director failover working, people 
lost interest. One working scheme was enough, at least for 
the moment.

> I notice when I have a unit in BACKUP state, when I do 
> "ipvsadm" I see all the lvs stuff is configured in the 
> kernel.
> Does this mean that the unit will try to do load balancing 
> when a connection comes in instead of just ignoring it and 
> allowing the proxy server's user space software to process 
> it?

usually a director in backup state doesn't have the VIP, so 
there are no packets for it to ignore. If you sent it packets 
for the VIP, the backup director would load balance them.

> If so, Can I configure keepalived in such a way that the 
> ipvsadm stuff isn't set up until a unit goes to MASTER? 

it doesn't take any time to set up ipvsadm, so yes.
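
One way to arrange this is to build the LVS table from a keepalived 
notify_master script rather than a static config; the ipvsadm calls such a 
transition script might make look roughly like this (the fwmark value, 
scheduler, and real-server addresses are illustrative assumptions):

```
#!/bin/sh
# Hypothetical notify_master hook: create the LVS table only
# when this node becomes the active director.
ipvsadm -C                       # flush any stale virtual services
ipvsadm -A -f 1 -s rr            # virtual service keyed on fwmark 1
ipvsadm -a -f 1 -r 10.0.0.11 -g  # real server, direct routing
ipvsadm -a -f 1 -r 10.0.0.12 -g
# The matching notify_backup hook would just run "ipvsadm -C".
```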

> And if so, will the connection state information 
> successfully synchronize?

yes, once the sync daemon starts and lets the partner daemon 
in the other director know where it is.
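
For reference, the connection-sync daemons can be started by hand with 
ipvsadm (the interface name and syncid here are assumptions; keepalived can 
also manage them for you):

```
# On the active director:
ipvsadm --start-daemon master --mcast-interface eth0 --syncid 1
# On each backup:
ipvsadm --start-daemon backup --mcast-interface eth0 --syncid 1
```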


Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map generator at homepage
It's GNU/Linux!
