Re: [lvs-users] Help with keepalived setup!!

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] Help with keepalived setup!!
From: Graeme Fowler <graeme@xxxxxxxxxxx>
Date: Fri, 06 Jul 2007 20:14:37 +0100
On Fri, 2007-07-06 at 20:30 +0400, Ramsurrun Visham wrote:
> I wanted to know if the following is possible with Keepalived:

Probably. Many things are :)

> I have 5 PCs - 3 active (master) nodes and 2 standby (backup) nodes,
> all connected to a switch. The 2 standby nodes have to provide
> failover for any of the active nodes in case one fails. Then if a
> second active node fails, the other standby should provide failover
> for that. If I used only one standby node, I would just make it the
> backup for all the active nodes. But I can't do that with two or
> more standbys, since the priority parameter will be higher on one
> particular node, so all VIPs of failed active nodes would go to
> that one standby PC. I don't want to dedicate one standby node to
> each active node.

Just to check: you're not talking about any sort of load balancing here,
just HA (high availability).

Yes, it is possible.

It's possible to have a set of nodes act as a failover node for all the
others, in order. To give you a conceptual overview:

node1: VIP1  priority 100
       VIP2  priority  75
       VIP3  priority  50

node2: VIP1  priority  75
       VIP2  priority 100
       VIP3  priority  75

node3: VIP1  priority  50
       VIP2  priority  50
       VIP3  priority 100

In normal operation, VIP1 is on node1, VIP2 on node2, and VIP3 on node3.

If node1 fails (alone), node2 takes VIP1.
If node2 fails (alone), node1 takes VIP2.
If node3 fails (alone), node2 takes VIP3.

If nodes 1 & 2 fail, VIP1 & 2 go to node3. I don't need to spell the
rest out.

This can easily be extended to a 5 node setup, but it's a bit of a waste
to have two completely unused failover servers in my opinion.

If you look at the following vrrp_instance snippet you should see what
I'm talking about:

vrrp_instance VIP1 {
    state MASTER  # BACKUP on node2 and node3
    interface eth0
    virtual_router_id 51
    priority 100  # 75 on node2, 50 on node3
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.1
    }
}

You should be able to combine that with the conceptual example to get
something working.
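To make that concrete, node1's keepalived.conf would carry one
vrrp_instance per VIP, with only state and priority differing between
nodes. A minimal sketch of node1's other two instances (the interface
name, router IDs and addresses are placeholders; note that each
instance needs its own distinct virtual_router_id, and that on nodes
which are not the preferred owner of a VIP the state should be
BACKUP):

vrrp_instance VIP2 {
    state BACKUP           # node2 is the preferred master for VIP2
    interface eth0
    virtual_router_id 52   # must differ from VIP1's id (51)
    priority 75            # 100 on node2, 50 on node3
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.2
    }
}

vrrp_instance VIP3 {
    state BACKUP           # node3 is the preferred master for VIP3
    interface eth0
    virtual_router_id 53
    priority 50            # 75 on node2, 100 on node3
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.3
    }
}

Replicate all three instances on every node, adjusting state and
priority per the table above, and the VIPs will migrate as described.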

Of course, if you *are* doing load balancing... nothing changes really
from the HA point of view, but you have to take other things into
account.
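For the load-balanced case, keepalived can also drive LVS itself via
virtual_server blocks alongside the VRRP configuration. A rough sketch
(the addresses, ports and check parameters here are illustrative only,
not from your setup):

virtual_server 192.168.1.1 80 {
    delay_loop 6       # seconds between health checks
    lb_algo rr         # round-robin scheduling
    lb_kind NAT
    protocol TCP

    real_server 10.0.0.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 10.0.0.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

The "other things" to take into account include connection
synchronisation between directors and making sure the real servers
handle the VIP correctly for your chosen forwarding method.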

Graeme


