Upgrade lo:0 conflict from heartbeat 1.2.3 to 2.0.7

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Upgrade lo:0 conflict from heartbeat 1.2.3 to 2.0.7
From: Brent Jensen <brent@xxxxxxxxxxx>
Date: Wed, 28 Feb 2007 02:16:16 -0700
I have a cluster config for a master-master database, each with a slave:
Slave -- Master (primary) -- Master (secondary) -- Slave

I have a VIP for the primary master (read/write database) and another VIP for load balancing (both slaves and the secondary master).

I'm running the directors on the master servers. Therefore, all servers have lo:0 up.
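
Since all the boxes also act as realservers for 10.0.3.20, each one suppresses ARP for the loopback VIP so that only the active director answers for it. Roughly these settings (the standard 2.6 arp_ignore/arp_announce knobs; quoted from memory, not pasted straight from the boxes):

sysctl.conf (relevant part):
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2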

Under version 1.2.3 and kernel 2.6.9-42.0.3 (Red Hat), things worked fine. When switching between the masters, lo:0 seemed to be preserved (I think), so load balancing kept working across all 3 real servers (2 slaves, secondary master).

However, under version 2.0.7 and kernel 2.6.9-42.0.8, either lo:0 disappears, or the VIP for the read side (eth0:1) does not get created if lo:0 exists. If lo:0 doesn't exist, eth0:1 gets created as it should. Both lo:0 and eth0:1 carry the same IP (the load-balancing IP), so it looks as though the VIP conflicts with lo:0.

I don't know whether the configuration was correct under 1.2.3, but it worked flawlessly for over a year. Under version 2.0.7 there is effectively no failover: either the VIP for the load-balance IP doesn't get created, or I have to bring up lo:0 manually (and that's not failover!).

Haresources:
<host>  IPaddr::10.0.3.17/24/eth0
<host>  IPaddr::10.0.3.20/24/eth0
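
One workaround I'm considering (untested): put a small helper resource ahead of IPaddr on the 10.0.3.20 line, so the loopback copy of the VIP is dropped before IPaddr tries to add it and restored when the node gives the VIP up. The name "lvsdr-vip" is made up purely for illustration; the script itself is sketched after the lo:0 config below.

Haresources (hypothetical workaround):
<host>  lvsdr-vip IPaddr::10.0.3.20/24/eth0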

LDirector:
virtual = 10.0.3.20:3306
        real=10.0.3.13:3306 gate 12 ?
        real=10.0.3.21:3306 gate 13 ?
        real=10.0.3.23:3306 gate 12 ?
        fallback=127.0.0.1:3306 gate 1
        service=http
        checktype=negotiate
        protocol=tcp
        scheduler = wrr

lo:0:
DEVICE=lo:0
IPADDR=10.0.3.20
NETMASK=255.255.255.255
NETWORK=10.0.3.0
BROADCAST=10.0.3.255
ONBOOT=yes
NAME=loopback
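
And here is roughly what that hypothetical lvsdr-vip helper would look like as a heartbeat resource script in /etc/ha.d/resource.d/ (a sketch only, not tested):

#!/bin/sh
# /etc/ha.d/resource.d/lvsdr-vip -- hypothetical helper (sketch, not tested)
# On takeover: drop the loopback copy of the VIP so IPaddr can add it to eth0.
# On release: restore lo:0 so this box goes back to being a plain realserver.
VIP=10.0.3.20
case "$1" in
  start)
        /sbin/ifdown lo:0
        ;;
  stop)
        /sbin/ifup lo:0
        ;;
  status)
        # "running" here means the VIP is no longer held on the loopback alias
        /sbin/ifconfig lo:0 2>/dev/null | grep -q "$VIP" && echo stopped || echo running
        ;;
esac
exit 0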


What's different? Kernel? Heartbeat? Is there another config option to make it work?

Thanks.

