Re: Upgrade lo:0 conflict from heartbeat 1.2.3 to 2.0.7

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Upgrade lo:0 conflict from heartbeat 1.2.3 to 2.0.7
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Thu, 01 Mar 2007 11:19:15 +0100
Brent Jensen wrote:
I have a cluster config for a master-master database, each with a slave:
Slave -- Master (primary) -- Master (secondary) -- Slave

I have a VIP for the primary master (read/write database) and another VIP for load balancing (both slaves and the secondary master).

I'm running the directors on the master servers. Therefore, all servers have lo:0 up.
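
(For reference, lo:0 on the real servers is the usual LVS-DR loopback setup. The lines below are only a sketch: the 10.0.3.20 address is the load-balance VIP from the haresources further down, while the /32 netmask and the arp sysctls are the commonly used values, not something quoted from the actual config.)

# Bring the VIP up on the loopback so the real server accepts traffic
# for it; the host netmask keeps the route local to this box.
ifconfig lo:0 10.0.3.20 netmask 255.255.255.255 up
# On 2.6 kernels, stop the box answering ARP for addresses held on lo.
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce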

Under version 1.2.3 and kernel 2.6.9-42.0.3 (Red Hat) things worked fine. When switching between the masters, lo:0 seemed to be preserved (I think). Therefore, load balancing worked on all 3 real servers (the 2 slaves and the secondary master).

However, under version 2.0.7 and kernel 2.6.9-42.0.8, lo:0 disappears, OR the VIP for the read servers (eth0:1) does not get created if lo:0 exists. If lo:0 doesn't exist, eth0:1 gets created like it should. Both lo:0 and eth0:1 have the same IP (the load-balancing IP), so it looks as though the VIP conflicts with lo:0.
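
(A quick way to check which state a node is in after a failover, assuming iproute2 is installed, is to list the addresses on both interfaces; an ifconfig-style alias shows up as a labelled secondary address:)

ip addr show dev eth0   # should list 10.0.3.20 labelled eth0:1 when the VIP is up
ip addr show dev lo     # shows whether the same address is still bound as lo:0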

I don't know if the configuration was correct under 1.2.3, but it worked flawlessly for over a year. Under version 2.0.7 there is no failover, since the VIP for the load-balance IP doesn't get created, or I have to bring up lo:0 manually (that's not failover!).

Haresources:
<host>  IPaddr::10.0.3.17/24/eth0
<host>  IPaddr::10.0.3.20/24/eth0

Could you try with IPaddr2?
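
(An untested sketch of what that would look like in haresources, reusing the same addresses. IPaddr2 drives iproute2, i.e. "ip addr add", rather than ifconfig aliases, so how it handles an address that is already present on lo may differ from the old IPaddr agent:)

<host>  IPaddr2::10.0.3.17/24/eth0
<host>  IPaddr2::10.0.3.20/24/eth0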

Cheers,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
