Re: heartbeat in back-up balancer

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: heartbeat in back-up balancer
From: Juri Haberland <list-linux.lvs.users@xxxxxxxxxxx>
Date: Sun, 4 Aug 2002 21:11:28 +0000 (UTC)
carla quiblat <carlaq@xxxxxxxxxxxxxxxx> wrote:
> 
> Hi there,
> 
> I'd like to ask how the back-up balancer takes over the haresources of
> the main balancer? Do both balancers contain the same haresources when
> heartbeat is started? Which part of ha.cf says that it's a master or the
> slave?

You are supposed to have identical haresources files on both balancers,
so you have the _same_ node name in both files. This name determines
the master node.
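
For illustration only (the node names lvs1/lvs2 and the resources are
made up, the VIP is the one from your mail), the same haresources line
would sit on both balancers, e.g.:

    lvs1 IPaddr::192.168.0.1 ldirectord::ldirectord.cf

Here lvs1 is the preferred (master) node. ha.cf itself only lists the
cluster members and is also identical on both machines:

    node lvs1
    node lvs2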

> The lvs script that I wrote, based on examples from the
> documentation, already configures the VIP of the machine, and this lvs
> script is called by the haresources file. So if heartbeat is run on
> both balancers, they both use the same VIP at the same time, which
> causes a conflict. However, if I don't include the VIP in the lvs
> script, it doesn't get configured by heartbeat anyway. I get this
> error:

Heartbeat should start the resources on only one host at a time.
If it does something different, then there is a configuration error.

>       heartbeat: 2002/08/05_02:26:37 ERROR: unable to find an interface
> for 192.168.0.1
> 
> Are there other options for the "IPaddr" portion of the haresources? Like
> maybe IPaddr::192.168.0.1::eth0:110?
> 
> Where do I tell heartbeat or ldirectord to use this certain interface for
> the VIP (e.g. eth0:110)?

I'm not quite sure whether the IPaddr script can do this, but you can
use your own script. However, it is important that the first script
after the node name returns "running" or "stopped" when called with the
parameter "status"!
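
Something like this untested sketch would do (the VIP and the alias
interface are taken from your mail, the netmask and everything else
are made up, so adjust as needed):

    #!/bin/sh
    # minimal resource script for heartbeat: <script> start|stop|status
    VIP=192.168.0.1
    IFACE=eth0:110

    case "$1" in
      start)
        /sbin/ifconfig $IFACE $VIP netmask 255.255.255.0 up
        ;;
      stop)
        /sbin/ifconfig $IFACE down
        ;;
      status)
        # this is the part heartbeat really cares about
        if /sbin/ifconfig $IFACE 2>/dev/null | grep -q "$VIP"; then
          echo running
        else
          echo stopped
        fi
        ;;
      *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
    esac
    exit 0

Put it into /etc/ha.d/resource.d/ and reference it in haresources just
like IPaddr.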

If you don't get it to work, please post your ha.cf and haresources
files from both hosts (if possible without all the comments).

Cheers,
Juri

-- 
Juri Haberland  <juri@xxxxxxxxxxxxxx> 


