On Mon, 2009-12-21 at 11:21 -0500, Eric Renfro wrote:
> I'm trying to resolve a problem I currently have setting up a pair of LVS
> load-balancing servers using heartbeat and ldirectord under Gentoo.
> I am using heartbeat 2.0.8 on two servers, and the heartbeat and
> ldirectord setup is not very extensive but should be working better than
> it is. I will provide complete configurations, minus the IPs themselves,
> but first I will explain the problem up front. Our servers are named
> simply network1 and network2, which I will use to explain the issue.
> How I am discovering these issues: when I shut down either network1's or
> network2's heartbeat process, it successfully releases the IP and passes
> it on to the other server to take over. It does this rather quickly, as
> expected; however, when it brings up ldirectord, that is when the
> problems begin. We have two clusters of three webservers each, on both
> the http and https ports. On network1, it immediately brings up the first
> cluster that was set up, with all three RIP nodes active but inaccessible.
> All the others are weighted to 0 under a weight-based setup; otherwise
> they are non-existent, and traffic goes to the fallback server RIP initially.
> For about 5-10 minutes the replacement heartbeat+ldirectord server has heavy
> CPU load, with ksoftirqd/0 and ksoftirqd/1 being the culprits of the active
> CPU load; atop confirms this, showing three IRQs at 200%, 100%,
> and 100% over the last 5-10 minutes.
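For reference, a minimal ldirectord.cf sketch of the kind of weighted setup described above, with a local fallback and quiescent real servers (weight set to 0 on failure rather than removal), might look like the following. All IPs, ports, and check strings here are placeholders, not the poster's actual configuration:

```
checktimeout=10
checkinterval=5
autoreload=yes
# quiescent=yes sets a failed real server's weight to 0
# instead of removing it from the LVS table
quiescent=yes

virtual=192.0.2.10:80
        real=192.0.2.101:80 gate
        real=192.0.2.102:80 gate
        real=192.0.2.103:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        scheduler=wrr
        protocol=tcp
        checktype=negotiate
        request="index.html"
        receive="OK"
```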
Just an idea, but why don't you run ldirectord on both network1 and
network2? That way the LVS table will already be set up when the failover
occurs. If you run the sync daemons as well, chances are that nobody will
notice a failover at all.
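To sketch the sync-daemon part of that suggestion: the IPVS connection synchronization daemons can be started with ipvsadm, so that established connections survive a failover. The interface name below is an assumption; substitute whichever NIC carries your cluster traffic:

```shell
# On the active director: multicast connection state to backups
# (eth0 is a placeholder for your cluster-facing interface)
ipvsadm --start-daemon master --mcast-interface eth0

# On the standby director: receive and mirror that connection state
ipvsadm --start-daemon backup --mcast-interface eth0
```

With both directors holding the same connection table, a takeover only has to move the VIP, not rebuild state from scratch.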
Please read the documentation before posting - it's available at:
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users