[lvs-users] LVS Use Sanity Check

To: <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: [lvs-users] LVS Use Sanity Check
From: <Darren.Mansell@xxxxxxxxxxxx>
Date: Tue, 8 Dec 2009 11:57:03 -0000
Hello everyone.

 

We are using LVS quite heavily (in terms of volume, distribution and
importance) in solutions we have implemented for our business. The way
we are using LVS seems to work very nicely, but I would appreciate it
if anyone could look over our setup and see whether they can spot any
potential issues.

 

The stack we use is:

SLES 11 x64 + HA extension

OpenAIS

Pacemaker

Ldirectord

LVS

 

We use this stack to make the following services highly available and
load-balanced:

Tomcat

MySQL master-master replicated

Postfix

 

We use the same nodes as both real servers and director, using the
gate (direct routing) forwarding method in ldirectord (LVS-DR).
Typically we have just 2 nodes providing a service, load-balancing
between the two with LVS and using OpenAIS / Pacemaker for HA, with
STONITH configured for fencing.
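 

For context, the ldirectord side of one of these services looks
roughly like the sketch below. The HTTP check details, timeouts and
file path are illustrative assumptions rather than our exact config;
the VIP and real-server addresses match the ipvsadm output further
down:

# /etc/ha.d/ldirectord.cf (sketch only)
checktimeout=10
checkinterval=5
quiescent=no

# VIP 10.167.27.100:80, both nodes as real servers, direct routing
virtual=10.167.27.100:80
        real=10.167.27.10:80 gate 1
        real=10.167.27.20:80 gate 1
        service=http
        request="index.html"
        receive="OK"
        scheduler=wlc
        protocol=tcp
        checktype=negotiate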

 

For the VIP we use the Pacemaker IPaddr2 resource with lvs_support
turned on. We put the VIP as an additional IP on lo and add the
following to /etc/sysctl.conf:

net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
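 

In case it helps anyone reviewing this, a minimal crm-shell sketch of
roughly how such a VIP resource looks (the resource name, netmask and
monitor interval here are placeholders, not our exact values):

primitive p_vip ocf:heartbeat:IPaddr2 \
        params ip="10.167.27.100" cidr_netmask="24" lvs_support="true" \
        op monitor interval="10s"

On whichever node is currently passive, the VIP simply sits on lo,
e.g.:

ip addr add 10.167.27.100/32 dev lo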

 

When the VIP is added to eth0 by the IPaddr2 RA in Pacemaker, the RA
also removes it from lo on the active node.

The VIP stays on lo on the passive node, which accepts connections
forwarded to it by LVS on the active node (the director). The active
node routes traffic both locally and to the remote real server for
load-balancing:

 

(on active node with main IP of 10.167.27.10):

TCP  10.167.27.100:80 wlc

  -> 10.167.27.10:80              Local   1      32         27

  -> 10.167.27.20:80              Route   1      32         36
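
The equivalent rules, if you were to create them by hand rather than
have ldirectord load them, would be roughly (sketch only; scheduler
and weights as in the output above, -g selects direct routing):

ipvsadm -A -t 10.167.27.100:80 -s wlc
ipvsadm -a -t 10.167.27.100:80 -r 10.167.27.10:80 -g -w 1
ipvsadm -a -t 10.167.27.100:80 -r 10.167.27.20:80 -g -w 1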

 

If a failover occurs, the ldirectord resource is stopped on the active
node and started on the other node, where ldirectord loads the ipvsadm
rules, the VIP is removed from lo and added to eth0, and traffic is
again routed both locally and remotely.
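 

The glue that makes the VIP and ldirectord move together is a pair of
Pacemaker constraints; a rough crm-shell sketch (resource names and
the config file path are placeholders, reusing the p_vip primitive
sketched above) would be:

primitive p_ldirectord ocf:heartbeat:ldirectord \
        params configfile="/etc/ha.d/ldirectord.cf" \
        op monitor interval="20s"
colocation col_lvs inf: p_ldirectord p_vip
order ord_lvs inf: p_vip p_ldirectord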

 

As I said above, this seems to work absolutely fine in production use.
We rely heavily on it and haven't encountered any issues (that weren't
application-related) that we haven't been able to resolve. The only
doubt we have is that this kind of setup doesn't seem to be
commonplace: everyone else seems to use separate real servers and
directors, and I have to wonder why more people don't run like this.

 

Many thanks for reading.

Darren Mansell.

