To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] LVS Use Sanity Check
From: David Warden <warden@xxxxxxxxxxx>
Date: Tue, 8 Dec 2009 10:17:56 -0500
In case this is a no-top-post list, my response is below.

On Dec 8, 2009, at 6:57 AM, <Darren.Mansell@xxxxxxxxxxxx> wrote:

> Hello everyone.
> 
> We are using LVS in quite a large way (volume, distribution and
> importance) in the solutions we have implemented for our business. The
> way we are using LVS seems to work very nicely, but I would appreciate
> it if anyone could look at our setup and point out any potential
> issues.
> 
> The stack we use is:
> 
> SLES 11 x64 + HA extension
> 
> OpenAIS
> 
> Pacemaker
> 
> Ldirectord
> 
> LVS
> 
> 
> We use this stack to provide high availability and load balancing for
> the following services:
> 
> Tomcat
> 
> MySQL master-master replicated
> 
> Postfix
> 
> We use the same nodes for the real servers and the virtual server,
> using gate in ldirectord (LVS-DR). Typically we have just two nodes
> providing a service, load-balancing between the two with LVS and using
> OpenAIS/Pacemaker for HA, with STONITH configured for fencing.
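> 
> (A minimal ldirectord.cf sketch of such a two-node DR virtual service;
> the health-check request and response strings are illustrative
> assumptions, not anyone's actual config:)
> 
> checktimeout=3
> checkinterval=5
> quiescent=no
> 
> # HTTP service on the VIP, direct-routed (gate) to both nodes
> virtual=10.167.27.100:80
>         real=10.167.27.10:80 gate
>         real=10.167.27.20:80 gate
>         scheduler=wlc
>         protocol=tcp
>         service=http
>         checktype=negotiate
>         request="index.html"
>         receive="OK"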
> 
> For the VIP we use the Pacemaker IPaddr2 resource with lvs_support
> turned on. We put the VIP as an additional IP on lo and add the
> following to /etc/sysctl.conf:
> 
> net.ipv4.conf.all.arp_ignore = 1
> net.ipv4.conf.eth0.arp_ignore = 1
> net.ipv4.conf.all.arp_announce = 2
> net.ipv4.conf.eth0.arp_announce = 2
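> 
> (These sysctls stop the nodes answering ARP for the lo-bound VIP. For
> illustration, a crm shell sketch of such an IPaddr2 resource with LVS
> support enabled; the netmask and monitor interval are assumptions:)
> 
> primitive vip ocf:heartbeat:IPaddr2 \
>     params ip="10.167.27.100" nic="eth0" cidr_netmask="24" \
>            lvs_support="true" \
>     op monitor interval="10s"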
> 
> When the VIP is added to eth0 by the IPaddr2 RA in Pacemaker, it is
> also removed from lo on the active node.
> 
> The VIP stays on lo on the passive node, which accepts connections
> directed to it by LVS on the active node. The active node balances
> traffic both locally and to the remote node:
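> 
> (On the passive node the VIP would sit on lo roughly as below; a /32
> mask avoids installing a network route on loopback:)
> 
> ip addr add 10.167.27.100/32 dev lo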
> 
> (on the active node, with main IP 10.167.27.10):
> 
> Prot LocalAddress:Port Scheduler Flags
>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
> TCP  10.167.27.100:80 wlc
>   -> 10.167.27.10:80              Local   1      32         27
>   -> 10.167.27.20:80              Route   1      32         36
> 
> If a failover occurs, the ldirectord resource is stopped on the active
> node and started on the other node, where ldirectord loads the ipvsadm
> rules, the VIP is moved from lo to eth0, and traffic is again routed
> locally and remotely.
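> 
> (A crm sketch of tying the VIP and ldirectord together so they fail
> over as one unit; the resource names and config file path are
> assumptions:)
> 
> primitive director ocf:heartbeat:ldirectord \
>     params configfile="/etc/ha.d/ldirectord.cf" \
>     op monitor interval="15s"
> group loadbalancer vip director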
> 
> As I said above, this seems to work absolutely fine in production use.
> We rely heavily on it and haven't encountered any issues (that weren't
> application-related) that we couldn't resolve. The only doubt we have
> is that this kind of setup doesn't seem to be commonplace: everyone
> else seems to use separate real servers and virtual servers, and I
> have to wonder why more people don't run like this.
> 
> Many thanks for reading.
> 
> Darren Mansell.
> 


I think you'll find that it comes down to the nature of the storage that the 
load-balanced services depend on. If you are running a service in Pacemaker 
that really needs data fencing/STONITH, such as DRBD or a database server, then 
your setup is the way to go.

At the small (~5000 undergrad) university I work for, our load-balancing setup 
would fall in your "common" category of having separate real servers and 
virtual servers. We primarily use LVS for services where data fencing is not a 
large concern - IMAP, POP, HTTP(S).  Our basic thinking comes down to "We have 
a bunch of identical real servers and it doesn't matter if we lose one because 
they are just conduits to data on the SAN." In that case, it's just easier not 
to have the real servers be cluster members.

I don't think there's anything wrong with your setup. I personally use 
arptables instead of sysctl to prevent the real servers from answering ARP 
requests to the VIP, but that's a personal preference, nothing more.
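For reference, the arptables rules I use look roughly like the standard
LVS recipe on each real server (reusing the VIP and real IP from your
example); the mangle rule rewrites outgoing ARP traffic to carry the
real IP:

  # Never answer ARP for the VIP
  arptables -A INPUT -d 10.167.27.100 -j DROP
  # Source our own ARP traffic from the real IP, not the VIP
  arptables -A OUTPUT -s 10.167.27.100 -j mangle --mangle-ip-s 10.167.27.10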

-David Warden
SUNY Geneseo
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
