
Re: LVS and dynamic routing

To: jcouzens@xxxxxx, <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: LVS and dynamic routing
From: Horms <horms@xxxxxxxxxxxx>
Date: Fri, 21 May 2004 10:34:02 +0900
On Mon, May 17, 2004 at 03:43:53PM -0700, James Couzens wrote:
> On Mon, 2004-05-17 at 14:06, Patrick LeBoutillier wrote:
> > Hi all,
> > 
> > We have a production environment that has parallel networks. Each machine
> > has 2 network cards and runs gated. Services listen on an address on the
> > loopback and gated makes them available (I'm not too sure how gated and/or
> > dynamic routing works...)
> > 
> > I have to set up a cluster of two such machines (both will be directors and
> > also realservers).
> > 
> > My question is: if heartbeat starts and brings up the alias on eth0 and 
> > then
> > the network eth0 is connected to fails, will the director still be able to
> > deal with the packets even if they start coming in from eth1?
> > 
> > If not, what kind of configuration should I be using?
> > 
> > 
> > Thanks a lot,
> > 
> > Patrick
> 
> You really ought to have a look at the keepalived project.  The opening
> paragraph on the KA website says it all:
> 
> > The main goal of the keepalived project is to add a strong & robust
> > keepalive facility to the Linux Virtual Server project. This project
> > is written in C with multilayer TCP/IP stack checks. Keepalived
> > implements a framework based on three families of checks: Layer3,
> > Layer4 & Layer5/7. This framework gives the daemon the ability to
> > check the state of an LVS server pool. When one of the servers in the
> > LVS server pool is down, keepalived informs the Linux kernel via a
> > setsockopt call to remove that server's entry from the LVS topology.
> > In addition, keepalived implements an independent VRRPv2 stack to
> > handle director failover. So in short, keepalived is a userspace
> > daemon for LVS cluster node healthchecks and LVS director failover.
> 
> keepalived effortlessly handles the functionality you outlined in the
> post quoted above, and makes the administration and the overall HA-LVS
> experience significantly less complicated.  With a single configuration
> file covering your entire HA solution, it's a whiz to set up and fine
> tune.  Without keepalived, an HA-LVS solution is like making a hotdog by
> going and buying all the individual ingredients from different places
> rather than just getting it all from one vendor ;)   Give it a whirl and
> you'll see more of what I'm talking about.
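
(As an aside, the "setsockopt call" mentioned in the keepalived blurb above
is the IPVS socket-option interface. A minimal sketch of removing a dead
realserver from userspace follows; the constants and struct layouts are the
ones in <linux/ip_vs.h>, the VIP/realserver addresses are made up, and the
details vary between kernel versions, so treat it as illustrative rather
than as what keepalived literally does.)

/* Illustrative only: remove one realserver from a virtual service via the
 * IPVS setsockopt interface.  Needs CAP_NET_ADMIN; real tools (ipvsadm,
 * keepalived) do the same thing with more error handling. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/ip_vs.h>

int main(void)
{
	/* IPVS expects the service description followed immediately by
	 * the destination (realserver) description. */
	struct {
		struct ip_vs_service_user svc;
		struct ip_vs_dest_user    dst;
	} arg;

	memset(&arg, 0, sizeof(arg));
	arg.svc.protocol = IPPROTO_TCP;
	arg.svc.addr = inet_addr("192.168.0.100");   /* example VIP */
	arg.svc.port = htons(80);
	arg.dst.addr = inet_addr("192.168.0.10");    /* dead realserver */
	arg.dst.port = htons(80);

	int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, IPPROTO_IP, IP_VS_SO_SET_DELDEST,
		       &arg, sizeof(arg)) < 0) {
		perror("IP_VS_SO_SET_DELDEST");
		return 1;
	}
	return 0;
}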

I think you misunderstand the original question.
Then again, perhaps I do.

The question is about how LVS handles the VIP moving around,
and in particular packets for the VIP coming in on eth0 and then
all of a sudden on eth1.

If that is the question, then the answer is that LVS doesn't really
care. rp_filter would probably need to be disabled, and perhaps a
few other routing tweaks made. But fundamentally, if the box can route
the traffic, then LVS can load balance it. The only slight caveat
is that LVS requires traffic for the VIP to be delivered locally, which
usually means the VIP needs to be bound to a local interface. Any local
interface.
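
For the rp_filter part, here is a minimal sketch in C for concreteness
(normally you would just echo into /proc or use sysctl; eth0/eth1 are the
interfaces from the original question and are an assumption about the setup):

/* Illustrative sketch: turn off reverse-path filtering so packets for the
 * VIP are accepted even when they arrive on the "wrong" interface.
 * Equivalent to: echo 0 > /proc/sys/net/ipv4/conf/<if>/rp_filter */
#include <stdio.h>

static int write_proc(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Clear both the global and the per-interface knobs to be safe. */
	write_proc("/proc/sys/net/ipv4/conf/all/rp_filter", "0");
	write_proc("/proc/sys/net/ipv4/conf/eth0/rp_filter", "0");
	write_proc("/proc/sys/net/ipv4/conf/eth1/rp_filter", "0");
	return 0;
}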

Actually you can get around this by moving ip_vs_in from the LOCAL_IN
hook to the PREROUTING hook. I tried that briefly once and it seemed to
work. But it most likely has some interesting side effects.
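
For the curious, that change amounts to editing the netfilter hook
registration in net/ipv4/ipvs/ip_vs_core.c. Roughly, sketched from memory
against a 2.4/2.6-era tree (so field values may be slightly off):

/* Excerpt-style sketch; ip_vs_in() is the LVS packet handler defined
 * earlier in ip_vs_core.c. */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

static struct nf_hook_ops ip_vs_in_ops = {
	.hook     = ip_vs_in,
	.owner    = THIS_MODULE,
	.pf       = PF_INET,
	.hooknum  = NF_IP_LOCAL_IN,	/* change to NF_IP_PRE_ROUTING to
					 * see VIP traffic before the
					 * local-delivery decision */
	.priority = 100,
};

With the hook at PREROUTING the VIP would no longer have to be a local
address, which is exactly why the side effects are "interesting".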

-- 
Horms