RE: more failover detail...

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: RE: more failover detail...
Cc: Steve Gonczi <Steve.Gonczi@xxxxxxxxxxxxxxxxxx>
From: Stephen Rowles <spr@xxxxxxxxxxxxxxx>
Date: Tue, 12 Dec 2000 09:29:48 +0000
At 20:26 11/12/2000 -0500, you wrote:
>IMHO there is no way to do this in a bullet-proof fashion.
>The best bet is to minimize the impact of a failover,
>i.e. any updates would be sent on a best-effort basis,
>without any attempt at guaranteed delivery.
>Some connections will still get hosed, but you can
>minimize the number.
>
>The speed of LVS makes any other approach impractical.

This would seem like the best approach for my particular problem. My compute cluster does not deal with a high connection rate, but rather with fewer long-term connections. A best-effort transmission of the connection state to the failover director (possibly just UDP-type packets) would probably be sufficient. The idea would be to lose as few connections as possible; losing only one or two connections in the event of a failure would be brilliant. In a situation where we lost the director and one node, losing only that node's connections and perhaps one or two others would be a massive improvement over losing the entire cluster's connections.
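As a rough illustration of the idea (a sketch only, not LVS code; the standby address, sync port and conn_entry layout here are all invented for the example), the active director could fire one UDP datagram at the standby for each connection it binds to a real server, with no retries and no acknowledgements:

/*
 * Sketch only -- not LVS code, and every name in it is invented.
 * It illustrates the best-effort idea: each time the active
 * director binds a connection to a real server it fires one UDP
 * datagram at the standby and never retries.  A lost datagram
 * just means that one connection may break on failover.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define STANDBY_ADDR "192.168.0.2"  /* assumed address of the standby director */
#define SYNC_PORT    8848           /* arbitrary port for the sync traffic */

/* One connection's affinity: client -> virtual service -> real server. */
struct conn_entry {
        uint32_t client_ip;         /* addresses/ports in network byte order */
        uint16_t client_port;
        uint32_t virtual_ip;
        uint16_t virtual_port;
        uint32_t real_ip;           /* the real server this connection is bound to */
};

/* Send one entry, fire and forget.  Errors are deliberately ignored. */
static void conn_sync_send(int sock, const struct sockaddr_in *dst,
                           const struct conn_entry *e)
{
        sendto(sock, e, sizeof(*e), 0,
               (const struct sockaddr *)dst, sizeof(*dst));
}

int main(void)
{
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
                perror("socket");
                return 1;
        }

        struct sockaddr_in standby;
        memset(&standby, 0, sizeof(standby));
        standby.sin_family = AF_INET;
        standby.sin_port   = htons(SYNC_PORT);
        inet_pton(AF_INET, STANDBY_ADDR, &standby.sin_addr);

        /* Example: client 10.0.0.5:40000 -> VIP 192.168.0.1:80 -> real server .11 */
        struct conn_entry e;
        memset(&e, 0, sizeof(e));
        inet_pton(AF_INET, "10.0.0.5", &e.client_ip);
        e.client_port = htons(40000);
        inet_pton(AF_INET, "192.168.0.1", &e.virtual_ip);
        e.virtual_port = htons(80);
        inet_pton(AF_INET, "192.168.0.11", &e.real_ip);

        conn_sync_send(sock, &standby, &e);

        close(sock);
        return 0;
}

If a datagram is dropped, only that one connection is at risk after a failover, which matches the "lose one or two connections" goal above.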

>Essentially what is needed is for information about a connection's affinity
>for a real server to be communicated to the standby. The trick is that this
>has to be done without impacting performance.

Yep, that pretty much sums up what I was looking for....
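For completeness, the standby side of the same sketch (again purely illustrative; the fixed-size table and all names are invented) would just record whatever entries happen to arrive, so that after a takeover it knows which real server each surviving connection was bound to:

/*
 * Again only a sketch with invented names.  The standby listens
 * for the best-effort datagrams and keeps a small affinity table
 * (client -> real server), so that on takeover it can keep
 * forwarding established connections to the real server they
 * were already bound to.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#define SYNC_PORT  8848             /* must match the sending sketch */
#define TABLE_SIZE 4096             /* tiny fixed table; real code would hash */

struct conn_entry {
        uint32_t client_ip;
        uint16_t client_port;
        uint32_t virtual_ip;
        uint16_t virtual_port;
        uint32_t real_ip;
};

static struct conn_entry table[TABLE_SIZE];
static unsigned int      used;

int main(void)
{
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
                perror("socket");
                return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(SYNC_PORT);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
        }

        for (;;) {
                struct conn_entry e;
                ssize_t n = recvfrom(sock, &e, sizeof(e), 0, NULL, NULL);
                if (n != (ssize_t)sizeof(e))
                        continue;               /* ignore short or failed reads */

                if (used < TABLE_SIZE)
                        table[used++] = e;      /* remember the affinity for takeover */

                char client[INET_ADDRSTRLEN], real[INET_ADDRSTRLEN];
                inet_ntop(AF_INET, &e.client_ip, client, sizeof(client));
                inet_ntop(AF_INET, &e.real_ip, real, sizeof(real));
                printf("learned: %s:%u -> real server %s\n",
                       client, (unsigned)ntohs(e.client_port), real);
        }
}

Entries the standby never received are simply lost connections, which is the accepted trade-off in this best-effort scheme.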



/sG

_______________________________________________
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://www.in-addr.de/mailman/listinfo/lvs-users

Steve.

----------------------------------------------------------------------------
Going to church doesn't make you a Christian any more than going to a garage
makes you a mechanic.


