Re: active-active only works with kernel 2.4.26?

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: active-active only works with kernel 2.4.26?
Cc: Horms <horms@xxxxxxxxxxxx>
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Mon, 13 Nov 2006 12:03:21 +0100
It is only active-active for the linux-directors, and it's not really
supposed to be active-active for a given connection, just for a given
virtual service. So different connections for the same virtual service
may be handled by different linux-directors.
I've read it now and I must say that you've pulled a nice trick :). I
can envision this technique working very well in the range of 1-2
Gbit/s for up to 4 or so directors. At higher throughput, the netfilter
overhead plus the time delta between saru updating the quorum and the
effective rule being synchronised and in place on all nodes might
exceed the packet arrival interval. We/I could do a calculation if
you're interested, based on packet size and arrival rate on an
n-Gbit/s switched network.
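To sketch the numbers I have in mind, here is a rough back-of-envelope
estimate (the 1 ms quorum/rule propagation delay below is purely an
assumption on my part, not a measured figure):

#!/usr/bin/env python
# Back-of-envelope: packet arrival interval on an n-Gbit/s link vs. an
# assumed saru quorum/rule propagation delay across all directors.

def arrival_interval(link_gbps, frame_bytes):
    # seconds between frames on a fully loaded link, counting 20 bytes
    # of preamble + inter-frame gap per Ethernet frame
    wire_bits = (frame_bytes + 20) * 8
    return wire_bits / (link_gbps * 1e9)

sync_delta = 1e-3  # assumed 1 ms to get the new rule in place everywhere

for gbps in (1, 2, 10):
    for size in (64, 1500):
        t = arrival_interval(gbps, size)
        print("%2d Gbit/s, %4d byte frames: %6.2f us/frame, "
              "~%d packets arrive during a %.1f ms sync window"
              % (gbps, size, t * 1e6, sync_delta / t, sync_delta * 1e3))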

I'm not sure how far it would scale, but something like what you
mention above is what I had in mind at the time.

I wonder if keepalived's FSM could have been used in conjunction with VMACs and saru? OTOH, keepalived operates on the premise of a 1:1 mapping between VRID/VSR and IPVS service.

The real trick is that it isn't a trick at all. LVS doesn't terminate
connections; it just forwards packets like a router. So it needs to
Yep.

know very little about the state of TCP (or other) connections. In fact,
all it really needs to know is already handled by the ipvs_sync code,
and that is mainly just a matter of associating connections with real
servers. Or in other words, the tuple enduser:port,virtual:port,real:port.
This is true; however, your code sets rules for ESTABLISHED to accept
packets by looking them up in the netfilter connection tracking table,
and while the 2.4 kernel does not care much about window size and
other TCP-related state, 2.6 will simply drop an in-flight TCP
connection that is suddenly sent to a new host. There are two
solutions to overcome this problem on 2.6 kernels. One is fiddling
with ip_conntrack_tcp_be_liberal, ip_conntrack_tcp_loose and sometimes
ip_conntrack_tcp_max_retrans; the other is checking out Pablo's work
on netfilter connection tracking synchronisation.
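For reference, a minimal sketch of the first workaround (the /proc
paths assume the 2.6 ip_conntrack module, and the values are only an
illustration of what one would typically relax, not a tested recipe):

#!/usr/bin/env python
# Sketch: relax 2.6 conntrack TCP tracking so mid-stream packets that
# fail over to another director are not marked INVALID and dropped by
# the ESTABLISHED rule. Paths and values are illustrative assumptions.

settings = {
    # accept packets even if they fall outside the tracked TCP window
    "/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_be_liberal": "1",
    # allow picking up already established connections mid-stream
    "/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_loose": "1",
    # tolerate more unanswered retransmissions before giving up
    "/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_max_retrans": "6",
}

for path, value in settings.items():
    f = open(path, "w")   # needs root
    f.write(value + "\n")
    f.close()
    print("%s = %s" % (path, value))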

That makes sense. To be honest I haven't looked into what difficulties
2.6 would pose. In any case, using netfilter (and heartbeat for that
matter) was really just a convenience. It may be better to implement
things an entirely different way.

MPLS or NETLABEL look promising, but I don't know how well these would work in a switched environment.

If you want, I can send you some updates to your document. I like your idea very much, especially the forged MAC reply :). Dirty engineering at its best.

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
