Re: active connections not correct

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: active connections not correct
Cc: Horms <horms@xxxxxxxxxxxx>
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Fri, 15 Sep 2006 09:00:16 +0200
> I'd rather not switch to LVS-NAT, simply because of the amount of
> connections I'm going to be handling.  I am expecting ~1000 ssh
> sessions and 300-500 telnet sessions and I am worried that the load on
> the director would suffer.

In what time-frame? If it's 1500 sessions per ms, it's going to get tight for IPVS :). If that is the maximum number of concurrent sessions at any one time, it is absolutely no issue at all. If you get into the 10000s of concurrent sessions, we'll pick up that discussion again, ok?

See, IPVS basically only rewrites well-cached parts of frames (packets for LVS-NAT) on their way through the node. Nowadays this can be done almost at wire speed. The only thing you might have to worry about is netfilter's connection tracking, which of course has to be disabled in an environment with 10000s of concurrent connections.
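If you cannot keep the conntrack modules off the director entirely, the raw table lets you exempt the VIP traffic from tracking. A rough sketch, assuming a VIP of 192.0.2.1 and your ssh/telnet service ports (adjust to your setup; the reply direction needs a similar rule if you run LVS-NAT):

   # don't create conntrack entries for inbound VIP traffic
   iptables -t raw -A PREROUTING -d 192.0.2.1 -p tcp \
            -m multiport --dports 22,23 -j NOTRACK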

All this is in the mailing list archives, along with a long discussion about the TCP state timer settings.
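If all you want is to inspect or shorten those timers, ipvsadm can do that directly; for example (values are seconds, the ones below are only an illustration):

   # show the current tcp / tcpfin / udp timeouts
   ipvsadm -L --timeout
   # set them to 15 min / 2 min / 5 min
   ipvsadm --set 900 120 300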

> Maybe my understanding of LVS-DR is incorrect.  I understand that
> return packets that are sent from the RS to the client are not sent
> through the director.  However, would the reply packets from the
> client go through the director in order to get to the RS?   That was
> my understanding of LVS-DR, so I thought that the ipvsadm table should
> continue to be updated.

Yes, all client-to-RS packets still pass through the director; only the return path bypasses it, which is why the state machine only ever sees half of the conversation. An idea Horms and I discussed years ago is that by using the direct return patch (forward_shared) and improving the state handling code, IPVS could indeed handle LVS-DR timeouts correctly with regard to sessions. I'm talking about the set_tcp_state() callers and semantics:

http://tinyurl.com/qcty3

Or here, with some colours and an index (but old source):

http://www.drugphish.ch/~ratz/IPVS/ip__vs__proto__tcp_8c.html#a35
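You can watch the effect on a live director: since in LVS-DR the state machine only sees the client side, entries tend to linger in ESTABLISHED until a timer expires:

   # per-service counters: ActiveConn vs InActConn
   ipvsadm -L -n
   # the individual connection entries with their TCP state
   ipvsadm -L -c -n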

Load balancing nowadays should no longer be done at L2 but at L3, using VRRP for (stateful) failover :). Who wants to fiddle with ARP settings anyway?
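With keepalived, for instance, director failover boils down to a VRRP instance along these lines (a minimal sketch; the interface, router id and VIP are made up):

   vrrp_instance VI_1 {
       state MASTER           # BACKUP on the peer
       interface eth0
       virtual_router_id 51
       priority 100           # lower value on the peer
       advert_int 1
       virtual_ipaddress {
           192.0.2.1/32       # the VIP follows whoever is master
       }
   }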

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
