Re: load balancing trouble at a high load

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: load balancing trouble at a high load
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Sun, 02 Jul 2006 21:19:37 +0200
Hello,

> The default of /proc/sys/net/ipv4/ip_local_port_range is "32768 61000".
> In short, 61000 - 32768 = 28232.

Which has nothing to do with IPVS, normally. ip_local_port_range is for local sockets. IPVS does not do sockets. But let's check the rest of the email ...
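
(As a sanity check, nothing IPVS-specific: the range is just a procfs file, and
a trivial reader like the sketch below, or a plain cat(1), shows what a box is
actually using.)

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    unsigned low, high;

    if (!f) {
        perror("ip_local_port_range");
        return 1;
    }
    if (fscanf(f, "%u %u", &low, &high) != 2) {
        fclose(f);
        return 1;
    }
    fclose(f);
    /* e.g. "32768 61000" -> 28232 usable local ports for outgoing sockets */
    printf("local port range %u-%u, %u usable source ports\n",
           low, high, high - low);
    return 0;
}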

> The number of clients in our test environment is one.

Ok.

> The hash key of ip_vs_conn_tab (the connection table) is based on
> protocol, s_addr(caddr), s_port(cport), d_addr(vaddr), and d_port(vport).

Correct.

I've corrected your statement a bit:
> So I think that the maximum number of hash values produced by the hash
> function is 28232 (the default) for one client to the same virtual server.

Yes, this makes sense.
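
To put numbers on this, here is a quick user-space sketch: one client, the
default port range, and a 4096-bucket table (the "size=4096" from the ipvsadm
output below). The hash function is only an illustrative stand-in for the real
ip_vs_conn_hashkey() in ip_vs_conn.c, and both addresses are made up. It shows
that the table merely spreads the at most 28232 possible tuples over its
buckets (collisions go onto a chain), so the per-client ceiling comes from the
port range, not from the hash:

#include <stdio.h>
#include <stdint.h>

#define IP_VS_CONN_TAB_BITS 12  /* default table: 2^12 = 4096 buckets */
#define IP_VS_CONN_TAB_SIZE (1 << IP_VS_CONN_TAB_BITS)
#define IP_VS_CONN_TAB_MASK (IP_VS_CONN_TAB_SIZE - 1)

/* Illustrative stand-in for the kernel's hash over the tuple above;
 * see ip_vs_conn_hashkey() in ip_vs_conn.c for the real formula. */
static unsigned hashkey(unsigned proto, uint32_t caddr, uint16_t cport,
                        uint32_t vaddr, uint16_t vport)
{
    return (proto ^ caddr ^ (caddr >> IP_VS_CONN_TAB_BITS)
            ^ vaddr ^ cport ^ vport) & IP_VS_CONN_TAB_MASK;
}

int main(void)
{
    uint32_t caddr = (192u << 24) | (168u << 16) | 1;    /* made-up client    */
    uint32_t vaddr = (192u << 24) | (168u << 16) | 101;  /* VIP 192.168.0.101 */
    unsigned char hits[IP_VS_CONN_TAB_SIZE] = { 0 };
    unsigned buckets = 0, conns = 0, cport;

    /* One client, default ip_local_port_range: 61000 - 32768 = 28232 ports. */
    for (cport = 32768; cport < 61000; cport++, conns++)
        if (hits[hashkey(6 /* TCP */, caddr, (uint16_t)cport, vaddr, 80)]++ == 0)
            buckets++;

    printf("%u possible tuples from one client, spread over %u of %d buckets\n",
           conns, buckets, IP_VS_CONN_TAB_SIZE);
    return 0;
}

Compiled with gcc, this reports 28232 possible tuples spread over all 4096
buckets here.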

> Therefore, I think the limit on ActConn + InActConn for every client at a
> high load exists because the number of hash keys for ip_vs_conn_tab from
> the same client to the same virtual server (to a realserver) is exhausted.

I don't follow you here anymore, I'm sorry. Where in the code does this relation between active + inactive connections and the hash table come from?

> So I think that the strange behavior at a high load was caused by
> the above reason.

I have to go back and read the whole thread but this month I'm unable to do so.

> In short, the cause of the load balancing trouble at a high load is mainly
> related to the ip_vs_conn table, managed by a hash key based on the above
> elements, and to the limited port range of a client.

Interesting observation, although right now I don't see the connection between the hash table and the number of active and inactive connections. If your observation is correct, would you be able to conduct the following test for me and report back, please?

Set ip_local_port_range to 10000-10100 and repeat your tests.

According to your statement, the maximum number of connection channels should then top out at around 100.
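
On the client, that is a single root write to /proc/sys/net/ipv4/ip_local_port_range; a minimal C sketch of it, with the values proposed above:

#include <stdio.h>

int main(void)
{
    /* Needs root; equivalent to writing "10000 10100" into
     * /proc/sys/net/ipv4/ip_local_port_range by hand. */
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "w");

    if (!f) {
        perror("ip_local_port_range");
        return 1;
    }
    fprintf(f, "10000 10100\n");
    return fclose(f) ? 1 : 0;
}

If the theory holds, ipvsadm should then never show more than roughly 100 ActiveConn + InActConn for that one client.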

> But I think that this behavior of ip_vs is not a problem in a real environment.

I believe that if such a deficiency exists, it could very well be a problem. We still deploy 2.2.x kernel-based systems where the local port range was set to 1024-5000.

More questions below.

------------------------------------------------------------------------
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.101:http rr
  -> rs02:http                    Masq    1      0          1
  -> rs01:http                    Masq    1      1          28229

How slow is this machine? Did you fiddle around with TCP-related settings on the client?

------------------------------------------------------------------------

Do you have netfilter modules loaded? Please don't do performance tests with any netfilter code unless you want to test netfilter. What does your /proc/net/ip_conntrack say in this situation?
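
(If ip_conntrack is loaded, a simple line count of that file during the test shows how many tracked connections you are paying for; a trivial sketch, same result as wc -l:)

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/ip_conntrack", "r");
    unsigned long entries = 0;
    int c;

    if (!f) {
        /* file is absent when the conntrack module is not loaded */
        perror("/proc/net/ip_conntrack");
        return 1;
    }
    while ((c = fgetc(f)) != EOF)
        if (c == '\n')
            entries++;
    fclose(f);
    printf("%lu conntrack entries\n", entries);
    return 0;
}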

Regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
