Re: max connections to load balancer

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: max connections to load balancer
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Sun, 22 Jun 2003 22:31:35 +0200
Hello,

So it's a RS limitation. Maybe I didn't read your email carefully enough but what is the average time to fetch _one_ page and how _big_ is it in bytes? Also what is the load on the RS during the test?

I would still like to get those numbers, pretty please.

It is just a file called ab.html and all it contains is the word "test".

So it's static content.

What is also funny is that you hit a limit at exactly 500. Bugs or limitations normally don't tend to show up at such a round number ;).
Well, I have seen it go just over 500, but not many times.

Ok.

I tried using the attached perl script as well and I got the following

from the script :

maxed out at 2288: Operation now in progress

From ipvsadm

IP Virtual Server version 1.0.7 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.55.5:http rr
  -> 192.168.55.3:http            Route   1      1145       1
  -> 192.168.55.1:http            Route   1      1145       0

So the server simply can't handle more of your requests. Again, it would be interesting to get some more information from the RS while it is under load.
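The perl script mentioned above isn't included in the thread; as a rough illustration of the same idea, a minimal Python sketch (the function name, timeout, and error handling are my assumptions) would open connections until one fails and report the count, much like the "maxed out at 2288" line:

```python
import socket

def max_connections(host, port, limit=5000):
    """Open TCP connections to host:port until one fails or `limit`
    is reached; report and return how many succeeded."""
    socks = []
    try:
        for _ in range(limit):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(2)
            s.connect((host, port))
            socks.append(s)
    except OSError as err:
        # Mirrors the "maxed out at N: ..." report quoted above.
        print(f"maxed out at {len(socks)}: {err}")
    finally:
        for s in socks:
            s.close()
    return len(socks)
```

Note that a test like this also consumes a local port per connection on the client, so client-side limits can masquerade as server-side ones.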

I know; we sometimes use thttpd for static content too because it can handle more connections than Apache, but it should be able to take a lot more than this. I wonder if you set a connection limit somewhere, perhaps in the throttling part of thttpd. Also check your LINGER_TIME and LISTEN_BACKLOG settings.

They are not in the config file. I am using the standard Debian woody package; should I be compiling this app myself?

No, those parameters can't make such a big difference. They would be in the config.h accompanying the package.
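For reference, these are compile-time preprocessor defines in thttpd's config.h, along these lines (the values shown are illustrative only; check the config.h shipped with your source package for the actual defaults):

```c
/* Illustrative excerpt in the style of thttpd's config.h;
 * the defaults in your source tree may differ. */
#define LINGER_TIME 2        /* seconds to linger on close */
#define LISTEN_BACKLOG 1024  /* backlog passed to listen() */
```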

However I do get this from both of the app servers

TCP: time wait bucket table overflow

Too many connections in TIME_WAIT (TW) for the FIN_WAIT2 (FW2) state and too little memory to keep the sockets. Very interesting!! From your 15k TW state entries and the 128 MB RAM assumption it would still not make much sense, because a socket doesn't need 8500 bytes. I think after the next email I'll have some tunables for you :)
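A quick back-of-the-envelope check of that estimate (128 MB and 15k entries are the figures from the exchange; the claim that a real TIME_WAIT bucket is far smaller, on the order of a hundred bytes on 2.4 kernels, is my recollection, not from the thread):

```python
# If 15,000 TIME_WAIT entries had exhausted 128 MB of RAM, each entry
# would have to occupy roughly this many bytes -- far more than a
# kernel TIME_WAIT bucket actually uses.
ram_bytes = 128 * 1024 * 1024
tw_entries = 15_000
per_entry = ram_bytes / tw_entries
print(f"{per_entry:.0f} bytes per TIME_WAIT entry")  # prints "8948 bytes per TIME_WAIT entry"
```

So the overflow message points at a table-size limit being hit, not at physical memory running out.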

We'll fiddle with some /proc/sys/net/ipv4/ entries.
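The kind of entries meant here are probably along these lines (a sketch only; which knobs and values are appropriate depends on your kernel and workload, so verify against your kernel's ip-sysctl documentation before applying):

```shell
# Raise the cap on TIME_WAIT buckets; the "time wait bucket table
# overflow" message fires when this limit is hit:
echo 180000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

# Shorten the FIN-WAIT-2 lifetime so sockets leave the table sooner:
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

# Widen the local port range available to the client during tests:
echo "1024 61000" > /proc/sys/net/ipv4/ip_local_port_range
```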

With your perl script, do you still get the time wait bucket table overflow?

Thanks again man.

No worries, but I hope this time I get the numbers.

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc
