Re: max connections to load balancer

To: "LinuxVirtualServer.org users mailing list" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: max connections to load balancer
From: Robert Lazzurs <lazzurs@xxxxxxxxxxx>
Date: 19 Jun 2003 13:36:50 +0100
On Thu, 2003-06-19 at 12:50, Roberto Nibali wrote:
> Hello,
> > The count was from ab; however, I checked when doing the same tests and it
> > was 500 per real server.
> 
> So it's a RS limitation. Maybe I didn't read your email carefully enough, but
> what is the average time to fetch _one_ page and how _big_ is it in bytes?
> Also, what is the load on the RS during the test?

> > and I was getting this from ipvsadm
> > 
> > TCP  192.168.55.5:http rr
> >   -> 192.168.55.1:http            Route   1      500        14699     
> >   -> 192.168.55.3:http            Route   1      500        15404     
> 
> Eek, your RS are ill. Sockets are not closed anymore there. I'm very much 
> interested in the page you're trying to fetch now.

It is just a file called ab.html, and all it has in it is the word "test".
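
For what it's worth, the ab run is along these lines; the exact request count
and concurrency vary between runs, so treat the numbers below as placeholders
rather than the real invocation:

    # -c sets the number of concurrent connections, -n the total requests
    ab -n 20000 -c 1000 http://192.168.55.5/ab.html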

> >>test ----> LVS ----> RS
> >>test --------------> RS
> >  
> > Yea, that is what I have been doing, sorry for not explaining it
> > clearly.
> 
> Thanks for confirmation; so this and the inactive counters from above to me 
> indicate that your RS application does not close the sockets properly. We'll 
> have to do some more testing then, once I get some more information about the 
> page and its size.
> 
> What is also funny is that you have a limit at exactly 500. Bugs or
> limitations normally don't tend to show up with such an even number ;).

Well, I have seen it go just over 500, but not many times.

> > How would I be able to check if this is the case and how would I be able
> > to solve it?
> 
> You could run testlvs [1] but I can derive some numbers as soon as I know the 
> page size and the RTT for one GET.
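
(I don't have exact timings for you yet, but I assume the back-of-the-envelope
relationship you mean is roughly: concurrent connections ~= request rate x time
per request; so, for example, 1000 requests/s at 50 ms each would only keep
about 50 connections in flight.)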

I tried using the attached perl script as well, and I got the following
from the script:

maxed out at 2288: Operation now in progress

From ipvsadm:

IP Virtual Server version 1.0.7 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.55.5:http rr
  -> 192.168.55.3:http            Route   1      1145       1         
  -> 192.168.55.1:http            Route   1      1145       0         
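
(For completeness, I'm just polling the director for those counters while the
script runs, with something along these lines; the one-second interval is an
arbitrary choice on my part:)

    # refresh the connection counters every second
    watch -n 1 ipvsadm -L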

> > Well it was able to take more than apache, and I tried setting that to
> > take the most connections it could.  Do you have any better suggestions
> > on software I should be using client side, or even another protocol?
> 
> I know, we use thttpd for static content too sometimes because it can handle
> more connections than apache, but it should be able to get a lot more. I
> wonder if you set a connection limitation somewhere, something along the
> lines of the throttling part of thttpd. Also check your LINGER_TIME and
> LISTEN_BACKLOG settings.

They are not in the config file. I am using the standard Debian woody
package; should I be compiling this app myself?
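
If those two really are compile-time defines (I'm assuming they live in
thttpd's config.h rather than in the runtime config file), I guess the way to
check would be something like this against the Debian source:

    # pull the package source and look for the two defines
    apt-get source thttpd
    cd thttpd-*/
    grep -rn -e LISTEN_BACKLOG -e LINGER_TIME .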

> > I know it should be able to handle more but it appears there is
> > something wrong with my tests.
> 
> Or the app.

Agreed.

> > However I do get this from both of the app servers
> > 
> > TCP: time wait bucket table overflow
> 
> Too many connections in TW or FW2 state and too little memory to keep
> sockets. Very interesting!! From your 15k TW state entries and the 128MB RAM
> assumption it would still not make too much sense, because a socket doesn't
> need 8500 bytes. I think after the next email I have some tunables for you :)
> 
> We'll fiddle with some /proc/sys/net/ipv4/ entries.

Thanks again, man.
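
In the meantime, I'm guessing you mean entries along these lines on the real
servers (where the overflow message shows up); the values below are only
placeholders until you send the actual tunables:

    # current values
    cat /proc/sys/net/ipv4/tcp_max_tw_buckets
    cat /proc/sys/net/ipv4/tcp_fin_timeout
    cat /proc/sys/net/ipv4/ip_local_port_range

    # example changes (numbers are guesses, not your recommendation)
    echo 360000 > /proc/sys/net/ipv4/tcp_max_tw_buckets
    echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
    echo "1024 61000" > /proc/sys/net/ipv4/ip_local_port_range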

Take care - RL

-- 
MSN:lazzurs@xxxxxxxxxxxxxx      |"All that is etched in stone
Yahoo:admroblaz AIM:admroblaz   |is truly only scribbled in
ICQ:66324927                    |sand" - RL
Jabber:admroblaz@xxxxxxxxxx     |Join Eff http://www.eff.org
e-mail:lazzurs@xxxxxxxxxxxxxxxxx|Take care all - Rob Laz


Attachment: max-connections.pl
Description: Text Data
