RE: lvs bottlekneck

To: Dan <dan@xxxxxxxxxxx>
Subject: RE: lvs bottlekneck
Cc: "'Drew Streib '" <ds@xxxxxxxxxxx>, "'Cono D'Elia '" <conod@xxxxxxxx>, "'lvs-users@xxxxxxxxxxxxxxxxxxxxxx '" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
Date: Sat, 13 May 2000 06:51:12 +0300 (EEST)
        Hello Dan,

On Fri, 12 May 2000, Dan wrote:

>  I have already run into director bottlenecks under normal traffic. They are
> present because I'm using NAT, sustain a large number of connections, and
> create outbound connections for many of my inbound connections (read:
> proxy). The problem is that the Linux kernel only supports 4096 masquerading
> (NAT) connections outbound. Going beyond that brings the system to its
> knees. I had to modify the kernel & recompile to up this number. Also, in
> regard to sufficiency of memory - 64M is not enough for my app. I need to
> sustain 11000 simultaneous connections which amounts to a hash table of
> 2^19. That's 68M of RAM just for the hash table. I realize that many apps
> are not this severe, but they are points to keep in mind when you start
> pushing past the base 2^12 table size.

        We already discussed your problem, but I still don't
understand how you reach this limit of 4096 entries. That
limit is the number of connections to a single external service
(one remote address and port). The current limit for normal MASQ
connections is 40960 (per protocol) and can be increased by tuning
PORT_MASQ_MUL and IP_MASQ_TAB_SIZE. I still don't see how
your limit can be 4096. In your first report the entry count
was 40960 - 36214 free => 4746. That means you had
4096 connections from the MASQ box to one particular external
service and 650 to other services. Maybe your proxy servers
access another external proxy server?

        Don't dismiss the MASQ limits so easily :) There are
users running with more than 4096 entries.

Regards

--
Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>


