>(b) An open connection requires a file descriptor.
You can tweak this in the Linux kernel source and then recompile it:
limits.h
#define NR_OPEN 2048
#define OPEN_MAX 512 /* # open files a process may have */
fs.h
#define INR_OPEN 2048 /* Initial setting for nfile rlimits */
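Before recompiling, it may be worth checking what the running kernel already allows; a minimal sketch (assumes /proc is mounted and the usual 2.4-era paths):

/* check_fd_limits.c -- print the current per-process and system-wide
 * file descriptor limits (illustrative sketch only). */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    FILE *f;
    char buf[64];

    /* Per-process limit -- what OPEN_MAX / ulimit -n corresponds to. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("per-process fds: soft=%lu hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    /* System-wide limit -- what fs.file-max corresponds to. */
    if ((f = fopen("/proc/sys/fs/file-max", "r")) != NULL) {
        if (fgets(buf, sizeof buf, f))
            printf("system-wide fds: %s", buf);
        fclose(f);
    }
    return 0;
}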
Kind regards,
Dennis Kruyt,
Managed Services
ZXFactory BV
-----Original Message-----
From: Edward Chee [mailto:edward_chee@xxxxxxxxx]
Sent: Tuesday, 11 May 2004 11:05
To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Fundamental performance limits of a linux based load balancer/server system?
Hi there
I'm new to LVS and have been learning much about load balancers recently.
I've been trying to figure out the fundamental performance hotspots in
building a load-balanced farm of Linux servers that can push 1Gbps of
traffic.
I was wondering if those of you who are experienced might be able to
check my deductions:
======================
(1) Load balancer bottlenecks
======================
Assuming enough CPU+RAM is available, there are hard Linux system limits:
(a) The rate of new connections is limited by the number of available
ephemeral ports (65535-1024) divided by the TIME_WAIT period. With
TIME_WAIT set to the recommended 4 minutes, this limits you to a measly
~268 new connections per second.
See http://support.zeus.com/doc/zlb/sizing_guide.pdf
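(Spelling out the arithmetic as I read the Zeus guide: 65535 - 1024 = 64511
usable ephemeral ports, and each port stays tied up for the 240-second
TIME_WAIT after its connection closes, so 64511 / 240 ≈ 268 new connections
per second per source/destination pair before the ports run out.)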
(b) An open connection requires a file descriptor. Therefore, the max #
of open connections depends on the Linux system-wide limit for file
descriptors.
See http://support.zeus.com/faq/zlb/v1/entries/os/linuxfd.html.
The Zeus website claims linearly scalable performance simply by adding
more balancers in parallel. However, they don't explicitly say what (a)
and (b) imply about their product's performance limits.
I'm trying to figure out realistic limits on the max number of open
connections, as well as the number of connections per second, that a
Linux-based balancer can sustain.
If anyone has any experience tweaking OS params, I'd be very grateful to
learn from you.
======================
(2) Server bottlenecks
======================
Assume gigabit NICs are in use so that the NICs are not the bottleneck.
Assume also that "enough" CPU and RAM are used.
Assume all data is cached in memory so disks are not hit.
Even if all these were met, it seems one of these two bottlenecks would
be hit:
(a) Linux max open connections due to available ports or file
descriptors on the web server.
-- We can address these just as in case #1 for the load balancer
(b) CPU overhead due to TCP processing at 1Gbps.
- This seems to be the only real advantage of the web I/O accelerators
offered by Netscaler and Redline Networks. They aggregate multiple client
HTTP 1.0 connections into a single persistent, pipelined HTTP 1.1
connection to a webserver. This dramatically reduces the number of open
connections required on the web server -- effectively multiplying the
number of clients your webserver can handle within its OS limits.
Secondly, it eliminates a large percentage of TCP connection
setup/teardown overhead since the connections are persistent. This buys
you efficiency and CPU cycles.
See http://theboost.net/modem/accelerators.htm for more info. Netscaler
and Redline seem to go for between 30K and 90K.
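To illustrate the idea (a toy sketch only, not how Netscaler or Redline
actually implement it; 192.168.1.10 is just a placeholder for a backend
webserver, and it assumes the backend accepts pipelined requests):

/* pipelined.c -- two GETs ride on one persistent HTTP/1.1 connection,
 * so only one TCP setup/teardown is paid for two responses. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    const char *req =
        "GET / HTTP/1.1\r\nHost: 192.168.1.10\r\n\r\n"   /* request 1 */
        "GET / HTTP/1.1\r\nHost: 192.168.1.10\r\n"
        "Connection: close\r\n\r\n";                      /* request 2 */
    char buf[4096];
    ssize_t n;

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(80);
    inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);   /* placeholder */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }

    /* Both requests go out on the same connection before any response
       is read (pipelining); then read everything back. */
    write(fd, req, strlen(req));
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}

An accelerator does the same thing, except it multiplexes requests from
many different clients onto that one backend connection.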
So the natural question is: are there any Linux-based open source
products that possess this HTTP connection aggregation feature?
I've been trying to figure out whether Squid in reverse-proxy mode does
this, but the documentation is hard to filter through.
Has anyone tried implementing this for LVS?
Thanks a lot
ed