Hi all,
(sorry, the last post was in HTML by mistake)
I've been scratching my head over a performance issue on a live
system and was wondering if anyone could point me in the right
direction for resolving it.
I have a load balancer on a stock 2.4.32 kernel running LVS with HTTP
& HTTPS virtual services in DR (direct routing) mode, each with 10
real servers.
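For reference, the services are set up along these lines (the VIP and
real-server addresses below are illustrative placeholders, not our live
ones, and the wlc scheduler is just shown as an example):

    # HTTP virtual service, direct routing (-g) to each real server
    ipvsadm -A -t 10.0.0.100:80 -s wlc
    ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.1:80 -g -w 1
    # ... one -a line per real server ...

    # HTTPS virtual service, likewise
    ipvsadm -A -t 10.0.0.100:443 -s wlc
    ipvsadm -a -t 10.0.0.100:443 -r 10.0.0.1:443 -g -w 1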
The load balancer is a 2.4GHz Celeron with 512MB RAM and Broadcom
gigabit NICs; free -m shows 300MB used + 200MB cached (no swap file).
When the active connections climb to 60,000 and throughput is about
100MB+, we get a sudden and massive drop-off in throughput, with a
corresponding drop in system CPU and an increase in kernel CPU.
I have a selection of graphs, including throughput and CPU usage, here:
http://www.loadbalancer.org/test/
There are no errors in syslog relating to the NICs, netfilter, etc.
The actual number of connections is more like 9K-11K/sec (direct
routing exaggerates the active count, since the director never sees
the return traffic closing connections). I was looking at using
ipvsadm --set to reduce the TCP timeout and therefore the connection
table size, but I don't think memory is the issue.
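In case it helps, something like this is what I had in mind (timeout
values are illustrative; IIRC the defaults are 900/120/300 seconds for
tcp/tcpfin/udp):

    # Arguments are the tcp, tcpfin and udp timeouts in seconds;
    # this drops the TCP established timeout from 900s to 300s
    ipvsadm --set 300 120 300

    # Watch the connection table size while the load is on
    wc -l /proc/net/ip_vs_conn

Back-of-the-envelope: at roughly 128 bytes per connection entry, even
60,000 entries is only ~8MB, which is why I doubt memory is the problem.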
Any other ideas?
NB: I've tested similar hardware up to 800MB+ and 50K conns/sec (with
controlled small/large packet sizes, though, not live data).
Many thanks in advance for any insight :-).
--
Regards,
Malcolm Turnbull.
Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/