Kernel configurations

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Kernel configurations
From: New User <jfjm2002@xxxxxxxxx>
Date: Fri, 3 Nov 2006 06:23:52 -0800 (PST)
Hi,

I am new to LVS. I have spent numerous days poring over the various 
documentation and mailing list archives. I have mostly decided to deploy HA 
LVS-NAT on CentOS (but with a fresh 2.6.18.1 kernel) with keepalived, on 
machines dedicated to load balancing with 2 GB of RAM.

I am in the process of building out the system. My usage is focused on 
maximizing the number of connections rather than maximum throughput in terms 
of Mb/s or Gb/s. The connection rate is hundreds of thousands of connections 
per second (even a million or more). The connections will be both TCP and UDP. 
The TCP connections are very short lived, and the requests and responses are 
very small, mostly no larger than the TCP or UDP header.

I would like to know the optimal kernel configurations for my setup:

Hash table size - The documentation offers conflicting recommendations. It 
first emphasizes NOT to change it and to leave it at 12, i.e. 4096 buckets. 
But it goes on to describe a scenario with 200 connections/second, each 
lasting 200 seconds, where the hash table size should be 15. If this 
implementation is similar to Netfilter's conntrack, then there should be a 
benefit to increasing the number of buckets. Statistically speaking, this 
reduces the number of traversals of the linked list within each bucket: 
assuming connections are evenly distributed, doubling the number of buckets 
should halve the average chain length. Of course, this comes at the expense 
of extra memory when the connection count is low. Are there any other 
negative impacts (especially on performance) from increasing this number?
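To make the trade-off concrete, here is a small sketch (my own, not from the HOWTO) of the sizing rule that scenario implies: choose the table size so the bucket count (2^bits) is close to the expected number of concurrent connections, i.e. connection rate times average connection duration. The min/max bounds are my assumption.

```python
import math

def ipvs_table_bits(conn_per_sec, avg_duration_s, min_bits=12, max_bits=20):
    """Pick a hash table size (in bits, as in CONFIG_IP_VS_TAB_BITS) so that
    2**bits is close to the expected number of concurrent connections,
    keeping the average hash-chain length near 1."""
    concurrent = conn_per_sec * avg_duration_s
    bits = round(math.log2(concurrent))   # nearest power of two
    return max(min_bits, min(bits, max_bits))

# The HOWTO's scenario: 200 conn/s, each lasting 200 s -> 40,000 concurrent
print(ipvs_table_bits(200, 200))   # -> 15  (2**15 = 32,768 buckets)
```

By this rule a very high connection rate only forces a big table if connections also live long enough to pile up concurrently, which may be why the HOWTO's default of 12 is fine for most sites.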

Preemption - Although this may not be directly related to LVS, with a 2.6 
kernel one can choose a preemption model. Does one gain or lose performance 
when the machine is dedicated to LVS, i.e. is only network-I/O intensive?
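For reference, the 2.6 preemption models are mutually exclusive .config options; my (unconfirmed) assumption is that a dedicated forwarding box would use the server model with no forced preemption, roughly this fragment (option names as in a stock 2.6 kernel):

```
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
```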

SMP - It is also recommended not to use SMP because of spinlock contention. 
However, most machines now are multi-core, and heavy network traffic with 
many small packets also generates a lot of interrupts. In this situation, is 
it still better not to use SMP?
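On that point, whether SMP helps probably depends on how the NIC interrupts are spread across CPUs, which /proc lets you inspect and pin. A sketch (the IRQ number 16 is hypothetical; check /proc/interrupts for the NIC's actual line):

```
# Per-CPU interrupt counts for each device
cat /proc/interrupts
# Pin IRQ 16 (assumed here to be the NIC) to CPU0; the value is a CPU bitmask
echo 1 > /proc/irq/16/smp_affinity
```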

Thanks in advance for any recommendations.





