Re: Performance testing

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: Performance testing
From: Roberto Nibali <ratz@xxxxxx>
Date: Tue, 10 Sep 2002 11:46:52 +0200
Hi,

Brian Jackson wrote:
Just as a little experiment, since I have enjoyed my preemptible/low latency patches, I decided to test my lvs cluster with the patches. The results were interesting.

Preemptible kernels don't buy you anything on a server; simply speaking, they are for a desktop machine where you'd like to listen to mp3s (non-skipping, of course) while compiling the kernel. Low-latency preemption interferes with the network stack because the stack runs in softirqs, which get preempted by the kernel scheduler. If your driver generates a lot of IRQs for RX buffer dehooking, the softirq work must be scheduled to get those packets pushed into the TCP stack, or you lose packets. As long as you don't run X and some number-crunching software on the RS, preemptible kernels only hurt TCP/IP stack performance, IMHO.
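To see how hot the softirq path gets on your box, one quick check (assuming a kernel recent enough to have the softnet statistics) is:

  # one line per CPU, hex columns: packets processed, packets dropped
  # (backlog queue full), time_squeeze events
  cat /proc/net/softnet_stat

If the second column grows during the test, the stack is already shedding packets before LVS even sees them.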

> 1 director
> 2 realservers
> 1 client
> all fast eth
> 3Com dual-speed hub (I am trying to get hold of a switch to test with)
> I used Julian's LVS testing utilities
>
> LVS without Preempt patch, avg. packets/sec:
>   RS1     RS2
>   5359    5358
>
> LVS with Preempt and lock-break patches from Robert Love, avg. packets/sec:
>   RS1     RS2
>   4677    4675

If you do tests, you need to give the community more information about the following (a quick way to gather most of it is sketched below):
o exact kernel (vanilla + all patches, .config)
o machine: CPU, mainboard, RAM, PCI-slot speed
o network topology and involved HW
o /proc/sys/net/ipv4/* settings
o /proc/sys/net/ipv4/vs/* settings
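Something like this collects most of that in one go (a rough sketch; lspci needs pciutils, and the vs/ entries only exist with LVS support compiled in):

  uname -a                                     # exact kernel version
  cat /proc/cpuinfo /proc/meminfo              # CPU, RAM
  lspci -v                                     # NICs, PCI bus
  grep . /proc/sys/net/ipv4/* 2>/dev/null      # ipv4 settings
  grep . /proc/sys/net/ipv4/vs/* 2>/dev/null   # LVS settings

plus the relevant .config and the list of applied patches.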

> I didn't know if anybody would be interested in these numbers.

People certainly are interested in performance evaluations of LVS, but with a more detailed test bed. As Julian suggested, you could try NAPI or TSO (kernel 2.5.x and the right NIC driver).
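If you have a recent enough ethtool and a driver that supports it, you can check and toggle TSO roughly like this (eth0 is just an example):

  ethtool -k eth0          # show which offloads the NIC/driver advertises
  ethtool -K eth0 tso on   # enable TCP segmentation offload if supported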

> Let me know what you think or if you have any questions.
> --Brian Jackson

Thank you for the numbers. Interesting to see would also be:

o how much free/idle CPU time the machines have under the load test
o cat /proc/slabinfo
o vmstat
o ratio of 'TX/RX packet' rate ---> lost packets in % (see the sketch below)
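For the last point, the counters in /proc/net/dev give you the raw numbers; a small sketch (assumes the usual /proc/net/dev layout):

  # RX/TX packet and drop counters per ethernet interface
  awk -F'[: ]+' '/eth/ {
      printf "%s: rx=%s drop=%s tx=%s drop=%s\n", $2, $4, $6, $12, $14
  }' /proc/net/dev

Sample this before and after the test run and diff the numbers to get the loss in %.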

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc


