Re: [lvs-users] LinuxWorld Expo

To: Aaron McKee <aaron@xxxxxxxxxxxxxx>, Wensong Zhang <wensong@xxxxxxxxxxxx>, Nick Christopher <nwc@xxxxxxx>, lvs <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>, John Goebel <jgoebel@xxxxxxxxxxx>
Subject: Re: [lvs-users] LinuxWorld Expo
From: John Goebel <jgoebel@xxxxxxxxxxxxxx>
Date: Tue, 15 Feb 2000 17:08:39 -0800
On Tue, Feb 15, 2000 at 07:12:52PM -0500, Horms wrote:
> On Mon, Feb 14, 2000 at 10:42:09AM -0800, Aaron McKee wrote:
> > Internally, we've achieved 20,000 connections/s using our routing
> > software. The configuration was 2 Celeron-based servers and 2
> > Celeron-based clients. Additionally, we were using a stock untuned Linux
> > 2.2.13 kernel. In our benchmark, the most significant bottleneck we
> > identified was the Linux kernel itself.
> > 
> > The individuals I spoke with at VA Linux claimed 50,000 connections/s.
> > If that figure refers to the configuration above, it looks like 16
> > real servers were required to generate the traffic. As VA makes
> > top-notch hardware, I suspect they were using higher-end servers
> > than our small Celeron systems. The VA engineers also mentioned
> > that they were using a modified 2.2 or 2.3 kernel with improved
> > TCP/IP performance characteristics.
> 
> FYI: The VA cluster consisted of 16 real servers and a pair
> of Linux Directors in a hot-standby configuration. From memory,
> each server was a 550MHz PIII with 1GB of RAM (I don't build 'em,
> I just use 'em :). The Linux Directors were connected to
> the ethernet switch via gigabit ports while the real servers
> were connected via 100Mbit/s ports. I'm not sure where
> the 50,000 connections/s figure came from as I wasn't at
> the show.
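The topology Horms describes (a director in front of 16 real servers) could be programmed with ipvsadm roughly as below. This is a sketch only: the addresses, port, scheduler, and weights are my assumptions, not details from the show.

```shell
# Hypothetical addresses -- the real VA setup is not documented in this thread.
VIP=192.168.0.100        # virtual IP held by the active Linux Director

# Create a TCP virtual service on port 80 using weighted least-connection.
ipvsadm -A -t $VIP:80 -s wlc

# Add each real server via direct routing (-g), so return traffic
# bypasses the director entirely -- 16 nodes in the VA cluster.
for i in $(seq 1 16); do
    ipvsadm -a -t $VIP:80 -r 192.168.0.$i:80 -g -w 1
done
```

Note that ipvsadm only programs the kernel's virtual server table; hot standby between the two directors would be handled by a separate failover tool.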

I was there, and I don't remember that number. We ran a really lame ab test and
exceeded the maximum sockets on the IPVS director before really hitting the
servers hard. That machine is coming back in the near future; maybe Horms and
Wensong would like to do some benchmarking. I know the GFS guys would like to
know, as would we all here.
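For reference, a client-side socket limit is exactly what a quick ab run tends to exhaust first. A minimal sketch of such a test follows; the VIP, request count, and concurrency are made-up values, not the ones used at the show.

```shell
# Hypothetical values -- not the figures from the Expo demo.
VIP=192.168.0.100
REQS=100000
CONC=500

# ab opens one client socket per concurrent request, so the per-process
# file-descriptor limit is often the first ceiling a quick test hits.
echo "fd limit: $(ulimit -n)"

# Raise it before the run (raising the hard limit may require root):
ulimit -n 8192 2>/dev/null || true

if ! command -v ab >/dev/null 2>&1; then
    echo "note: ab ships with apache2-utils (Debian) or httpd-tools (RHEL)"
fi

# The benchmark itself (shown rather than executed, since $VIP is made up):
# ab -n "$REQS" -c "$CONC" "http://$VIP/"
echo "would run: ab -n $REQS -c $CONC http://$VIP/"
```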

We did benchmark the fibre channel under GFS at about 70-75 MB/s for reads.
We had 16 nodes behind the IPVS with lots of memory (it's fun working at a
hardware vendor sometimes :) ). The fabric itself uses both the A and B
channels on the JBODs (8 of them). It was a big machine, no doubt about it.

The 50,000 number sounds too high, though. Let's get the machine back and
we'll try to run proper tests.

John 

-----------------------------------------------------------------
John Goebel                                     VA Linux Systems
jgoebel@xxxxxxxxxxx                             408-542-8621
Key fingerprint 71 C1 94 9D 84 75 A2 20  BA E5 1E 6C D9 AB 4E 07
-----------------------------------------------------------------
