Re: Linux Director Reliability

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Linux Director Reliability
From: Lars Marowsky-Bree <lmb@xxxxxxx>
Date: Thu, 25 Sep 2003 16:45:59 +0200
On 2003-09-24T15:38:07,
   Horms <horms@xxxxxxxxxxxx> said:

> LVS to be. How often does the system go down? And when it does,
> what is the cause? If it is caused by something unrelated
> to LVS, like faulty hardware, then you can skip that bit.

In my ISP and ISP-consulting days, I used LVS on a few systems (I think
it started out with four times two or so) as far back as 1999 (with
kernel 2.2 ;), and I never had an LVS-related outage. The systems were
all rock solid, even compared to other Linux systems; LVS load
balancers mostly exercise rather well-tested code paths, while a normal
system (with disk I/O load) is far more likely to run into races.

Since I started working for SuSE, I have never had an LVS bug assigned to me.

Most of the systems are IA32, running various kernels from 2.2 to 2.4.
But I know that people are using LVS as a frontend even on S/390(x) and
i/pSeries, and I've not heard of a real bug from them either.

Setups varied widely, from 4 to 32 real servers. I don't have bandwidth
data, other than from those very initial deployments in 1999, which saw
~1k - ~5k concurrent connections; probably laughable by today's
standards ;)
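For anyone who hasn't set up such a frontend: a minimal sketch of what
one of those director configurations might look like with ipvsadm (the
VIP, real-server addresses, port, weights, and scheduler choice here
are illustrative assumptions, not the actual 1999 setup):

    # Define a virtual TCP service on the director's VIP,
    # using weighted least-connection scheduling.
    ipvsadm -A -t 192.168.0.1:80 -s wlc

    # Add real servers behind it via direct routing (-g);
    # repeat once per real server, however many you have.
    ipvsadm -a -t 192.168.0.1:80 -r 10.0.0.2:80 -g -w 1
    ipvsadm -a -t 192.168.0.1:80 -r 10.0.0.3:80 -g -w 1

    # Inspect the service table and connection counts.
    ipvsadm -L -n

With direct routing the real servers answer clients directly, so the
director only touches inbound packets; that narrow, well-exercised
forwarding path is part of why these boxes tend to be so stable.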

So I can conclude that LVS is most certainly one of the more solid
pieces of code I have ever encountered.

Good luck with your presentation!


Sincerely,
    Lars Marowsky-Brée <lmb@xxxxxxx>

-- 
High Availability & Clustering          ever tried. ever failed. no matter.
SuSE Labs                               try again. fail again. fail better.
Research & Development, SUSE LINUX AG           -- Samuel Beckett
