Ty,
I can't answer most of your questions, (*) but I can perhaps speak up for
the reliability of the LVS director code. We're currently shipping about
36 million HTTP requests per day through a bunch of cache boxes load
balanced by 2 LVS directors. In all the time we've been running these
systems, we've experienced zero downtime due to the LVS director code.
Put it this way - if you're planning to load balance a system that will
need to handle a billion or more HTTP requests per month (36 million a day
works out to just over that), LVS is *definitely* up to the job.
On that basis I'd compare the stability of the unmodified LVS code to that
of a router or hub.
(*) I think you're under a misapprehension though - the table size set in
the kernel config (a power of 2) isn't a limit on the number of concurrent
connections. It's the size of a hash table, which I presume handles
collisions by chaining into overflow buckets: during our test phase the
number of concurrent connections through the director exceeded the table
size, which presumably meant those overflow buckets got used. (We're now
running with a much larger table size :-)
I may of course be *completely* and utterly wrong here, since I didn't
write the code, and have only cast a highly curious eye over it to see how
it could be extended in other ways :-)
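To illustrate what I mean, here's a rough sketch - this is *not* the actual
LVS source, just a toy chained hash table in C (names like conn_tab and
TAB_BITS are my own invention) showing why a fixed, power-of-2 bucket count
doesn't cap the number of entries you can store:

/* Toy illustration, not LVS code: a hash table whose bucket count is a
 * power of 2 but which holds more entries than buckets by chaining. */
#include <stdio.h>
#include <stdlib.h>

#define TAB_BITS  4                     /* hypothetical "table size" setting */
#define TAB_SIZE  (1 << TAB_BITS)       /* 16 buckets */
#define TAB_MASK  (TAB_SIZE - 1)

struct conn {                           /* stand-in for a connection entry */
    unsigned int id;                    /* e.g. a hash of src/dst addresses */
    struct conn *next;                  /* chain within a bucket */
};

static struct conn *conn_tab[TAB_SIZE]; /* the fixed-size bucket array */

static void conn_add(unsigned int id)
{
    unsigned int h = id & TAB_MASK;     /* cheap index because size is 2^n */
    struct conn *c = malloc(sizeof(*c));
    c->id = id;
    c->next = conn_tab[h];              /* push onto this bucket's chain */
    conn_tab[h] = c;
}

int main(void)
{
    /* Insert far more "connections" than there are buckets. */
    for (unsigned int i = 0; i < 1000; i++)
        conn_add(i);

    unsigned int total = 0, longest = 0;
    for (unsigned int h = 0; h < TAB_SIZE; h++) {
        unsigned int len = 0;
        for (struct conn *c = conn_tab[h]; c; c = c->next)
            len++;
        total += len;
        if (len > longest)
            longest = len;
    }
    printf("%u buckets, %u entries, longest chain %u\n",
           TAB_SIZE, total, longest);   /* e.g. 16 buckets, 1000 entries */
    return 0;
}

If that's roughly how it works, a larger table just means shorter chains to
walk through on each lookup - a performance knob rather than a connection
limit, which is why we bumped ours up.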
I suspect some of the changes you're requesting are possible and would be
useful in many contexts (e.g. during failover scenarios).
Michael.
--
National & Local Web Cache Support R: G117
Manchester Computing T: 0161 275 7195
University of Manchester F: 0161 275 6040
Manchester UK M13 9PL M: Michael.Sparks@xxxxxxxxxxxxxxx
----------------------------------------------------------------------
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx