New Scheduling Algorithm

To: "'lvs-users@xxxxxxxxxxxxxxxxxxxxxx'" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: New Scheduling Algorithm
From: Dusten Splan <Dusten@xxxxxxxxxxxxxxxxxx>
Date: Mon, 19 May 2003 11:14:08 -0700
So here's the issue. We are doing 3500 requests per second to our web server
farm, and we would like to increase this without having to add web servers,
by balancing the load across the servers better.  Our farm is made up of 100
boxes with 6 different machine types: 
Dual P3 650 - 1GB
Dual P3 650 - .5GB
Dual P3 700 - 1GB
Dual P3 750 - 1GB
Dual P3 933 - 1GB
Dual P3 1133 - 1GB

The issue is that the .5GB machines handle far less than the higher end
ones. We have been playing with wrr and wlc, both of which have too many
shortcomings.  For example, wlc tends to hand out requests in bursts, sending
10-20 requests to a single server before moving on to the next one, and wrr
keeps giving new connections to overloaded servers; the number of active
connections can sometimes reach 800+ even with an adjusted weight.
So a suggestion for a new scheduling algorithm: a sort of weighted round
robin with a least-connection threshold. Basically, combine the round-robin
technique with a running average of active connections. If the weighted
connection count for a particular server is below the average, it gets the
connection; otherwise the next server is checked, and so on until one is
found below the average. Either way, the average is updated. It would need a
check to detect a full-circle scan (so it doesn't loop forever), but that
case would only occur after a sudden drop in connections, which is less of a
problem than the other way around.
I could probably alter one of the other methods pretty easily, but I'm not
sure of everywhere I would have to change things to add a new one.

If anyone can point me in the right direction that would be of great help.
I should state that I'm running kernel 2.4.20 with LVS 1.0.8. I really
don't want to run 2.5 because this is meant to be a production box.  
