Just a thought,
Concerning a fastest-response class of scheduling algorithm, one that would balance response time throughout the cluster by manipulating the number and type of connections to each node: would it be possible, and how, to develop an algorithm that works by evaluating the rate of change of the sum of the client TCP windows? My knowledge here is not exact, but is this feasible? Realistically, how fast does the environment between a client and a server on the Internet change?

Assume a TCP protocol like FTP, which for now we take to be primarily performing bulk data transfers, so that over a significant period of time it acts to continuously fill the window. This would result in a number of connections to a single node having filled (or filling) windows most of the time. The rate at which those windows are or become filled would depend on three factors: the capacity of the physical links along the path of the connection, the current congestion on those links, and the current congestion at the end server node (the node's ability to process data). If the rates of change over a node's connections are summed and averaged, a load value can be calculated for each node, and connections can then be intelligently balanced on that basis. This should also apply to data exchange types that do not resemble bulk FTP.
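In case it helps make the idea concrete, here is a rough user-space sketch, not real LVS code; the struct layout, the sampling interval, and the "higher averaged rate wins" choice are all my own assumptions:

/*
 * Sketch only: compute a per-node load value as the averaged rate of
 * change of window fill over that node's connections, then pick the
 * node whose connections appear to be moving data fastest.
 */
#include <stdio.h>
#include <stddef.h>

struct conn_sample {
    unsigned int win_prev;   /* window fill at the previous sample (bytes) */
    unsigned int win_now;    /* window fill at the current sample (bytes) */
};

struct node {
    const char *name;
    struct conn_sample *conns;
    size_t nconns;
};

/* Average rate of change of window fill across one node's connections. */
static double node_load_value(const struct node *n, double interval_sec)
{
    double sum = 0.0;
    for (size_t i = 0; i < n->nconns; i++) {
        double delta = (double)n->conns[i].win_now
                     - (double)n->conns[i].win_prev;
        sum += delta / interval_sec;          /* bytes per second */
    }
    return n->nconns ? sum / (double)n->nconns : 0.0;
}

/* Assumption: the node with the highest averaged rate gets the next connection. */
static const struct node *pick_node(const struct node *nodes, size_t count,
                                    double interval_sec)
{
    const struct node *best = NULL;
    double best_val = 0.0;
    for (size_t i = 0; i < count; i++) {
        double v = node_load_value(&nodes[i], interval_sec);
        if (!best || v > best_val) {
            best = &nodes[i];
            best_val = v;
        }
    }
    return best;
}

int main(void)
{
    struct conn_sample a[] = { { 1000, 9000 }, { 2000, 6000 } };
    struct conn_sample b[] = { { 4000, 4500 } };
    struct node nodes[] = {
        { "node-a", a, 2 },
        { "node-b", b, 1 },
    };
    const struct node *best = pick_node(nodes, 2, 1.0);
    printf("schedule next connection to %s\n", best->name);
    return 0;
}

In a real scheduler the samples would have to come from whatever window/ack state the director can actually observe per connection, and how often it can afford to sample is exactly the "how fast does the environment change" question above.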
I also wonder whether it would be possible to create a framework within LVS to enable balancing at packet granularity rather than at connection granularity. Would it be possible for LVS to record the necessary state information and manipulate ACKs in such a way that it could hand connections off between identical servers while the connection always appeared to be "up" to the client? Seems like all this would require lots of packet munging, so yeah...
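To give a feel for what "the necessary state information" might amount to, here is a minimal sketch (invented field names, not LVS code) of the kind of per-connection record and sequence-number translation the director would need if the replacement server starts from its own initial sequence number:

/*
 * Sketch only: state the director would track to hand a live connection
 * between identical real servers while rewriting sequence/ack numbers so
 * the client never notices the move.
 */
#include <stdio.h>
#include <stdint.h>

struct handoff_state {
    uint32_t client_ip;        /* client endpoint */
    uint16_t client_port;
    uint32_t cur_server_ip;    /* real server currently holding the connection */
    uint32_t seq_offset;       /* difference between old and new server sequence spaces */
    uint32_t last_client_ack;  /* where the new server would have to resume */
};

/* Server-to-client sequence numbers are shifted into the space the client expects. */
static uint32_t rewrite_seq(const struct handoff_state *st, uint32_t server_seq)
{
    return server_seq + st->seq_offset;   /* wraps naturally on uint32_t */
}

/* Client-to-server ACKs are mapped back into the new server's sequence space. */
static uint32_t rewrite_ack(const struct handoff_state *st, uint32_t client_ack)
{
    return client_ack - st->seq_offset;
}

int main(void)
{
    struct handoff_state st = { 0 };
    st.seq_offset = 0x10000u;             /* old vs. new server ISN difference */
    printf("server seq 100 appears to the client as %u\n",
           (unsigned)rewrite_seq(&st, 100));
    printf("client ack 65636 maps back to %u\n",
           (unsigned)rewrite_ack(&st, 65636));
    return 0;
}

This only covers the sequence-number side; the actual packet munging (checksums, options, making sure the new server really has identical application state) is where it would get ugly.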
Thanks, Tyrel