Hello all.
I'm designing a server that runs on a cluster. I'm
intending to use LVS as the initial low-cost solution (gotta say it looks wicked
:).
There is one thing that would make my server much
more efficient: the ability for a real server to tell the director to swap a
client from one real server to another when using connection
persistence.
If I can do this I will be able to reduce traffic
between the real servers by directing clients towards the node that contains the
cached data they are interested in.
Normally this would seem overkill but the cached
data is read/write so I would like to reduce locking between nodes.
Because of the way the cache is segmented, as soon
as the client is redirected to the correct node it will not have to leave the
node to read/write all its required data and all the locks will stay in
process.
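To make the segmentation idea concrete, here is a minimal sketch of how a client could be mapped to a "home" node by hashing. Everything here (the `preferred_node` function, the `NODES` list) is hypothetical and just illustrates the kind of mapping described above, not anything LVS provides:

```python
# Hypothetical sketch: map each client/key to the real server whose
# cache segment owns it, by hashing. Names here are illustrative.
import hashlib

NODES = ["real1", "real2", "real3"]  # real servers in the cluster

def preferred_node(client_key: str) -> str:
    """Return the real server whose cache segment owns this key."""
    digest = hashlib.md5(client_key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(NODES)
    return NODES[index]

# Every request for the same key lands on the same node, so its
# read/write cache entries and their locks stay in-process there.
```

Because the mapping is deterministic, once the director sends the client to its preferred node, all of that client's data and locks can stay local.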
Ideally I would like to do this mid-connection, but
I understand this is not possible (the TCP/IP session state wouldn't
exist on the node the client was redirected to).
The other option was to have the client connect to
the preferred node the next time.
From reading the docs it looks like this could be
implemented by making an API that updated the hash table used to store the
persistent connection mapping.
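As a rough illustration of that proposal (this is not an existing LVS interface, just a sketch of the data structure involved): the director holds a hash table mapping client to real server for persistent connections, and a small API could let a real server rewrite an entry so the client's next connection lands on the preferred node.

```python
# Hypothetical sketch of a director-side persistence table with an
# update hook. The class and method names are made up for illustration.
class PersistenceTable:
    def __init__(self):
        self._map = {}  # client address -> real server address

    def lookup(self, client):
        return self._map.get(client)

    def reassign(self, client, new_real_server):
        """Called (via the proposed API) by a real server that wants
        the director to send this client elsewhere next time."""
        self._map[client] = new_real_server

table = PersistenceTable()
table.reassign("10.0.0.5", "real2")  # real1 hands the client off
```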
A client lib that talked to the director to issue
basic commands could be developed. I (and anyone developing cluster stuff) could
then link against the library and manage the load a bit based on some
application-specific logic. The client lib could be extended over time to handle
things like down/up notification for specific ports, load notification (for use
in the scheduler), and any other ideas people can think of.
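A very rough sketch of what such a client library's interface might look like. Everything here (the `DirectorClient` class, the method names, the JSON wire format) is invented for illustration; nothing like this ships with LVS today:

```python
# Illustrative client-library interface for talking to the director.
# The transport is abstracted as a callable that delivers one message.
import json

class DirectorClient:
    """Linked into a real-server application; sends control
    commands to the director over some channel (socket, etc.)."""

    def __init__(self, send):
        self._send = send  # callable that delivers one message

    def reassign_client(self, client, real_server):
        # Ask the director to map this client to another real server.
        self._send(json.dumps({"cmd": "reassign",
                               "client": client, "dest": real_server}))

    def notify_service(self, port, up):
        # Down/up notification for a specific port.
        self._send(json.dumps({"cmd": "service",
                               "port": port, "up": up}))

    def report_load(self, load):
        # Load report the director's scheduler could weigh.
        self._send(json.dumps({"cmd": "load", "value": load}))

# Demonstration: collect messages in a list instead of sending them.
sent = []
client = DirectorClient(sent.append)
client.reassign_client("10.0.0.5", "real2")
client.notify_service(8080, up=False)
```

The application-specific logic lives entirely on the real-server side; the library just gives it a channel to the director.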
Another alternative might be to use SNMP to control
something like this.
I believe some of the high-end load-balancing systems
have a way to do this...
I would be interested in any suggestions on
incorporating server application logic into the load balancer.
Mark