Maximum Connections and Overflow

To: <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Maximum Connections and Overflow
From: "Ty Beede" <tybeede@xxxxxxxxxxxxx>
Date: Wed, 23 Feb 2000 17:15:34 -0800
    This is a hack to the ip_vs_wlc.c scheduling algorithm. It is currently implemented in a quick, ad hoc fashion. Its purpose is to support limiting the total number of connections to a real server. Currently it is implemented using the weight value as the upper limit on the number of activeconns (connections in an established TCP state). This is a very simple implementation and only took a few minutes after reading through the source. I would like, however, to develop it further.
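    For illustration, here is a minimal standalone sketch of the idea. The struct and function names below are made up for the example; the real change lives inside ip_vs_wlc_schedule() in ip_vs_wlc.c and uses the kernel's own types and destination list:

    #include <stddef.h>

    /* Made-up stand-ins for the kernel structures, just for the sketch. */
    struct real_server {
        int weight;       /* doubles as the cap on established connections */
        int activeconns;  /* connections in an established TCP state */
        int inactconns;   /* connections in other states */
    };

    /* Weighted least-connection choice with the hack added: any server
     * whose activeconns has reached its weight is skipped entirely. */
    struct real_server *wlc_schedule(struct real_server *srv, size_t n)
    {
        struct real_server *least = NULL;
        size_t i;

        for (i = 0; i < n; i++) {
            struct real_server *d = &srv[i];

            if (!(d->weight > d->activeconns))  /* server is full: exclude */
                continue;

            /* standard wlc: minimize (activeconns*256 + inactconns)/weight,
             * compared by cross-multiplication to avoid division */
            if (least == NULL ||
                (d->activeconns * 256 + d->inactconns) * least->weight <
                (least->activeconns * 256 + least->inactconns) * d->weight)
                least = d;
        }
        return least;  /* NULL means every real server is at its limit */
    }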
    Due to its simple nature it will not function in several types of environments: those based on connectionless protocols (UDP; these use the inactconns variable to keep track of things, so simply change the activeconns variable in the weight check to inactconns for UDP), and it may introduce complications when persistence is in use. The current algorithm simply checks that weight > activeconns before including a server in the standard wlc scheduling. This works for my environment, but it could be changed to, say, (weight * 50) > (activeconns * 50) + inactconns to take inactconns into account while keeping activeconns more important in the decision.
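    Spelled out, with the same hypothetical struct real_server as in the sketch above, the eligibility checks discussed here would look something like:

    /* TCP: gate on connections in an established state */
    int eligible_tcp(const struct real_server *d)
    {
        return d->weight > d->activeconns;
    }

    /* UDP: there is no established TCP state, so gate on inactconns */
    int eligible_udp(const struct real_server *d)
    {
        return d->weight > d->inactconns;
    }

    /* Blended: count inactconns too, but weight established
     * connections 50 times more heavily in the decision */
    int eligible_blended(const struct real_server *d)
    {
        return d->weight * 50 > d->activeconns * 50 + d->inactconns;
    }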
    Currently the greatest weight value a user may specify is approximately 65000, independent of this modification. Most importantly, as long as the user keeps the weight values correct for the total number of connections and in proportion to one another, things should function as expected.
    In the event that the cluster is full (all real servers have maxed out), some form of overflow control may be necessary, or the client's end will hang. I haven't tested this idea, but it could be implemented simply by specifying the overflow server last, after the real servers, using the ipvsadm tool. This works because each real server added with ipvsadm is put on a list, with the last one added being last on the list. The scheduling algorithm traverses this list linearly from start to finish; if all the other servers are maxed out, the last one is the overflow and will be the only one left to send traffic to.
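    As a concrete example of that ordering (untested, and the addresses and weights below are made up), the overflow server is simply the last real server added:

    # virtual service using the patched wlc scheduler
    ipvsadm -A -t 192.168.1.1:80 -s wlc

    # real servers; under this hack the weight is also the connection cap
    ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.2:80 -w 500
    ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.3:80 -w 500

    # overflow server, added last so it sits at the end of the list
    ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.4:80 -w 2000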
    Anyway, this is just a little hack; read the code and it should make sense. It has been included as an attachment. If you would like to test it, simply replace the old ip_vs_wlc.c scheduling file in /usr/src/linux/net/ipv4 with this one, compile it in, and set the weight on the real servers to the maximum number of connections in an established TCP state, or modify the source to your liking.
 
 

Attachment: ip_vs_wlc.c
Description: Binary data




