Roberto Nibali <ratz@xxxxxx> wrote:
> Hi guys,
>
> While polishing my threshold limitation and server pool patch, I was
> wondering why the WLC and SED (a slightly improved WLC) schedulers do
> not use the ip_vs_estimator list, which is running all the time anyway,
> to better adjust their ip_vs_*_dest_overhead() methods. Currently we
> have the following situation:
>
> WLC
> ---
> static inline unsigned int
> ip_vs_wlc_dest_overhead(struct ip_vs_dest *dest)
> {
> 	/*
> 	 * We think the overhead of processing active connections is 256
> 	 * times higher than that of inactive connections on average. (This
> 	 * factor of 256 might not be accurate; we will change it later.)
> 	 * We use the following formula to estimate the overhead now:
> 	 *	dest->activeconns*256 + dest->inactconns
> 	 */
> 	return (atomic_read(&dest->activeconns) << 8) +
> 		atomic_read(&dest->inactconns);
> }
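
(For context, the overhead is then used in the scheduler's selection loop
to pick the server that minimises overhead/weight; the division is avoided
by cross-multiplying with the weights. Roughly, with the overload and
zero-weight checks trimmed:

	doh = ip_vs_wlc_dest_overhead(dest);
	/* least/loh track the best candidate found so far */
	if (loh * atomic_read(&dest->weight) >
	    doh * atomic_read(&least->weight)) {
		least = dest;
		loh = doh;
	}

so any change to the overhead formula only has to keep that ratio
meaningful relative to the weights.)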
>
>
> SED
> ---
> * The SED algorithm attempts to minimize each job's expected delay until
> * completion. The expected delay that the job will experience is
> * (Ci + 1) / Ui if sent to the ith server, in which Ci is the number of
> * jobs on the ith server and Ui is the fixed service rate (weight) of
> * the ith server. The SED algorithm adopts a greedy policy: each job
> * does what is in its own best interest, i.e. it joins the queue which
> * would minimize its expected delay of completion.
>
> static inline unsigned int
> ip_vs_sed_dest_overhead(struct ip_vs_dest *dest)
> {
> 	/*
> 	 * We only use the active connection number in the cost
> 	 * calculation here.
> 	 */
> 	return atomic_read(&dest->activeconns) + 1;
> }
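
(A quick numeric illustration of the difference, with made-up numbers: take
two idle servers with weights 3 and 1. WLC computes an overhead of 0 for
both and expresses no preference, while SED computes (0 + 1)/3 vs. (0 + 1)/1
and sends the next job to the faster box. The "+ 1" is what lets the weights
matter even when the connection counts are equal.)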
>
> I wonder whether feeding the rate estimator's stats into these overhead
> functions would help fight load imbalance more sharply? OTOH, an
> imbalance can also be corrected too quickly, resulting in a resonance
> catastrophe :).
>
> Just wondering and pondering ...
I guess the best thing would be to play with it and find out.
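
As a very rough starting point (untested; the function name is a
placeholder, the field names assume dest->stats carries the smoothed rates
maintained by ip_vs_est.c, and the scaling factor is a pure guess), one
could fold the estimator's packet rate into the WLC-style overhead along
these lines:

	/*
	 * Sketch only: classic WLC cost plus the current inbound packet
	 * rate as a crude measure of how busy the real server is right
	 * now.  The >> 4 merely keeps the rate from swamping the
	 * connection count; a sensible scaling would have to be found
	 * experimentally.
	 */
	static inline unsigned int
	ip_vs_foo_dest_overhead(struct ip_vs_dest *dest)
	{
		unsigned int cost;

		cost = (atomic_read(&dest->activeconns) << 8) +
		       atomic_read(&dest->inactconns);

		return cost + (dest->stats.inpps >> 4);
	}

Whether that damps the imbalance or just makes the scheduler chase its own
tail is the resonance question, so it really needs measuring under real
traffic.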
--
Horms