> Hello,
> we've been using LVS for some time with great success. We've recently
> upgraded the software and we're using the quiescent feature. This works
> out nicely in that server weights are dropped to zero and reset to their
> original values automagically when the servers are available again. We
> also use the fallback feature with a standard maintenance page.
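The quiesce-and-fallback cycle described above boils down to a handful of ipvsadm operations; the addresses, ports and scheduler below are hypothetical, and the poster's (unnamed) monitoring software presumably issues something equivalent:

```shell
# Hypothetical VIP and real servers -- a sketch of what the monitoring
# software is presumably doing under the hood.
VIP=192.168.0.100:80

# Healthy state: virtual service with 60s persistence, two real servers.
ipvsadm -A -t $VIP -s wlc -p 60
ipvsadm -a -t $VIP -r 10.0.0.1:80 -m -w 1
ipvsadm -a -t $VIP -r 10.0.0.2:80 -m -w 1

# A real server fails its health check: quiesce it (weight 0) instead of
# removing it, so its established connections are not torn down.
ipvsadm -e -t $VIP -r 10.0.0.1:80 -m -w 0
ipvsadm -e -t $VIP -r 10.0.0.2:80 -m -w 0

# ...and the fallback (maintenance page) server is added with weight 1.
ipvsadm -a -t $VIP -r 10.0.0.99:80 -m -w 1
```

When the real servers recover, the monitoring software restores their original weights and quiesces the fallback the same way.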
Out of curiosity: What software are you using to do this?
> The combination of these two features creates a small problem. When the
> fallback server is added to any of the virtual services, its weight is
> one, and users see our maintenance page. However, when the real server(s)
> return, the fallback server's weight is reduced to zero but some users
> still get the fallback server. Our persistence is set to 60, but I'm
If you have persistence then this is normal: even though the overflow
(fallback) server is quiesced, it will happily accept further connections
from clients that were assigned to it back when its weight was not 0. As
long as those clients keep hitting the reload button, their template
entries are reset to 60 seconds again.
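The lingering template entries described here can be observed directly; a sketch, assuming an ipvsadm version that supports the connection listing:

```shell
# List the current connection table, including persistence template
# entries; -c shows connection entries, -n suppresses name resolution.
ipvsadm -L -c -n

# Template entries carry a remaining timeout (up to the persistence
# setting, 60s in this thread) and pin each client to the real server it
# was originally assigned to -- including a quiesced fallback server.
# Each new request from such a client resets its template back to 60s,
# which is why traffic can keep arriving long after the weight is 0.
```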
> fairly certain this isn't the issue, since our fallback server keeps
> getting sporadic requests (for any number of days) until I manually
> remove it from the virtual service.
That, however, would be a bug. Just to make sure: you're saying that your
fallback server still receives connections despite the fact that it is
quiesced? Could you provide us with an ipvsadm -L -n output when
everything is normal and another when all RS are quiesced, please?
> Is it possible to have the fallback server automatically removed when at
> least one of the real servers' weights is reset to its original value?
Not in the current implementation of LVS as we do not have the notion of
a fallback server. But Horms and I are working on providing such code in
the near future.
Which kernel would you like to see such a feature in? For 2.2.x I already
have a different approach ready: the hprio scheduler, announced roughly
three weeks ago.
> I haven't tested using quiescent=yes for the real servers and
> quiescent=no for the fallback, and I don't know if that's even a
> possibility.
I'm afraid I do not know what you're talking about here. If you use a
third party software to handle the logic then you should say so ;).
> Thoughts and opinions appreciated.
Of course there is always a dirty hack to make it work. I've also hacked
together a small patch for one of my load balancers where I needed an
atomic failover/failback to the overflow server. Your problem could be
solved the same way, by flushing the template entries in the masquerading
table (not possible from user space, I'm afraid).
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc