Hi,
I recently posted about my experience with persistence and the strange
behaviour of LVS assigning new connections to a 'busy' server even though
there might have been servers with no connections at all.
Maybe this is not a bug but rather a timing problem: I noticed an optimal
distribution across all rservers when all connections arrived within a
short time. Referring to this, I have some questions:
If the (only) connection has timed out (TCP_WAIT==0), is the server then
treated as 'free' and eligible for the next incoming request, even though
an (active) persistence template involving this server still exists?
The point is that if the expired client decides to continue, it will get
the same rserver as before for persistence reasons. That rserver is then
overloaded in proportion to all the others, because it carries both the
new connection AND the persistent one.
I hope it's clear what I mean.
A better (or at least safer) method in my case seems to be to not only
count connections but also check for existing persistence templates and,
if necessary, pick another real server to avoid the scenario above.
This won't catch every situation (i.e. when all rservers hold persistence
templates), I know, but I think it's a better way to guarantee the sole
usage of a server (in the case of #conns < #servers).
We are not dealing with thousands of connections, but with a few
computationally intensive / expensive ones.
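To make the idea concrete, here is a minimal user-space sketch in plain C.
It is not the real ip_vs kernel code; the struct fields and the example
numbers are assumptions purely for illustration. It picks the real server
with the fewest (active connections + persistence templates) instead of
looking at active connections alone:

  /* Sketch only: pick the real server with the lowest combined load of
   * active connections and still-valid persistence templates. */
  #include <stdio.h>

  struct rserver {
      const char *name;
      int active_conns;   /* established connections (assumed field) */
      int pers_templates; /* persistence templates still valid (assumed field) */
  };

  /* Return the index of the least loaded server under the combined metric. */
  static int pick_rserver(const struct rserver *rs, int n)
  {
      int best = 0;
      for (int i = 1; i < n; i++) {
          int load_i    = rs[i].active_conns + rs[i].pers_templates;
          int load_best = rs[best].active_conns + rs[best].pers_templates;
          if (load_i < load_best)
              best = i;
      }
      return best;
  }

  int main(void)
  {
      /* rs1's only connection has expired but it still holds a
       * persistence template, so rs2 should get the next client. */
      struct rserver rs[] = {
          { "rs1", 0, 1 },
          { "rs2", 0, 0 },
      };
      printf("next connection goes to %s\n", rs[pick_rserver(rs, 2)].name);
      return 0;
  }

With a rule like this, an rserver whose only connection has timed out but
which still holds a persistence template would not look completely 'free',
so a new client would first be directed to a truly idle rserver.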
Could this be done, or do you think it's not feasible (maybe too slow...),
or is the whole idea quite out of the question?
Greetings
holger
--
HIS GmbH Hannover Holger Kettler
kettler at his de 0xCBBE85FB
0511-1220-215