Hello,
On Tue, 10 Sep 2002, Matthias Krauss wrote:
> In addition I wrote a little failover app running on an independent host,
> which telnets to the Director and removes the RIP or sets its weight to "0"
> if one of the RIPs fails.
Note that setting the RS weight to 0 is treated as "temporarily
stopped". The existing connections continue to work. The assumption is
that the RS weight is set to 0 some time before deleting the RS. This
way we give all connections/sessions time to terminate gracefully.
Sometimes weight 0 can be used by the health checks as a step before
deleting the RS. Such a two-step real server shutdown can avoid temporary
unavailability of the real server: a graceful stop. At least the health
checks can choose whether to stop the RS before deleting it.
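For example, with a hypothetical virtual service 192.168.0.1:80 and
real server 10.0.0.1 (addresses only for illustration), the two-step
shutdown could look like this:

    # step 1: quiesce the RS; existing conns keep working
    ipvsadm -e -t 192.168.0.1:80 -r 10.0.0.1:80 -w 0

    # step 2: later, when the sessions had time to finish, remove the RS
    ipvsadm -d -t 192.168.0.1:80 -r 10.0.0.1:80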
If the RS is deleted, the traffic for the existing conns is
stopped (and if the expire_nodest_conn sysctl var is set, the conn entries
are even deleted). Of course, if for some connections we don't see packets,
these conns can remain in the table until their timers expire.
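The sysctl can be toggled from proc, for example:

    # expire conn entries whose real server was removed
    echo 1 > /proc/sys/net/ipv4/vs/expire_nodest_conn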
For non-persistent virtual servers the new conns are always
scheduled to "started" real servers. For persistent VS the handling is
different; there is an issue with the current persistence handling when
the RS weight is set to 0: new connections can still be scheduled to a
"stopped" RS if "affinity" for this client exists. That is good for
setups that prefer to serve even new connections from a client that is
finishing its session (multiple connections) on the stopped RS. It is
not good for setups that expect the traffic to stop: bad clients can
decide never to stop their traffic and to keep the conns open.
So, sometimes it can be a bad idea for the health checks to
play only with weight 0; sometimes it is preferable to delete the RS,
usually when persistent virtual servers are used. Or maybe we can
implement some control mechanism for such options. Any ideas for
tunable parameters?
If the RS is deleted, then all new connections to a persistent
VS are scheduled to a new real server, ignoring the previous "affinity"
for this "client".
> So far so good. If I now set a 3 hour persistence timeout, a connection to
> e.g. RIP1 is established and RIP1 fails, and the RIP1 address is removed or
> its weight set to "0", then I still see RIP1 in the IPVS connection list
> counting down its long expire timeout.
> Doing ipvsadm -C clears most of the entries, but RIP1 still shows up
> with "state NONE" and its high expire time even though the host was removed.
>
> Is there a way to clear a particular host completely from the connection
> table, or another workaround? (I hope I missed nothing in the howtos.)
It is true that we don't have much (or any) control over the conn
expiration process. Maybe the devel IPVS 1.1.0 will solve this problem.
Currently, the only way to forget about all connections is to unload
the ipvs module. Maybe a new conn expiration method can allow the conn
entries to be deleted from any context.
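Until then, something like this is the only complete cleanup (the
module names can differ between kernel versions, so take it only as
a sketch):

    # flush the service table and check what conn entries remain
    ipvsadm -C
    ipvsadm -L -c -n

    # forgetting all connections requires unloading the module
    rmmod ip_vs_wlc 2>/dev/null
    rmmod ip_vs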
> Many many thanks in advance
> Matthias
Regards
--
Julian Anastasov <ja@xxxxxx>