I just saw a patch that sets the proper timeout on the backup for
received connections. I want to share my opinion, since I have been
messing with that very same code for the last several days.
I think there is a very good reason for setting the timeout to 3 minutes
for all received connections. IMHO setting a timeout of
[IP_VS_TCP_S_ESTABLISHED] = 15*60*HZ is wrong, since AFAIK there is no
way for the master to inform the backup that a connection has been
closed, is in FIN_WAIT, or whatever. Connection syncing is triggered by
packet count, isn't it? So imagine a TCP connection lasting 3 seconds
which then hangs on the backup for 15 more minutes. Now imagine 1000
connections lasting several seconds on the master, each hanging for 15
minutes on the backup. I think this timeout should be kept reasonably
low, to keep the number of hanging connections minimal, and reasonably
high, so it does not expire before the next update arrives.
However, if the backup takes over, it will set the proper timeouts, as
defined in "static int xxx_timeouts[IP_VS_XXX_S_LAST+1]", for all the
connections.
Well, I might be wrong, but I just wanted to draw the attention of the
people who know how everything works to this potential problem :)