Re: [PATCH] Invalidate expired persistance templates

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [PATCH] Invalidate expired persistance templates
Cc: ja@xxxxxx
Cc: Horms <horms@xxxxxxxxxxxx>
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Fri, 11 Nov 2005 09:11:48 +0100
ADDENDUM:

allow the timer handler for the same cp to run on many CPUs. There is no
reason to keep it this way, especially now that we do not want
cp->timeout to be 0. Then ip_vs_conn_expire() for 2.4 will use
cp->timeout the same way 2.6 does. We must make sure cp->timeout is
never set to 0 for any reason, by checking the code in 2.4 and 2.6.
What about resetting the timer to (tcp timeout - persistence template timeout)?
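A minimal user-space sketch of that arithmetic (not kernel code; the function name and clamping behaviour are my assumptions, the real change would live around ip_vs_conn_expire()):

```c
/* Hypothetical sketch: compute the value the template timer would be
 * re-armed with, per the "(tcp timeout - persistence template timeout)"
 * suggestion above.  The fallback when the difference is not positive
 * is an assumption, not part of the proposal. */
static unsigned long tmpl_timer_reset(unsigned long tcp_timeout,
                                      unsigned long tmpl_timeout)
{
        /* Only meaningful while the TCP timeout exceeds the template
         * timeout; otherwise keep the template timeout itself so the
         * timer is never armed with 0. */
        if (tcp_timeout > tmpl_timeout)
                return tcp_timeout - tmpl_timeout;
        return tmpl_timeout;
}
```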

Actually, come to think of it, it does not even make sense to allow a user to set a persistence template timeout lower than the TCP FIN_WAIT timeout. Maybe solving the issue from this end would require no kernel patching at all.
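That user-space check could look roughly like this (a sketch under assumptions: the function name is made up, and the 120-second FIN_WAIT default is only the stock IPVS TCP table value, not necessarily what a given box runs):

```c
#include <stdbool.h>

/* Assumed default: IPVS ships FIN_WAIT at 2*60 seconds in its TCP
 * state-transition table; a real tool should read the live value. */
#define IP_VS_TCP_FIN_TIMEOUT 120

/* Hypothetical ipvsadm-side validation: refuse a persistence timeout
 * shorter than the TCP FIN_WAIT timeout, so the kernel never sees an
 * expired template that still has controlled connections in FIN_WAIT. */
static bool persist_timeout_valid(unsigned int persist_timeout)
{
        return persist_timeout >= IP_VS_TCP_FIN_TIMEOUT;
}
```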

While we're at it, I would really like to discuss the FIN_WAIT setting, which IMHO is just too high for LVS_DR. We have a customer whose setup is similar (in its core business logic) to ebay.com, but with higher demands during hype situations, where we get hit by 100,000 ADSL customers at once. Since the application is a bit unfortunate regarding session handling, we end up with a lot of sockets in TIME_WAIT/FIN_WAIT states, and even with tcp_tw_recycle and tcp_tw_reuse we could not handle the load until we sharply cut down the IPVS TCP state transition timeouts. I would love to have an ipvs_hype_timings proc-fs entry backed by an additional kernel-internal state transition table (next to the existing normal and tcp_defense tables). LVS_DR just isn't very optimal with regard to hypes :).
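To make the proposal concrete, here is a sketch of such an extra table. Everything in it is illustrative: the table names, the struct layout, and the "hype" values are assumptions; only the normal-table numbers follow the stock IPVS TCP defaults (ESTABLISHED 15*60, FIN_WAIT 2*60, TIME_WAIT 2*60 seconds):

```c
/* Hypothetical third timeout table ("hype" mode) next to the existing
 * normal and tcp_defense tables, selectable via a proc-fs knob such as
 * the ipvs_hype_timings entry suggested above. */
enum ip_vs_tbl { TBL_NORMAL, TBL_DEFENSE, TBL_HYPE };

struct tcp_timeouts {
        unsigned int established;   /* seconds */
        unsigned int fin_wait;
        unsigned int time_wait;
};

static const struct tcp_timeouts tables[] = {
        [TBL_NORMAL]  = { 15 * 60, 2 * 60, 2 * 60 }, /* stock defaults  */
        [TBL_DEFENSE] = {  8 * 60,     60,     60 }, /* assumed values  */
        [TBL_HYPE]    = {      60,     10,     10 }, /* sharply cut, as
                                                        described above */
};

static unsigned int fin_wait_timeout(enum ip_vs_tbl t)
{
        return tables[t].fin_wait;
}
```

The point of a separate table rather than tweaked defaults is that an operator could flip to it only for the duration of a hype and fall back to the normal table afterwards.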

Regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
