Roberto Nibali <ratz@xxxxxxxxxxxx> wrote:
> ADDENDUM:
>
>>> allow the timer handler for the same cp to run on many CPUs. There
>>> is no reason to keep it this way, especially now that we don't want
>>> cp->timeout to be 0. Then ip_vs_conn_expire() for 2.4 will use
>>> cp->timeout in the same way as 2.6. We must be sure cp->timeout is
>>> not set to 0 for some reason, by checking the code in 2.4 and 2.6.
>>
>> What about resetting the timer to (tcp timeout - persistence
>> template timeout)?
>
> Actually, come to think of it, it does not even make sense to allow a
> user to set a persistence template timeout lower than the tcp fin_wait
> timeout. Maybe solving the issue from that end would require no kernel
> patching at all.
>
> While we're at it, I would really like to discuss the fin_wait
> setting, which IMHO is just too high for LVS_DR. We've got a customer
> with a setup similar (in its core business logic) to ebay.com, but
> with higher demands during hype situations, where we get hit by
> 100'000 ADSL customers at once. Since the application is a bit
> unfortunate regarding session handling, we have a lot of sockets in
> TIME_WAIT/FIN_WAIT states, and even with tcp_tw_recycle and
> tcp_tw_reuse we couldn't handle this load until we sharply cut down
> the IPVS tcp state transition timeouts. I would love to have an
> ipvs_hype_timings proc-fs entry backed by an additional
> kernel-internal transition table (next to the existing normal and
> tcp_defense state transition tables). LVS_DR just isn't very optimal
> with regard to hypes :).
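On the cp->timeout point quoted above, something like the following is
what I have in mind -- a minimal sketch, not a tested patch; the
ip_vs_conn_rearm() helper name and the 60*HZ fallback are my own
inventions, not something in the tree:

static inline void ip_vs_conn_rearm(struct ip_vs_conn *cp)
{
	unsigned long timeout = cp->timeout;

	/* A zero timeout would re-arm the timer to fire on the very
	 * next tick, so force a sane default and complain loudly. */
	if (unlikely(timeout == 0)) {
		printk(KERN_ERR "IPVS: conn timeout is 0, using default\n");
		timeout = 60*HZ;
	}
	mod_timer(&cp->timer, jiffies + timeout);
}

Both the 2.4 and 2.6 expire paths could then call this instead of
open-coding the mod_timer() arithmetic.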
As I understand it, there is code in the kernel (or at least there used
to be) to allow all the timeouts to be configured, but the user-space
half was never implemented. It would be good to rectify this situation.
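To be precise, the only user-space plumbing I'm aware of is the coarse
three-value interface behind IP_VS_SO_SET_TIMEOUT (what ipvsadm --set
drives, if I remember correctly); the full per-state transition tables
have no user-space knob at all. A rough sketch of that existing path,
with the struct and constants copied from memory out of the kernel's
ip_vs.h, values in seconds and error handling trimmed:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Mirrors the kernel's net/ip_vs.h definitions -- in practice use the
 * copy of that header shipped with the ipvsadm sources. */
#define IP_VS_BASE_CTL		(64+1024+64)
#define IP_VS_SO_SET_TIMEOUT	(IP_VS_BASE_CTL+10)

struct ip_vs_timeout_user {
	int	tcp_timeout;
	int	tcp_fin_timeout;
	int	udp_timeout;
};

int main(void)
{
	struct ip_vs_timeout_user to = {
		.tcp_timeout	 = 900,	/* ESTABLISHED */
		.tcp_fin_timeout = 30,	/* FIN_WAIT, cut down hard */
		.udp_timeout	 = 300,
	};
	/* ipvsadm talks to IPVS over a raw socket; needs CAP_NET_ADMIN */
	int sd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);

	if (sd < 0 || setsockopt(sd, IPPROTO_IP, IP_VS_SO_SET_TIMEOUT,
				 &to, sizeof(to)) < 0) {
		perror("IP_VS_SO_SET_TIMEOUT");
		return 1;
	}
	return 0;
}

Extending this setsockopt (or adding the proc entries suggested above)
would be the natural place for the missing per-state half.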
--
Horms