Hello Andreas,
You can't :) IP_VS_SO_GET_TIMEOUTS is not implemented in ipvsadm, or I'm
blind; also, the proc-fs entries related to this are not exported. I've
written a patch to re-instate the proper settings in proc-fs, however
only for 2.4.x kernels. Julian has recently proposed a very granular
timeout framework, but none of us has had the time or the impulse to
implement it. For our customers I needed the ability to instrument all
Does that mean we will get timeout parameters _per service_ instead of
global ones? Hooray! When can I test? :)
Julian proposed the following framework:
http://www.ssi.bg/~ja/tmp/tt.txt
So if you want to test, the only thing you have to do is fire up your
editor of choice :). Ok, honestly, I don't know when this will be done
because it's quite some work and most of us developers here are pretty
busy with other daily activities. So unless there is a glaring issue
with the timers as implemented right now, chances are slim that this gets
implemented. Of course I could fly down to Julian's place over the
week-end and we could implement it together; want to sponsor it? ;).
eh, I got it from /proc:
# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
Ahh, but that's not 2h + 9*75s :).
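To spell out the arithmetic: tcp_keepalive_time only determines when the
kernel starts probing an idle connection; on top of that you get
tcp_keepalive_probes unanswered probes spaced tcp_keepalive_intvl seconds
apart before the peer is declared dead. A minimal C sketch (my own
illustration, assuming the standard keepalive sysctls with their defaults
of 7200/75/9) that computes the effective figure:

/* Sketch: worst-case time before an idle peer is declared dead,
 * derived from the three keepalive sysctls on a Linux host. */
#include <stdio.h>

static long read_proc_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long val = -1;

    if (f) {
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    long idle   = read_proc_long("/proc/sys/net/ipv4/tcp_keepalive_time");   /* 7200 by default */
    long intvl  = read_proc_long("/proc/sys/net/ipv4/tcp_keepalive_intvl");  /* 75 by default   */
    long probes = read_proc_long("/proc/sys/net/ipv4/tcp_keepalive_probes"); /* 9 by default    */

    if (idle < 0 || intvl < 0 || probes < 0) {
        fprintf(stderr, "could not read keepalive sysctls\n");
        return 1;
    }

    /* 7200s idle + 9 unanswered probes, 75s apart => 7875s until the
     * connection is torn down, not just the 7200s you saw in /proc. */
    printf("dead peer detected after roughly %ld seconds\n",
           idle + probes * intvl);
    return 0;
}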
And yes, I am talking about Linux clients. I didn't check for Windows yet.
[FYI: http://cryp.to/publications/masquerading-idle-connections/ to see
It's a bit of an odd (and certainly dated with regard to correctness and
usefulness) paper, since the authors (also) seem to neglect the
difference between a socket connection and a TCP stream. They do seem to
mix up a couple of things regarding TCP timers that have different
semantic meanings.
what I mean with the probes: "... whether a connection needs a
keep-alive packet to be sent ..." ] But it's dependent on the
application; if it uses the feature, it has to be set when opening a socket.
There are a lot of TCP timers in the Linux kernel and they all have
different semantic meanings. There is the TCP timeout timer for sockets
related to locally initiated connections, then there is a set of TCP
timeouts for the connection tracking table, which on my desktop system
for example has the following settings:
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_close:10
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_close_wait:60
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established:432000
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_fin_wait:120
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_last_ack:30
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_syn_recv:60
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_syn_sent:120
/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_time_wait:120
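These conntrack entries are plain writable proc files, so they can be
re-tuned at runtime. As a quick sketch (mine, not an official recipe; it
assumes root and that the ip_conntrack module is loaded, and 7200 is just
an example value), one could lower the established timeout from its 5-day
default so that idle mappings don't pile up in the table:

/* Sketch: shorten the conntrack timeout for ESTABLISHED flows.
 * Assumes the ip_conntrack proc entry exists and we run as root. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    /* Stock default is 432000s (5 days); write e.g. 7200s instead. */
    fprintf(f, "7200\n");
    fclose(f);
    return 0;
}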
And of course we have the IPVS TCP settings, which would look as follows
(if they hadn't been disabled in the core :)):
/proc/sys/net/ipv4/vs/tcp_timeout_established:900
/proc/sys/net/ipv4/vs/tcp_timeout_syn_sent:120
/proc/sys/net/ipv4/vs/tcp_timeout_syn_recv:60
/proc/sys/net/ipv4/vs/tcp_timeout_:900
[...]
unless you enabled tcp_defense, which changes those timers again. And
then of course we have other in-kernel timers, which influence those
timers mentioned above.
However, the aforementioned timers regarding packet filtering, NAPT and
load balancing are meant as a means to map expected real TCP flow
timeouts. Since there is no socket (as in an endpoint) involved when
doing either netfilter or IPVS, you have to guess what the TCP flow
in-between (where your machine is "standing") is doing, so you can
continue to forward, rewrite, mangle, whatever, the flow, _without_
disturbing it. The timers are used as timeouts for the table mappings of
TCP states. If we didn't have them, mappings would stay in the kernel
forever and eventually we'd run out of memory. If we have them wrong, it
might occur that a connection is aborted prematurely by our host, for
example yielding those infamous ssh hangs when connecting through a
packet filter.
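To make that "table mapping" idea a bit more concrete, here is a toy
sketch (my own illustration only, not the actual netfilter or IPVS code)
of what such a per-state timeout table boils down to: every tracked flow
carries a TCP state, and each state transition re-arms the entry's expiry
from a table like the one dumped from /proc above:

/* Illustrative sketch only; not the real netfilter/IPVS implementation. */
#include <stdio.h>
#include <time.h>

enum tcp_track_state {
    TS_SYN_SENT, TS_SYN_RECV, TS_ESTABLISHED, TS_FIN_WAIT,
    TS_CLOSE_WAIT, TS_LAST_ACK, TS_TIME_WAIT, TS_CLOSE, TS_MAX
};

/* Timeouts in seconds, mirroring the /proc values listed above. */
static const long tcp_timeouts[TS_MAX] = {
    [TS_SYN_SENT]    = 120,
    [TS_SYN_RECV]    = 60,
    [TS_ESTABLISHED] = 432000,   /* 5 days */
    [TS_FIN_WAIT]    = 120,
    [TS_CLOSE_WAIT]  = 60,
    [TS_LAST_ACK]    = 30,
    [TS_TIME_WAIT]   = 120,
    [TS_CLOSE]       = 10,
};

struct flow_entry {
    enum tcp_track_state state;
    time_t expires;              /* when this mapping may be reaped */
};

/* Re-arm the mapping's expiry after an observed state transition. */
static void flow_set_state(struct flow_entry *fe, enum tcp_track_state s)
{
    fe->state = s;
    fe->expires = time(NULL) + tcp_timeouts[s];
}

int main(void)
{
    struct flow_entry fe;

    flow_set_state(&fe, TS_ESTABLISHED);
    printf("mapping expires after %ld seconds of silence\n",
           (long)(fe.expires - time(NULL)));
    return 0;
}

Once the expiry passes without further packets, the mapping is reaped and
any later packet of that flow is no longer recognised; that is exactly
the kind of premature abort described above.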
The TCP keepalive timer setting you've mentioned, on the other hand, is
per socket, and as such only has an influence on locally created or
terminated sockets. A quick skim of socket(2) and socket(7) reveals:
[socket(2) excerpt]
The communications protocols which implement a SOCK_STREAM
ensure that data is not lost or duplicated. If a piece of
data for which the peer protocol has buffer space cannot
be successfully transmitted within a reasonable length of
time, then the connection is considered to be dead. When
SO_KEEPALIVE is enabled on the socket the protocol checks
in a protocol-specific manner if the other end is still
alive.
[socket(7) excerpt]
These socket options can be set by using setsockopt(2) and
read with getsockopt(2) with the socket level set to
SOL_SOCKET for all sockets:
SO_KEEPALIVE
Enable sending of keep-alive messages on connection-oriented
sockets. Expects an integer boolean flag.
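So, tying this back to your 7200 seconds: keepalive is something the
application itself has to switch on per socket, roughly like this (a
sketch; SO_KEEPALIVE is portable, while the TCP_KEEPIDLE, TCP_KEEPINTVL
and TCP_KEEPCNT per-socket overrides shown are Linux-specific, and
without them the global sysctls apply):

/* Sketch: an application enabling keepalive on its own socket. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1, idle = 600, intvl = 60, cnt = 5;

    if (sock < 0) {
        perror("socket");
        return 1;
    }

    /* Ask the kernel to probe this connection when it goes idle. */
    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    /* Optional Linux-specific per-socket overrides of the global
     * keepalive sysctls: start probing after 600s idle, then send
     * 5 probes spaced 60s apart. */
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));

    /* ... connect(), read(), write() as usual ... */
    return 0;
}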
I somewhat miss this view in your cited reference, though I admit I did
not read it through thoroughly. My apologies for not being more
specific; I don't have more time right now.
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc