> The problem has been solved. It's something related to iptables.
As expected, it's netfilter and the connection tracking. If you want
high-performance load balancing, do _not_ use netfilter, and especially
not the connection tracking. It simply does not scale: merely loading
ip_conntrack into the kernel makes your packet rate drop by 60 kpps on
a 1 Gbit/s connection.
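A quick, untested sketch of how to check for and get rid of it on a
running director (module names vary with kernel version; on newer
kernels it is nf_conntrack instead of ip_conntrack):

    # see whether the conntrack module is loaded
    lsmod | grep conntrack
    # flush rules that reference it, then unload it; rmmod will
    # refuse as long as something (e.g. iptable_nat) still uses it
    iptables -F
    rmmod ip_conntrack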
> What I wonder is this: if I use conntracking in a DR setup in the
> INPUT and OUTPUT chains *only*, would this affect the ipvs
> performance adversely?
ip_conntrack cannot be limited to INPUT or OUTPUT only, since it's a
per-socket-buffer callback. Things get even faster when you disable
CONFIG_NF_CONNTRACK completely, because that also shrinks the sk_buff
structure by 4 bytes, improving the chance of fitting it into a cache
line.
> Converting to non-conntracking iptables
> rules would be nearly impossible, or at least a huge PITA,
Why? IPVS sneaks away the packets destined for its services, so no
conntrack lookup is involved for them anyway. However, if you load
ip_conntrack, you still get the conntrack callback for every skb (the
netfilter conntrack re-assembly pointer).
> so I'd
> rather not drop conntracking for the *local* connections to the
> director. I do not need packet filtering for the balanced
> connections.
For local connections you only need:
  - incoming: ACCEPT TCP SYN
  - outgoing: ACCEPT TCP !SYN
  - optionally a LOG rule
Dead easy :) (see the sketch below).
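In iptables terms that could look roughly like this; port 80 and the
log prefix are made-up examples, and note that there is no -m state
anywhere, so nothing drags ip_conntrack into the kernel:

    # new connections in: only the TCP SYN to the service port
    iptables -A INPUT  -p tcp --dport 80 --syn   -j ACCEPT
    # replies out, but the director never opens connections itself
    iptables -A OUTPUT -p tcp --sport 80 ! --syn -j ACCEPT
    # optionally log whatever falls through
    iptables -A INPUT -j LOG --log-prefix "director-in: "

Depending on your default policies you will also have to let the
non-SYN segments of established sessions in, but the point stands:
none of this needs connection tracking.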
> If I understand this:
> http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.filter_rules.html
> correctly, iptables conntracking wouldn't affect balanced packets
> anyway, so it shouldn't affect performance, right?
Nope :). The connection tracking records every skb it can get its
hands on; it's like an octopus.
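You can see that for yourself: load ip_conntrack on a director and
watch the table fill up even for balanced traffic (2.4-era path shown;
on newer kernels the file is /proc/net/nf_conntrack):

    # one line per tracked connection
    cat /proc/net/ip_conntrack
    # or just count the entries
    wc -l < /proc/net/ip_conntrack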
> I'd be glad if someone could shed a bit of light on this.
Fiat lux,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc