Hello,
On Tue, 11 Sep 2012, lvs@xxxxxxxxxx wrote:
> On Tue, 11 Sep 2012, Julian Anastasov wrote:
>
>
> >> Under CentOS 3 (traditional interrupts) with SMP affinity set across all
> >> cores (or rather half the cores for the external NIC and half for the
> >> internal NIC), load scaled linearly until it fell off a cliff: load hit
> >> 100%, and generating more traffic produced no additional throughput (lots
> >> of Xoffs). I also have some old data showing NFCT improving performance
> >> on CentOS 3.
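
For reference, pinning NIC interrupts to particular cores is done
by writing a CPU bitmask to /proc/irq/<N>/smp_affinity. A minimal
sketch, assuming a 4-core box and made-up IRQ numbers (check
/proc/interrupts for the real ones):

    # Find the IRQ numbers for each NIC (interface names are examples):
    grep -e eth0 -e eth1 /proc/interrupts

    # Hypothetical IRQ 24 = eth0 (external), IRQ 25 = eth1 (internal):
    echo 3 > /proc/irq/24/smp_affinity   # hex mask 0x3 = CPUs 0,1
    echo c > /proc/irq/25/smp_affinity   # hex mask 0xc = CPUs 2,3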
> >
> > So, keeping netfilter conntracks (conntrack=1) uses
> > fewer CPU cycles than creating conntracks for every
> > packet (conntrack=0). I hope you have a large nf_conntrack_max
> > value for the conntrack=1 case.
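
For reference, these are the two knobs in question; the
nf_conntrack_max value below is illustrative:

    # Let IPVS create/use netfilter conntracks (the "conntrack=" toggle):
    sysctl -w net.ipv4.vs.conntrack=1

    # Upper bound on simultaneously tracked connections:
    sysctl -w net.netfilter.nf_conntrack_max=16777216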
>
> I should give you some more information about my directors. As well as
> being LVS directors, they also do firewalling with netfilter. I use
> netfilter marks to tell IPVS which connections to route to which pool,
> so netfilter will be tracking the state of every packet whether
> conntrack=0 or conntrack=1.
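
For anyone unfamiliar with that setup, it is usually an iptables
MARK rule in the mangle table plus an fwmark-based IPVS virtual
service. A minimal sketch with made-up addresses and mark values:

    # Mark HTTPS traffic to a VIP (address and mark are examples):
    iptables -t mangle -A PREROUTING -d 192.0.2.10 -p tcp --dport 443 \
             -j MARK --set-mark 1

    # Route everything carrying fwmark 1 to a pool of real servers:
    ipvsadm -A -f 1 -s wlc
    ipvsadm -a -f 1 -r 10.0.0.11 -g
    ipvsadm -a -f 1 -r 10.0.0.12 -g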
>
> I suspect that having IPVS keep netfilter up to date with what it is
> doing helps netfilter find the state of a connection in its tracking
> table more quickly, and thus there is less CPU load.
>
> I have nf_conntrack_max set to just over 16 million, though I rarely go
> over 2 million tracked connections (a good percentage are UDP). I also
> have all the netfilter and IPVS timeouts set to values much lower than
> the defaults, but still safe ones. When I changed these values I reduced
> my tracked connections by 90%. I also have my hash table set to the
> maximum size, to avoid hash collisions as much as possible.
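
The knobs behind those settings, for anyone wanting to try the
same; the values here are illustrative, not Tim's actual ones:

    # Conntrack hash bucket count (writable at runtime):
    echo 2097152 > /sys/module/nf_conntrack/parameters/hashsize

    # Examples of trimming the timeout defaults:
    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=3600
    sysctl -w net.netfilter.nf_conntrack_udp_timeout=30
    ipvsadm --set 900 120 300    # IPVS tcp/tcpfin/udp timeouts, in seconds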
>
> At one point I experimented with splitting the director and netfilter
> firewall onto separate servers (with NFCT off on the director and no
> netfilter modules loaded). The softirq load split exactly in half between
> the two servers. I believe this is because IPVS and netfilter are very
> similar in their state tracking, and that state tracking is the most
> expensive part of both. Certainly, dropping connection attempts from the
> most prolific spammers in netfilter's raw table, before it does any state
> tracking, provides a huge reduction in the number of conntracks and in
> the softirq load.
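
That works because the raw table's PREROUTING chain runs before
the conntrack hook, so dropped packets never create an entry. A
sketch, with a documentation range standing in for a spammer's
netblock:

    # Drop a known-bad source before conntrack ever sees it:
    iptables -t raw -A PREROUTING -s 198.51.100.0/24 -j DROP

    # Or keep the traffic but exempt it from tracking:
    iptables -t raw -A PREROUTING -s 198.51.100.0/24 -j NOTRACK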
>
> I always put the fastest Xeons I can get my hands on in my directors. At
> one point I had a sub-optimal memory configuration, so the server was
> striping across 2 memory buses instead of 3. When I fixed the memory
> configuration I saw softirq drop by 33%, suggesting the director had been
> spending most of its time waiting for reads from main memory.
>
> Tim
>
> >
> >> Looking at my monitoring graphs for one director, when I flipped
> >> conntrack from 1 to 0, overall traffic in the peak hour stayed at 1.4Gb
> >> while softirq load on the busiest core rose from around 43% to around
> >> 62%. Average softirq load across all cores rose from 27% to 40%. I
> >> realise these figures don't tie up with those higher up, but this is a
> >> different director with a different mix of services. I have another with
> >> no email doing 1.1Gb of traffic and only 15% softirq on the busiest
> >> core. Email is expensive to process!
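
For anyone wanting to reproduce these measurements, per-core
softirq time can be read with mpstat from the sysstat package:

    mpstat -P ALL 5    # the %soft column is softirq time per CPU
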
Thanks for the information! It seems that conntrack=1,
when properly configured, works better because we do not
recreate conntracks for every IPVS packet.
Regards
--
Julian Anastasov <ja@xxxxxx>
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users