Your CPU usage seems very high for this level of traffic, quite
possibly due to a difference in the configuration you are running.
It would be interesting to know whether you are dropping incoming
traffic at this level of PPS. Please let us know if that is the case.
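If you want a quick way to check for drops, something like the
following shows drops at the NIC and at the kernel backlog (eth0 here
is just a placeholder for your interfaces):

ethtool -S eth0 | grep -i drop   # driver/NIC drop counters (names vary by driver)
netstat -i                       # per-interface RX-DRP / RX-OVR columns
cat /proc/net/softnet_stat       # 2nd hex column = packets dropped by the per-CPU backlog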
We run 2 NICs in bonding mode (Active/Passive) for both incoming and
outgoing traffic.
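In case it is useful for comparison, an active-backup bond with the
standard Linux bonding driver looks roughly like this on a 2.6.18-era
system (interface names, the address, and the modprobe.conf path are
just examples; adjust for your distro):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# bring up the bond and enslave both NICs
ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1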
Our CPU usage is very low, ~5% (all of it system), at 174K PPS in and
174K PPS out. We run ldirectord & heartbeat in DR mode. Hardware is
comparable to yours. We had an issue with the machine dropping packets
when we were using Broadcom NICs. We replaced them with Intel cards
and adjusted the ring parameters using ethtool.
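For the ring parameters, ethtool can show the hardware maximums and
raise the current values; the sizes below are only illustrative:

ethtool -g eth0                   # show pre-set maximums and current ring sizes
ethtool -G eth0 rx 4096 tx 4096   # raise RX/TX rings toward the hardware maximum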
We do not run iptables on this system, and it could be what is eating
up your CPU, so you may want to take a look at the parameters below.
Does kern.log say the kernel conntrack table is full at this level of
traffic?
# Adjust iptables-related timeouts to reduce the number of connections
# in the kernel conntrack table
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait=10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait=10
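To see whether the conntrack table is actually the bottleneck, you can
compare the current entry count against the limit, and the timeouts
can be applied at runtime before persisting them in /etc/sysctl.conf
(2.6.18-era /proc paths assumed):

wc -l /proc/net/ip_conntrack                  # rough count of tracked connections
sysctl net.ipv4.netfilter.ip_conntrack_max    # size limit of the conntrack table
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait=10
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait=10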
Good luck and share your findings here on the list.
-Sashi
On Mar 2, 2009, at 9:45 PM, New User wrote:
> Hi,
>
> We are currently running a dedicated LVS-NAT director (nothing else
> on the box, with the exception of iptables):
> Intel Xeon 3060 @ 2.4GHz (2 cores)
> 2G RAM
> 2 x 1 Gb/s NICs (one external, one internal)
> LVS-NAT
> Linux 2.6.18 kernel
>
> It appears we are running out of CPU (usage reached 160 out of 200,
> with 40 idle, across 2 cores) when we reach:
> ~10K CPS
> ~100K InPPS
> ~120K OutPPS
> ~1 million+ active connections (the number of objects in the
> ip_vs_conn entry in /proc/slabinfo)
>
> We are considering upgrading the setup to the following:
> 2 x Intel Xeon 5430 (2 CPUs, 4 cores each)
> 8G RAM
> 8 x 1 Gb/s NICs (4x1Gb bonded for external, 4x1Gb bonded for
> internal)
> LVS-NAT
> Linux 2.6.18 (possibly newer if it offers a performance increase)
>
> The question is how well the LVS code can take advantage of multiple
> CPUs. There are some conflicting answers to this question in the FAQ
> and on the mailing list (in general, it helps with the system as a
> whole). In our actual usage experience, it appears SMP helps, since
> CPU usage did reach 160. However, I do not know if this is real or
> just an illusion: is LVS actually doing real work that pushed the
> usage above 100, or is it SMP lock contention that caused it? Also,
> in our limited testing of the new setup, we find that SMP helps
> because with 8 NICs, processing the interrupts alone can overwhelm a
> single CPU (core).
>
> Can anyone share their knowledge or experience on the following:
> 1. Does LVS capacity increase linearly when we go from the setup we
> currently have (2 cores with 2 NICs) to what we are considering
> (8 cores with 8 NICs)?
> 2. Has anyone used a similar setup and can share their experience?
> There was a posting to the mailing list in Dec. 2007 that indicated
> the performance limit is ~450K pps (I don't know whether that is the
> sum of InPPS and OutPPS). There was no conclusive answer as to
> whether this is a limit of LVS or not.
> 3. Does anyone have a setup that can process 1+ million pps (LVS-NAT)
> on a single LVS machine? (We already use multiple LVS directors with
> keepalived in a live-live configuration.)
>
> Thanks
>
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users