Re: [lvs-users] Performance issues and optimization UDP LVS-NAT

To: Marco Lorig <mlorig@xxxxxxx>
Subject: Re: [lvs-users] Performance issues and optimization UDP LVS-NAT
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <ja@xxxxxx>
Date: Tue, 17 Mar 2020 15:23:22 +0200 (EET)

On Tue, 17 Mar 2020, Marco Lorig wrote:

> Hi,
> we are running lvs for couple of years and upgraded last month to 10G
> infrastructure.
> Now, we ran into some kind of performance issues:
> Everything works fine until we reach a throughput of 1600-2000Mbit/s and
> 193.906 pkt/s OUT 117.898 pkt/s IN
> Then we run into the following situation:
> CPU load average increases up to 22,
> CPU Utilization increase up to 60%
> Interface counters show growing packet drops/discards
> Setup: lvs-nat with WLC and session-persistence 60s, ubuntu 18.04 LTS,
> HW is Dual-Socket 2x  12 core xeon GOLD 6146 @ 3,2GHz with
> Hyperthreading enabled,
> The loadbalancer is used to balance VPN UDP nat-t connections only (UDP
> 500/4500)
> I found some article about performance issues with ip_conntrack.
> On the system nf_conntrack is loaded and (apparently) used by ip_vs.
> /proc/sys/net/ipv4/vs/conntrack is set to 0
> It looks like that some kind of table (nf?) reaches limitation.
> Any suggestions to improve performance and/or fix this issue?

        Yes, when nf_conntrack is loaded it is better to set
/proc/sys/net/ipv4/vs/conntrack to 1, as reported by different users.

        In this case, you have to increase nf_conntrack_max sysctl var
to allow the desired number of conntracks to be created.
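For example, something like the following (the values are illustrative only — size nf_conntrack_max to your expected peak of concurrent UDP flows, not these numbers):

```shell
# Let IPVS create proper nf_conntrack entries for its traffic
sysctl -w net.ipv4.vs.conntrack=1

# Raise the conntrack table limit; 2097152 is an example value,
# pick one above your expected number of concurrent flows
sysctl -w net.netfilter.nf_conntrack_max=2097152

# The conntrack hash table should scale with it (commonly max/4);
# on many kernels it can be changed at runtime via sysfs
echo 524288 > /sys/module/nf_conntrack/parameters/hashsize
```
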

        Another option is to use NOTRACK to disable nf conntracks just for
the IPVS traffic:

iptables -t raw -A PREROUTING -p tcp -d VIP --dport VPORT -j CT --notrack

For local clients use -A OUTPUT -o lo
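Adapted to the UDP 500/4500 VPN traffic described above, the NOTRACK rules might look like this (a sketch; VIP stands for your virtual IP):

```shell
# Skip conntrack for IKE/NAT-T packets destined to the virtual service
iptables -t raw -A PREROUTING -p udp -d VIP --dport 500  -j CT --notrack
iptables -t raw -A PREROUTING -p udp -d VIP --dport 4500 -j CT --notrack

# For clients on the director itself, match in the raw OUTPUT chain
iptables -t raw -A OUTPUT -o lo -p udp -d VIP --dport 500  -j CT --notrack
iptables -t raw -A OUTPUT -o lo -p udp -d VIP --dport 4500 -j CT --notrack
```
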


Julian Anastasov <ja@xxxxxx>
