To: Aleksey Chudov <aleksey.chudov@xxxxxxxxx>
Subject: Re: [lvs-users] ipvs connections sync and CPU usage
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <ja@xxxxxx>
Date: Fri, 23 Dec 2011 00:11:36 +0200 (EET)
        Hello,

On Fri, 16 Dec 2011, Aleksey Chudov wrote:

> Linux Kernel 2.6.39.4 was patched to prevent "ip_vs_send_async error" as
> previously discussed in

        2.6.39.4 with sync_version=1 ?

> http://archive.linuxvirtualserver.org/html/lvs-users/2009-12/msg00058.html

        I have an idea how to avoid delays/drops in the
master when sending the sync packets. Maybe we can keep a
counter of enqueued packets and, when it reaches 10 (some
fixed value), call wake_up_process() to wake up the sending
process, which sleeps 1 second after every send. This way
we will prevent overflow of the socket's send buffer (the
ip_vs_send_async error message). I can prepare a patch in
the following days.

> ipvs_master and ipvs_backup are running all the time with the following options:
> 
> master sync daemon (mcast=eth3, syncid=1)
> 
> backup sync daemon (mcast=eth3, syncid=1)
> 
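        (For reference, sync daemons with these options are
typically started with ipvsadm, e.g.:

```shell
# Start the master sync daemon on the LVS director (run as root):
ipvsadm --start-daemon master --mcast-interface eth3 --syncid 1

# And the backup sync daemon on the backup node:
ipvsadm --start-daemon backup --mcast-interface eth3 --syncid 1
```

The syncid must match on both nodes; it lets a backup ignore sync
traffic from unrelated masters on the same multicast group.)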
> With the default sysctl parameter net.ipv4.vs.sync_threshold = "3  50"
> 
> there is a 60% difference in persistent connections between the Master and
> Backup nodes.

        Persistent connections are synced too, at a rate
depending on the sync period (the 2nd value). This is on new
kernels supporting sync proto v1 (2.6.39+). Older kernels
may sync the templates every time the controlled
connection is synced; with the default period, that means
they send up to 50 times more traffic to sync persistent
connections.

> After lowering sync_threshold to "2  10" the difference was less than 9%.
> 
> ipvs_backup process on Backup node use 35% on single CPU core0 with "3  50"
> 
> ipvs_backup process on Backup node use 50% on single CPU core0 with "2  10"

        Maybe the key here is to use some larger value
for sysctl_sync_period (the 2nd of the two values). Keep
the first value at 2 or 3 and try different values for the
period, for example 100 or 1000. It depends on how many
packets the connections carry.
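For example (the values here are illustrative, not a recommendation):

```shell
# First value = packet threshold before a connection is synced,
# second value = sync period (sync roughly once per that many packets).
sysctl -w net.ipv4.vs.sync_threshold="3 100"

# Verify the setting took effect:
sysctl net.ipv4.vs.sync_threshold
```

A larger period means fewer sync messages per long-lived connection,
at the cost of a slightly staler state on the backup.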

> In any case it looks like the ipvs_backup process will become a bottleneck soon.

        I see.

> Is it possible to lower cpu usage of ipvs_backup?

        It was designed as a single thread, and the
expectation is that the sync traffic should be lower than
the real traffic on the master. Only by starting many
backup threads could we utilize more cores, but such a
change would require changes to the user interface, and
there is also a small risk due to possible packet reordering.

> Is it possible to distribute cpu usage of ipvs_backup on multiple CPU cores?

        Currently, it is not possible. Any progress in
decreasing the load by tuning the sync period? What are the
packet rate and MBytes of the sync traffic to the backup?
You can check it with sar -n DEV 1 5.
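That is (on the backup node, where eth3 is the sync interface from
the setup above):

```shell
# Sample all interfaces 5 times at 1-second intervals; on the backup,
# the rxpck/s and rxkB/s columns for eth3 show the sync packet and
# byte rate arriving from the master.
sar -n DEV 1 5
```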

> Best regards,
> 
> Aleksey

Regards

--
Julian Anastasov <ja@xxxxxx>

_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
