To: "'Julian Anastasov'" <ja@xxxxxx>
Subject: Re: [lvs-users] ipvs connections sync and CPU usage
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Mon, 26 Dec 2011 15:57:57 +0200
Hello,

Thanks for the answer.

>> Is it possible to make schedule_timeout_interruptible changeable via sysctl?
> Maybe it is better to implement logic with auto-adjustment.

Auto-adjustment looks much better.
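
Just to illustrate what I originally had in mind with the sysctl (the knob
name below is invented, it does not exist today), something like this
instead of the hard-coded schedule_timeout_interruptible(HZ/10) in the
master sync thread:

        /* hypothetical sketch only, not an existing IPVS sysctl */
        static int sysctl_sync_sleep_msecs = 100;      /* ~HZ/10 default */

        static inline void ip_vs_sync_master_sleep(void)
        {
                /* an auto-adjusting variant could scale this with the
                 * current sync-buffer backlog instead of a fixed value */
                schedule_timeout_interruptible(
                        msecs_to_jiffies(sysctl_sync_sleep_msecs));
        }

An auto-adjusting version could compute the same sleep from the sync-buffer
backlog instead of reading a sysctl, which indeed seems cleaner.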

> Using in_pkts for templates is not a good idea.
> As drops are possible, it can be done more often, but not every time as for
> sync version 0.
> Also, before ip_vs_conn_expire() we do not know if the template life will
> be extended.
> Maybe the backup server should use a longer timeout for templates, so that
> it cannot miss the sync packets during the extended period.

> So, now the question is how to properly reduce the rate of sync packets for
> templates, and maybe for other conns when the state is not changed but its
> life is extended. I have to think for some time about such changes.

> Can you try such a change: in ip_vs_sync_conn() comment out the following
> two lines under "Reduce sync rate for templates":
>       if (pkts % sysctl_sync_period(ipvs) != 1)
>               return;

> This way we will sync templates every time a normal connection is synced,
> as for version 0. It is still too often for templates, but now you can try
> again with "3 100", so that we can see if the difference is reduced.
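
To be explicit about what "patched" means in the tests below, this is
roughly how the change looks in my 2.6.39.4 tree (only the two lines quoted
above are commented out; the surrounding context is approximate):

        /* net/netfilter/ipvs/ip_vs_sync.c, in ip_vs_sync_conn() */

        /*
         * Reduce sync rate for templates
         */
        /*
        if (pkts % sysctl_sync_period(ipvs) != 1)
                return;
        */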

> BTW, what is the persistence timeout value?

The persistence timeout is 1800 seconds (30 min); it is application specific.

Tried the following:

Linux Kernel 2.6.39.4 + LVS Fwmark

iptables -t mangle -A PREROUTING -d VIP -i bond0 -p tcp \
    -m multiport --dports 80,443 -j MARK --set-mark 1

ipvsadm -A -f 1 -s wlc -p 1800
-a -f 1 -r 1.1.1.1:0 -i -w 100
-a -f 1 -r 1.1.1.2:0 -i -w 100
...
-a -f 1 -r 1.1.X.X:0 -i -w 100
(320 servers total)

# ipvsadm -l --daemon
master sync daemon (mcast=eth3, syncid=1)
backup sync daemon (mcast=eth3, syncid=1)

1. ip_vs_sync_conn original, schedule_timeout_interruptible(HZ/10) and
sync_threshold = "3 10"
Results: sync traffic 60 Mbit/s, 6000 packets/sec, 60 %sys CPU on Backup node,
8% difference in persistent connections between Master and Backup nodes,
netstat -s on Master SndbufErrors: 0

2. ip_vs_sync_conn patched, schedule_timeout_interruptible(HZ/10) and
sync_threshold = "3 10"
Results: sync traffic 100 Mbit/s, 8500 packets/sec, 93 %sys CPU on Backup node,
<1% difference in persistent connections between Master and Backup nodes,
netstat -s on Master SndbufErrors: 0

3. ip_vs_sync_conn patched, schedule_timeout_interruptible(HZ/10) and
sync_threshold = "3 100"
Results: sync traffic 70 Mbit/s, 6000 packets/sec, 70 %sys CPU on Backup node,
~2% difference in persistent connections between Master and Backup nodes,
netstat -s on Master SndbufErrors: 0

4. ip_vs_sync_conn patched, schedule_timeout_interruptible(HZ/10) and
sync_threshold = "3 200"
Results: sync traffic 66 Mbit/s, 5800 packets/sec, 66 %sys CPU on Backup node,
~3% difference in persistent connections between Master and Backup nodes,
netstat -s on Master SndbufErrors: 0

5. ip_vs_sync_conn patched, schedule_timeout_interruptible(HZ/10) and
sync_threshold = "3 1000"
Results: sync traffic 64 Mbit/s, 5600 packets/sec, 64 %sys CPU on Backup node,
~3% difference in persistent connections between Master and Backup nodes,
netstat -s on Master SndbufErrors: 0

In all tests I can't check the difference in Active and InAct connections
because ipvsadm does not show Active and InAct connections on the Backup node
for the fwmark virtual service, only Persist connections.

There is no significant difference in sync traffic beyond "3 100".

>> As mentioned in another report,
>> http://www.gossamer-threads.com/lists/lvs/users/24331
>> after switching from a TCP VIP to Fwmark, %sys CPU rose from 40-50%
>> (TCP VIP) to 80-100% (Fwmark) with no difference in sync traffic.

Could you explain why %sys CPU rises with Fwmark?
Could you explain why ipvsadm does not show Active and InAct connections on
the Backup node for an fwmark virtual service?


Regards,
Aleksey


