Re: [lvs-users] ipvs connections sync and CPU usage

To: "'Julian Anastasov'" <ja@xxxxxx>
Subject: Re: [lvs-users] ipvs connections sync and CPU usage
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Tue, 27 Dec 2011 14:27:51 +0200
Hello,

>> Linux Kernel 2.6.39.4 + LVS Fwmark
>> 
>> iptables -t mangle -A PREROUTING -d VIP -i bond0 -p tcp -m multiport 
>> --dports 80,443 -j MARK --set-mark 1
>> 
>> ipvsadm -A -f 1 -s wlc -p 1800
>> -a -f 1 -r 1.1.1.1:0 -i -w 100
>> -a -f 1 -r 1.1.1.2:0 -i -w 100
>> ...
>> -a -f 1 -r 1.1.X.X:0 -i -w 100

> OK, this is IPIP method. It seems we have a problem with this rport 0.
> See the appended patch. It should allow sync-ed conns in backup to find
> their real server. As a result, the inact/act counters should work again,
> and CPU usage should be lower, because until now we fail to bind to the
> real server for every sync message for the connection.

>> In all tests I can't check the difference in Active and InAct connections
>> because ipvsadm does not show Active and InAct connections on the Backup
>> node for the Fwmark virtual, only Persist connections.

>> Could you explain why %sys CPU is raised with Fwmark? 
>> Could you explain why ipvsadm does not show Active and InAct 
>> connections on Backup node for Fwmark virtual?

> Yes, we try to bind to dest for every sync message without success
> because conns come with dport=80/443 while real server port is 0.
> Only the template conns find the server because they have rport 0.
> I hope the appended patch should fix it. How much better is the CPU then?

> Subject: [PATCH] ipvs: try also real server with port 0 in backup server
>
> We should not forget to try for real server with port 0 in the backup
> server when processing the sync message. We should do it in all cases
> because the backup server can use a different forwarding method.
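
If it helps other readers, here is a small userspace sketch of the fallback
idea (this is not the actual kernel patch; lookup_dest/find_dest and the
addresses are only illustrative, taken from the config above): the backup
holds the real servers with rport 0 (fwmark/IPIP), while the sync messages
carry dport 80/443, so an exact-port lookup misses unless it is retried
with port 0.

/* toy illustration of the "port 0" fallback, not kernel code */
#include <stdio.h>
#include <string.h>

struct dest { const char *addr; int port; };

/* real servers as added on the backup: rport 0 (fwmark/IPIP setup) */
static struct dest dests[] = {
    { "1.1.1.1", 0 },
    { "1.1.1.2", 0 },
};

static struct dest *lookup_dest(const char *addr, int port)
{
    for (size_t i = 0; i < sizeof(dests) / sizeof(dests[0]); i++)
        if (!strcmp(dests[i].addr, addr) && dests[i].port == port)
            return &dests[i];
    return NULL;
}

/* lookup with the fallback: if the exact port misses, retry with port 0 */
static struct dest *find_dest(const char *addr, int port)
{
    struct dest *d = lookup_dest(addr, port);
    if (!d && port != 0)
        d = lookup_dest(addr, 0);
    return d;
}

int main(void)
{
    /* a sync message for a fwmark conn arrives with dport 80 */
    struct dest *d = find_dest("1.1.1.1", 80);
    printf("bound to %s:%d\n", d ? d->addr : "(none)", d ? d->port : 0);
    return 0;
}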

After applying the "port 0 patch", ipvsadm displays Active and InAct
connections on the Backup node for the Fwmark virtual.
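
For reference, I checked the counters on the Backup node with the usual
listings (the fwmark service is the one from the config above):

ipvsadm -L -n        # ActiveConn / InActConn per real server of FWM 1
ipvsadm -L -n -c     # individual connection entries, including templates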


Tried the following:

Linux Kernel 2.6.39.4 + LVS Fwmark (configured as previously)

1. ip_vs_sync_conn original, (HZ/10) on both, sync_threshold "3 10" on both,
"port 0 patch"
Results: sync traffic 50 Mbit/s, 4000 packets/sec, 30 %sys CPU on Backup, 8%
diff in Persistent, 2% diff in Active, SndbufErrors: 0

2. ip_vs_sync_conn patched, (HZ/10) on both, sync_threshold "3 100" on
both, "port 0 patch"
Results: sync traffic 60 Mbit/s, 5000 packets/sec, 40 %sys CPU on Backup, 8%
diff in Persistent, 8% diff in Active, SndbufErrors: 0

3. ip_vs_sync_conn patched, (HZ/10) on both, sync_threshold "3 10" on both,
"port 0 patch"
Results: sync traffic 90 Mbit/s, 8000 packets/sec, 60 %sys CPU on Backup, 2%
diff in Persistent, 2% diff in Active, SndbufErrors: 0

So it looks like the "port 0 patch" fixes the Fwmark connections and CPU
usage issues.

To reduce the difference in persistent and active connections we should use
the patched ip_vs_sync_conn + "3 10", but %sys CPU is still too high.
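
For completeness, the sync_threshold values above were set on both nodes via
the usual proc interface, e.g.:

echo "3 10" > /proc/sys/net/ipv4/vs/sync_threshold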

Are there any advantages in reducing the persistence timeout and the tcp and
tcpfin timeouts?
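
In other words, would it help to do something like the following on the
directors (the values are only examples, not a recommendation):

ipvsadm --set 300 30 300        # tcp, tcpfin, udp timeouts in seconds
ipvsadm -E -f 1 -s wlc -p 900   # lower persistence timeout for FWM 1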


Regards,
Aleksey



