Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*\[lvs\-users\]\s+ipvs\s+connections\s+sync\s+and\s+CPU\s+usage\s*$/: 19 ]

Total 19 documents matching your query.

1. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Sat, 21 Jan 2012 22:04:13 +0200 (EET)
Hello, Maybe if we provide the CPU as a netlink parameter we can use kthread_create+kthread_bind(,CPU)+wake_up_process instead of the kthread_run that is currently used. It does not look hard to imple...
/html/lvs-users/2012-01/msg00049.html (11,910 bytes)
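
The pattern described in this message is kthread_run() split into its two halves so a CPU binding can happen in between (kthread_run() is just kthread_create() followed by wake_up_process()). A minimal sketch of the idea, with illustrative names rather than the actual ip_vs_sync.c code:

    #include <linux/kthread.h>
    #include <linux/err.h>

    /* Create the sync thread stopped, pin it to the CPU passed down
     * (e.g. via the netlink parameter suggested above), then wake it. */
    static int start_sync_thread_on_cpu(int (*threadfn)(void *data),
                                        void *data, int cpu)
    {
            struct task_struct *task;

            task = kthread_create(threadfn, data, "ipvs_sync/%d", cpu);
            if (IS_ERR(task))
                    return PTR_ERR(task);

            kthread_bind(task, cpu);  /* must run before the first wakeup */
            wake_up_process(task);
            return 0;
    }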

2. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Fri, 20 Jan 2012 18:07:24 +0200
Hello, if it isn't so easy to distribute the sync load, is it possible to bind the sync daemon to a specific CPU? Tried the following, but it doesn't work: taskset -c 11 ipvsadm --start-dae...
/html/lvs-users/2012-01/msg00045.html (11,058 bytes)
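
For context, taskset works by calling sched_setaffinity() on the target pid, so the failed attempt above amounts to roughly the following userspace sketch (pin_to_cpu is an illustrative name). It cannot reach the sync daemon because that is a kernel thread spawned inside the kernel by ipvsadm's request, not the ipvsadm process itself, which is why the in-kernel binding in result 1 is discussed instead:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Pin an existing task to one CPU, as `taskset -c 11 <cmd>` does. */
    static int pin_to_cpu(pid_t pid, int cpu)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(pid, sizeof(set), &set) != 0) {
                    perror("sched_setaffinity");
                    return -1;
            }
            return 0;
    }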

3. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Tue, 17 Jan 2012 00:13:28 +0200
Hello, there were no dropped or lost packets on either node, but an abnormally high number of ActiveConn on the Backup server. Please let me know if I can help with tests...
/html/lvs-users/2012-01/msg00025.html (12,669 bytes)

4. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Sun, 15 Jan 2012 18:48:34 +0200 (EET)
Hello, for test 1 I think traffic is low because the master syncs conn templates once per sysctl_sync_period. Here the problem was that the difference in PersistConn is high if sync_period is large. If sync...
/html/lvs-users/2012-01/msg00023.html (20,228 bytes)

5. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Fri, 13 Jan 2012 13:35:23 +0200
Hello Julian, I rechecked all the figures after a while. Below are average statistics for the peak hour. 1. "port 0", "HZ/10" patches on Master, "port 0", "HZ/10" patches on Backup, sync_threshold = 3...
/html/lvs-users/2012-01/msg00022.html (10,879 bytes)

6. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Thu, 12 Jan 2012 18:23:45 +0200
Hello Julian, I successfully patched Linux Kernel 2.6.39.4 with the "port 0", "HZ/10" and "sync" patches. After reboot and transition of the Backup server to Master state I see an increase in sync traffic and CPU...
/html/lvs-users/2012-01/msg00020.html (13,864 bytes)

7. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Tue, 10 Jan 2012 23:40:56 +0200 (EET)
Hello, You can patch just the master; nothing in the backup function is changed. By default, the sync_threshold values have effect because sync_refresh_period defaults to 0. There are some difference...
/html/lvs-users/2012-01/msg00016.html (10,552 bytes)

8. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Tue, 10 Jan 2012 22:20:14 +0200
Hello, I think we can start tests this week. Before we start testing I have a couple of questions: 1. Is the new connection sync compatible with the previous version? So, should I patch both master and s...
/html/lvs-users/2012-01/msg00015.html (9,545 bytes)

9. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Tue, 10 Jan 2012 02:47:50 +0200 (EET)
Hello, I'm appending a patch that implements such an alternative/addition to the currently implemented thresholds based on packet count. I performed some simple tests but it will need good testing on re...
/html/lvs-users/2012-01/msg00014.html (28,061 bytes)
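
For contrast with the time-based alternative in the patch, the existing packet-count thresholds decide syncing from a connection's packet counter. A rough sketch under one plausible reading of sync_threshold = "3 100" (sync on the 3rd packet and then every 100th packet; illustrative, not the kernel's actual code):

    /* threshold = 3, period = 100: report the connection on packets
     * 3, 103, 203, ...; with period 0 only the single sync at the
     * threshold happens. */
    static bool needs_sync_by_pkts(unsigned int pkts,
                                   unsigned int threshold,
                                   unsigned int period)
    {
            if (period)
                    return pkts % period == threshold;
            return pkts == threshold;
    }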

10. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Wed, 28 Dec 2011 23:20:11 +0200 (EET)
Hello, Currently, sync traffic happens on received packets. After the last packet for the state, it does not matter for syncing to backup how much time we wait for next packets. Of course, with lower...
/html/lvs-users/2011-12/msg00046.html (12,090 bytes)

11. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Wed, 28 Dec 2011 10:59:57 +0200
Hello! Rechecked connection counters after 24h, so they are more accurate. ip_vs_sync_conn patched, HZ/10, sync_threshold "3 100", "port 0 patch": PersistConn ActiveConn InActConn Master 3216...
/html/lvs-users/2011-12/msg00045.html (14,144 bytes)

12. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Wed, 28 Dec 2011 03:13:54 +0200 (EET)
Hello, Not sure why the difference in Active conns depends on the sync period. Maybe the master should have more inactive conns because SYN states are not synced; sync starts in EST state. Exit from EST state s...
/html/lvs-users/2011-12/msg00044.html (14,900 bytes)

13. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Tue, 27 Dec 2011 14:27:51 +0200
Hello, After applying the "port 0 patch", ipvsadm displays Active and InAct connections on the Backup node for the Fwmark virtual. Tried the following: Linux Kernel 2.6.39.4 + LVS Fwmark (configured as previously)...
/html/lvs-users/2011-12/msg00043.html (13,188 bytes)

14. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Tue, 27 Dec 2011 00:03:45 +0200 (EET)
Hello, OK, this is the IPIP method. It seems we have a problem with this rport 0. See the appended patch. It should allow sync-ed conns in backup to find their real server. As a result, the inact/act count...
/html/lvs-users/2011-12/msg00042.html (19,648 bytes)

15. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Mon, 26 Dec 2011 15:57:57 +0200
Hello, Thanks for the answer. Auto-adjustment looks much better. Persistence timeout is 1800 (30 min). It is application specific. Tried the following: Linux Kernel 2.6.39.4 + LVS Fwmark iptables -t...
/html/lvs-users/2011-12/msg00041.html (13,867 bytes)

16. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Sat, 24 Dec 2011 01:42:21 +0200 (EET)
Hello, The problem is that sb_queue_tail does not know when to wake up the master thread without knowing the socket's send space. There is always a risk of dropping a sync message on sending. Another option...
/html/lvs-users/2011-12/msg00040.html (17,110 bytes)

17. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Fri, 23 Dec 2011 17:24:52 +0200
Hello, Thanks for the answer. Yes, net.ipv4.vs.sync_version = 1 on both nodes. There are no performance problems on the Master node with schedule_timeout_interruptible(HZ/10). %sys CPU utilization is 2 -...
/html/lvs-users/2011-12/msg00039.html (13,638 bytes)
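
The schedule_timeout_interruptible(HZ/10) mentioned here sleeps for a tenth of a second (HZ jiffies is one second by definition), so the master sync thread wakes ten times per second to drain its queue. An illustrative loop, not the actual ip_vs_sync.c code:

    #include <linux/kthread.h>
    #include <linux/sched.h>

    static int sync_master_loop(void *data)
    {
            while (!kthread_should_stop()) {
                    /* ... dequeue and send any pending sync messages ... */

                    /* ~100 ms nap; the call sets TASK_INTERRUPTIBLE
                     * itself, so no explicit set_current_state() */
                    schedule_timeout_interruptible(HZ / 10);
            }
            return 0;
    }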

18. Re: [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Fri, 23 Dec 2011 00:11:36 +0200 (EET)
Hello, 2.6.39.4 with sync_version=1? I have an idea how to avoid delays/drops in the master when sending the sync packets. Maybe we can use a counter of enqueued packets and when it reaches 10 (some fixe...
/html/lvs-users/2011-12/msg00038.html (11,919 bytes)
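
A speculative sketch of the batching idea floated here: count messages as they are enqueued and wake the sending thread only every Nth one, so each wakeup finds a batch of work instead of a single packet. The queue structure and all names are illustrative, not the real ipvs code:

    #include <linux/list.h>
    #include <linux/sched.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    #define WAKEUP_BATCH 10  /* the fixed count suggested above */

    struct sync_queue {
            struct list_head head;
            spinlock_t lock;
            unsigned int since_wakeup;   /* enqueued since last wakeup */
            struct task_struct *master;  /* the sending thread */
    };

    static void sync_queue_tail(struct sync_queue *q, struct list_head *msg)
    {
            bool wake = false;

            spin_lock_bh(&q->lock);
            list_add_tail(msg, &q->head);
            if (++q->since_wakeup >= WAKEUP_BATCH) {
                    q->since_wakeup = 0;
                    wake = true;
            }
            spin_unlock_bh(&q->lock);

            if (wake)
                    wake_up_process(q->master);
    }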

19. [lvs-users] ipvs connections sync and CPU usage (score: 1)
Author: "Aleksey Chudov" <aleksey.chudov@xxxxxxxxx>
Date: Fri, 16 Dec 2011 17:46:44 +0200
Hello! I have a pair of heavily loaded LVS servers. HW configuration: HP ProLiant DL360 G7, 2x X5675 @ 3.07GHz, 16GB RAM, Intel 82599EB 10-Gigabit NIC. Linux Kernel 2.6.39.4 was patched to prevent "ip_v...
/html/lvs-users/2011-12/msg00032.html (9,525 bytes)

