Hello, Maybe if we provide the CPU as a netlink parameter we can use kthread_create + kthread_bind(..., CPU) + wake_up_process instead of the kthread_run that is currently used. It does not look hard to implement.
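As a rough sketch, assuming a hypothetical sync_cpu value carried in the netlink request down to start_sync_thread() (threadfn/tinfo/name stand in for the existing arguments):

	struct task_struct *task;

	/* create stopped, pin to the requested CPU, then start; this is
	 * what kthread_run() does, minus the bind step */
	task = kthread_create(threadfn, tinfo, "ipvs_sync_%s", name);
	if (IS_ERR(task))
		return PTR_ERR(task);
	if (sync_cpu >= 0)
		kthread_bind(task, sync_cpu);	/* bind before first wakeup */
	wake_up_process(task);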
Hello, ... desired number of threads. If it isn't so easy to distribute the sync load, is it possible to bind the sync daemon to a specific CPU? I tried the following, but it doesn't work: taskset -c 11 ipvsadm --start-daemon ...
Hello, There were no dropped or lost packets on either node, but there was an abnormally high number of ActiveConn on the Backup server. Please let me know if I can help with multiple different scalability tests.
Hello, For test 1 I think traffic is low because the master syncs conn templates once per sysctl_sync_period. Here the problem was that the difference for PersistConn is high if sync_period is large. If sync ...
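Simplified, with approximate names (not a verbatim copy of ip_vs_sync.c), the per-packet decision for templates looks like:

	/* a conn template is queued for sync only on every
	 * sync_period-th packet, so with a large sync_period the
	 * backup's PersistConn count lags behind the master */
	if (cp->flags & IP_VS_CONN_F_TEMPLATE) {
		if (sync_period == 0 || pkts % sync_period == 1)
			ip_vs_sync_conn(cp);
		return;
	}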
Hello Julian, I rechecked all the figures after a while. Below are the average statistics for the peak hour.
1. "port 0" and "HZ/10" patches on Master; "port 0" and "HZ/10" patches on Backup; sync_threshold = 3 ...
Hello Julian, I successfully patched Linux Kernel 2.6.39.4 with the "port 0", "HZ/10" and "sync" patches. After rebooting and moving the Backup server to the Master state, I see an increase in sync traffic and CPU ...
Hello, You can patch just the master; nothing in the backup function is changed. By default, the sync_threshold values have effect because sync_refresh_period defaults to 0. There are some differences ...
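In other words (sketch, approximate names), the old packet-count rule keeps working as long as the time-based refresh is disabled:

	/* sync_threshold = "threshold period": with sync_refresh_period
	 * left at 0, a connection is synced whenever its packet count
	 * modulo "period" equals "threshold" */
	if (sync_refresh_period == 0 &&
	    pkts % sync_period == sync_threshold)
		ip_vs_sync_conn(cp);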
Hello, I think we can start the tests this week. Before starting I have a couple of questions:
1. Is the new connection sync compatible with the previous version? That is, should I patch both master and s...
Hello, I'm appending a patch that implements such an alternative/addition to the currently implemented packet-count thresholds. I performed some simple tests, but it will need good testing on re...
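The idea, very roughly (the appended patch is authoritative; names here are approximate):

	/* re-sync a connection only when its expiry time has advanced by
	 * more than sync_refresh_period since the last sync, instead of
	 * (or in addition to) counting packets */
	unsigned long new_end = jiffies + cp->timeout;

	if (time_after(new_end, cp->sync_endtime + sync_refresh_period)) {
		cp->sync_endtime = new_end;
		ip_vs_sync_conn(cp);
	}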
Hello, Currently, sync traffic happens on received packets. After the last packet for the state, it does not matter for syncing to the backup how much time we wait for the next packets. Of course, with lower ...
Hello, I'm not sure why the difference in Active conns depends on the sync period. Maybe the master should have more inactive conns because SYN states are not synced; sync starts in the EST state. Exit from the EST state s...
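That matches the sync rule on the master; as a sketch of the effect (not the exact code):

	/* connections still in SYN states never generate sync messages,
	 * so they appear on the master but never on the backup */
	if (cp->protocol == IPPROTO_TCP &&
	    cp->state != IP_VS_TCP_S_ESTABLISHED)
		return;		/* do not sync pre-EST states */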
Hello, After applying the "port 0" patch, ipvsadm displays Active and InAct connections on the Backup node for the Fwmark virtual service. Tried the following: Linux Kernel 2.6.39.4 + LVS Fwmark (configured as previously) ...
Hello, OK, this is the IPIP method. It seems we have a problem with this rport 0. See the appended patch; it should allow synced conns in the backup to find their real server. As a result, the inact/act count...
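The gist of the appended patch, as a sketch (find_dest() is a stand-in name for the real lookup helper): if the synced destination port does not match any real server, retry the lookup with port 0, since that is how fwmark/tunnel real servers are registered:

	struct ip_vs_dest *dest;

	dest = find_dest(cp->daddr, cp->dport, cp->vaddr, cp->vport);
	if (!dest && cp->dport != 0)
		/* fwmark/tunnel real servers are added with port 0 */
		dest = find_dest(cp->daddr, 0, cp->vaddr, cp->vport);

Once the synced conn can bind to its dest, the ActiveConn/InActConn counters on the backup should be populated as expected.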
Hello, Thanks for the answer. The auto-adjustment looks much better. The persistence timeout is 1800 (30 min); it is application-specific. Tried the following: Linux Kernel 2.6.39.4 + LVS Fwmark, iptables -t ...
Hello, The problem is that sb_queue_tail does not know when to wake up the master thread without knowing the socket's send space. There is always a risk of dropping a sync message on sending. Another option ...
Hello, Thanks for the answer. Yes, net.ipv4.vs.sync_version = 1 on both nodes. There are no performance problems on the Master node with schedule_timeout_interruptible(HZ/10); %sys CPU utilization is 2 - ...
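For reference, the "HZ/10" change is just the sleep at the bottom of the master sync loop (simplified sketch from memory of the 2.6.39-era code, names approximate):

	while (!kthread_should_stop()) {
		/* drain everything queued by sb_queue_tail() */
		while ((sb = sb_dequeue())) {
			ip_vs_send_sync_msg(sock, sb->mesg);
			ip_vs_sync_buff_release(sb);
		}
		/* sleep 1/10 s instead of the original 1 s, so queued
		 * sync buffers are flushed ten times as often */
		schedule_timeout_interruptible(HZ / 10);
	}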
Hello, 2.6.39.4 with sync_version=1? I have an idea how to avoid delays/drops in the master when sending the sync packets. Maybe we can use a counter of enqueued packets, and when it reaches 10 (some fixed value) ...
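Something like this in sb_queue_tail(), as a rough sketch (sync_queue_len and master_thread are assumed fields): count the queued buffers and wake the master thread explicitly once the counter reaches a fixed value, instead of relying only on a short periodic sleep:

	spin_lock(&ipvs->sync_lock);
	list_add_tail(&sb->list, &ipvs->sync_queue);
	if (++ipvs->sync_queue_len >= 10)		/* some fixed value */
		wake_up_process(ipvs->master_thread);	/* assumed field */
	spin_unlock(&ipvs->sync_lock);

The master thread would then sleep interruptibly and be woken either by its timeout or by the counter, which should avoid both long delays and dropped sync messages.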
Hello! I have a pair of heavily loaded LVS servers. HW configuration: HP ProLiant DL360 G7, 2x X5675 @ 3.07GHz, 16GB RAM, Intel 82599EB 10-Gigabit NIC. Linux Kernel 2.6.39.4 was patched to prevent "ip_v...