I'm investigating a typical configuration for an L4 TCP load balancer using:
- persistence_timeout: 120 (LVS persistence timeout, in seconds)
- /sbin/ipvsadm --set 1800 120 300 (TCP / TCP-FIN / UDP timeouts)
- persistence_granularity: "48" for IPv6 (i.e., group clients by their /48 prefix)
- lb_algo: rr (round robin)
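
For context, the relevant keepalived block looks roughly like this (the VIP,
real-server addresses, lb_kind, weights and health checks below are
placeholders, not my exact config):

    virtual_server 2001:db8::1 443 {
        lb_algo rr
        lb_kind DR                     # placeholder forwarding method
        protocol TCP
        persistence_timeout 120        # seconds
        persistence_granularity 48     # group IPv6 clients by /48

        real_server 2001:db8::10 443 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
        real_server 2001:db8::11 443 {
            weight 1
            TCP_CHECK {
                connect_timeout 3
            }
        }
    }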
My expectation is that all client IPs from the same /48 IPv6 subnet should
always reach the same real_server because of the granularity setting.
However, I can see that established connections from the same /48 IPv6 subnet
are spread across multiple reals (with expire timers anywhere between 0
seconds and just under 30 minutes, which is a side effect of setting the
higher TCP timeout).
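
For reference, I'm confirming the kernel-side timeouts with the command
below (the output line is just what I expect to see given the --set above,
not pasted from the box):

    # show the TCP / TCP-FIN / UDP timeouts currently set in IPVS
    /sbin/ipvsadm -L --timeout
    # expected output, roughly:
    #   Timeout (tcp tcpfin udp): 1800 120 300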
- After the persistence_timeout expires, do new connections from the same
/48 subnet get assigned to a new real based on round-robin, regardless of
whether there are existing connections from that subnet still going to a
specific real? (Which would result in the same /48 being spread across
multiple reals.)
- Is the TCP timeout the reason for the unexpected spread of clients from the
same /48 across multiple reals (as opposed to the expected single real)?
- If the answer to the previous question is yes, should we always set
persistence_timeout higher than the TCP timeout (e.g., because of HTTPS
traffic, TLS session tickets, etc.)?
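
For context, here is roughly how I'm inspecting the connection table
(addresses shown numerically would be real client/VIP/real-server IPs); if
I'm reading the output right, the template entries are the ones with state
NONE and client port 0:

    # dump the IPVS connection table, including persistence templates
    /sbin/ipvsadm -L -c -n
    # template entries (state NONE, client port 0) should expire based on
    # persistence_timeout (120s here); ESTABLISHED entries expire based on
    # the TCP timeout set with --set (1800s here)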
Thanks in advance!