Hello,
On Wed, 5 Dec 2018, Peter Viskup wrote:
> A client of our Squid servers holding an open persistent connection
> causes the LVS balancer to forward new connections to the other node
> only.
>
> LVS configuration and status:
> ~ $ ipvsadm -Ln
> IP Virtual Server version 1.2.1 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
> -> RemoteAddress:Port Forward Weight ActiveConn InActConn
> TCP 10.x.y.55:3128 lblcr
> -> 10.x.y.50:3128 Route 1 26 440
> -> 10.x.y.51:3128 Route 1 1 0
>
> Once the Squid on node 51 is restarted, the connections are
> distributed across both nodes again. After "some time" only one
> connection remains open/active via node 51. This is always caused by
> the same client opening an HTTP persistent connection to a specific domain.
> Running on an up-to-date Debian 9.
> ~ $ uname -a
> Linux proxy01 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04)
> x86_64 GNU/Linux
>
> Changing the weight to 20 caused the connections to go through both nodes.
> ~ $ ipvsadm -Ln
> IP Virtual Server version 1.2.1 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
> -> RemoteAddress:Port Forward Weight ActiveConn InActConn
> TCP 10.x.y.55:3128 lblcr
> -> 10.x.y.50:3128 Route 20 21 248
> -> 10.x.y.51:3128 Route 20 25 42
>
> It seems that opening 20 persistent connections via one node causes
> behavior similar to one connection with the weight set to 1.
>
> With a weight of 200 set on both nodes, all connections were forwarded
> to one node only.
>
> How should the weight be understood for the lblcr scheduler? What is
> the best practice for balancing persistent connections?
LBLC[R] works by directing client traffic based on the destination
address, i.e. the remote web server IP. So we try to forward every client
that browses some site to the same proxy server, with the idea of reusing
the cached reply. When some proxy is overloaded (its number of TCP
connections exceeds the configured weight value) and we notice an
imbalance (another proxy is lightly loaded, with TCP connections below
half of its weight), we start forwarding clients to such proxies (see the
sketch below). As a result, more proxies start to cache the same remote
site.
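To make that check concrete, here is a simplified C sketch of the
decision just described. The real logic lives in the kernel's
net/netfilter/ipvs/ip_vs_lblcr.c; the struct and field names below are
illustrative, not the kernel's exact definitions:

/* Illustrative sketch of the LBLC[R] overload test: a destination is
 * considered overloaded when its active connections exceed its weight
 * AND some other destination sits below half of its own weight. */
struct dest {
	int activeconns;	/* established TCP connections */
	int weight;		/* capacity hint, set via ipvsadm -w */
};

static int is_overloaded(const struct dest *d,
			 const struct dest *dests, int n)
{
	int i;

	if (d->activeconns <= d->weight)
		return 0;		/* within capacity, keep using it */
	for (i = 0; i < n; i++)
		if (dests[i].activeconns * 2 < dests[i].weight)
			return 1;	/* imbalance found: spill over */
	return 0;			/* everyone is busy, stay put */
}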
So, you should set the weight to a value that represents the maximum
number of established TCP connections the proxy can take simultaneously
before it is considered overloaded, e.g. before reaching some resource
limit such as bandwidth, memory, CPU, storage, etc. If you set too large
a value you risk slowdowns, delays, resets, etc.
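For example, if each Squid box can comfortably take around 1000
concurrent connections (an illustrative figure, measure your own limit),
the service could be set up like this:

~ $ ipvsadm -A -t 10.x.y.55:3128 -s lblcr
~ $ ipvsadm -a -t 10.x.y.55:3128 -r 10.x.y.50:3128 -g -w 1000
~ $ ipvsadm -a -t 10.x.y.55:3128 -r 10.x.y.51:3128 -g -w 1000

Here -g selects direct routing (the "Route" forwarding method in your
output), and a real server's weight can later be adjusted on the fly
with ipvsadm -e ... -w <value>.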
> According to the LVS How-to [1], setting LVS persistence for the VIP
> might help to let LVS forward connections across all nodes
> independently of the weight value. Is this assumption correct?
Persistence is for stickiness based on the client IP/subnet.
It is stricter by definition, because it is used to direct multiple
connections from the same client "session" (TLS or other) to the same
real server, in cases where we should not move the client to another
server. In the case of a web proxy this is not so critical: we can reach
the same remote site via more than one proxy. I don't know of any
benefit of using persistence for proxy setups; the same client will be
forwarded via a single proxy only. As a result, the remote site will be
cached on more proxy servers as more clients access it.
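For reference, if you still want to experiment, persistence is enabled
per virtual service with the -p option (timeout in seconds), e.g.:

~ $ ipvsadm -E -t 10.x.y.55:3128 -s lblcr -p 360

but, as said above, that only pins each client to a single proxy; it
does not spread one client's persistent connections across nodes.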
Regards
--
Julian Anastasov <ja@xxxxxx>
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users