Eric,
The LBLC and DH schedulers can only be used in a completely transparent
forwarding LVS setup, i.e. one where a firewall mark matches the packets of
ALL routed traffic and forwards them to a proxy/squid cache.
This is because these schedulers hash on the destination IP address; with
normal non-transparent LVS traffic, every packet's destination IP is just the
VIP, so there would be nothing useful to balance on.
For example:
# mark all routed HTTP and HTTPS traffic
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j MARK --set-mark 1
# send marked packets to routing table 100, which delivers them to the
# local stack regardless of destination IP so that LVS can intercept them
ip rule add prio 100 fwmark 1 table 100
ip route add local 0/0 dev lo table 100
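For completeness, the matching virtual service is then defined on the
firewall mark rather than on a VIP:port. A minimal sketch (the mark value is
the one set above; the squid realserver addresses are hypothetical, and I'm
assuming direct routing to the proxies):

# fwmark-based virtual service using the lblc scheduler
ipvsadm -A -f 1 -s lblc
# add the squid boxes as realservers in direct routing (gatewaying) mode
ipvsadm -a -f 1 -r 10.0.0.21 -g
ipvsadm -a -f 1 -r 10.0.0.22 -g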
There's a fuller description on page 16 of our load balancing web filters guide:
http://uk.loadbalancer.org/pdffiles/Web_Proxy_Deployment_Guide.pdf
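Incidentally, on your NAT question at the bottom of your mail: yes, that would
explain it. LVS persistence is keyed on the client source IP, so if all 20
workstations sit behind one NAT address, the director sees a single client and
pins it to a single realserver for the full persistence timeout. A quick way
to check is to dump the connection table and see whether every entry shows the
same source address (port 3077 here is just your second virtual service):

# list the IPVS connection table numerically; persistence templates
# show up as extra entries keyed on the client IP
ipvsadm -Lcn | grep 3077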
On 1 March 2013 16:34, Robinson, Eric <eric.robinson@xxxxxxxxx> wrote:
> Need some help understanding the effect of wlc, lblc, and persistent.
>
> In our environment, the following realserver configuration in ldirectord
> results in pretty even load balancing.
>
> # Virtual Server for tomcat(site103), Outside to Inside
> virtual=192.168.5.100:3103
> real=192.168.10.64:3103 masq
> real=192.168.10.63:3103 masq
> service=http
> request="/mobiledoc/jsp/catalog/xml/CheckDBConnection.jsp"
> receive="success"
> scheduler=wlc
> protocol=tcp
> checktype=3
> persistent=15
> Here's the output of ipvsadm -Ln
>
>
> Prot LocalAddress:Port Scheduler Flags
>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
> TCP  192.168.5.100:3103 wlc persistent 15
>   -> 192.168.10.64:3103           Masq    1      24         15
>   -> 192.168.10.63:3103           Masq    1      23         22
>
> Whereas the following config results in very uneven load balancing.
>
> # Virtual Server for tomcat(site077), Outside to Inside
> virtual=192.168.5.100:3077
> real=192.168.10.64:3077 masq
> real=192.168.10.63:3077 masq
> service=http
> request="/mobiledoc/jsp/catalog/xml/CheckDBConnection.jsp"
> receive="success"
> scheduler=lblc
> protocol=tcp
> checktype=3
> persistent=360
> Here's the output of ipvsadm -Ln
> Prot LocalAddress:Port Scheduler Flags
>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
> TCP  192.168.5.100:3077 lblc persistent 360
>   -> 192.168.10.63:3077           Masq    1      16         9
>   -> 192.168.10.64:3077           Masq    1      1          0
> Why is the second one so imbalanced? There are probably 20 computers using
> the second one, so roughly half of them should be going to each realserver.
> Once a client connects to a realserver, the connection should persist for at
> least 360 seconds, but that shouldn't result in such a disparity in the
> number of connections going to each RS, should it?
>
> Wait... if the customer is behind NAT, and all of their workstations appear
> to have the same source IP, I guess that would cause this behavior?
>
> --
> Eric Robinson
--
Regards,
Malcolm Turnbull.
Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users