David,
The way I understand it is that as long as the hash table server pool
size does not change, then the hash will always hit the same server
(irrespective of weight).
That is the way it worked on HAProxy at least, but I haven't looked at
the LVS source code or tested it.
Willy did write a 'consistent hash' which works better (on HAProxy),
but we always use stick tables to be sure.
Very interested to know how your testing goes though.
This suggests the hash is taken against the total server array size:
http://kb.linuxvirtualserver.org/wiki/Destination_Hashing_Scheduling
Which implies it would work...
I've never really understood how the overloading algo works though :-).
BTW: HAProxy can hash on the actual URL rather than just the destination IP.
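Purely for illustration (not from any real config - the backend name, server
names and IPs are made up), the HAProxy side of what I mean looks roughly
like this: 'balance uri' hashes on the URL, 'hash-type consistent' is Willy's
consistent hash, and the stick table pins clients regardless of what the
hash does:

backend proxies
    balance uri
    hash-type consistent
    stick-table type ip size 200k expire 30m
    stick on src
    server proxy1 192.168.90.58:80 check
    server proxy2 192.168.90.59:80 check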
On 24 October 2014 16:14, David Waddell <David.Waddell@xxxxxxxxxxxxxx> wrote:
> Malcolm,
> Supposing we could define a pool of IPs for real servers before they
> exist - would we not still need persistence, to prevent requests being
> steered to a different real server as weights were increased to bring a real
> server 'online'?
> Whether it's the hashing or a weight calculation that determines the real
> server, we would still need it to persist, which would bring us back to the
> same problem, I think?
> Thanks
> David.
> -----Original Message-----
> From: lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx
> [mailto:lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx] On Behalf Of David Waddell
> Sent: 24 October 2014 15:40
> To: LinuxVirtualServer.org users mailing list.
> Subject: Re: [lvs-users] persistence and destination IP hashing
>
> Hi Malcolm
>
> (not Matt! Apologies for that).
>
> I see what you mean; we can avoid re-hashing if the pool size is fixed,
> and use weights to control 'real' membership of the pool.
>
> We're looking at auto-scaling these instances in a cloud environment, so
> it may not be possible to know the IP addresses of the real servers in
> advance.
> But we will think that over to see if anything is possible.
>
>
> Thanks again
> David
>
> -----Original Message-----
> From: lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx
> [mailto:lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx] On Behalf Of Malcolm
> Turnbull
> Sent: 24 October 2014 15:12
> To: LinuxVirtualServer.org users mailing list.
> Subject: Re: [lvs-users] persistence and destination IP hashing
>
> David,
>
> Ah I see, my mistake, I read it too quickly...
> As far as I'm aware the only thing you could try is to pre-populate the hash
> table, i.e.:
>
> Set up 10 backend servers at the start, but only give X of them a weight > 0.
> As you need more proxies, just modify the weights, and the dh algo should not
> need to re-hash?
>
> I haven't tested this on LVS but I've seen it done on HAProxy before we
> helped implement proper stick tables.
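> Purely as an untested sketch of what that could look like on the LVS side
> (fwmark 1 and the .58/.59 addresses are just placeholders borrowed from
> your example):
>
> # dh virtual service on fwmark 1, with the whole pool registered up front
> ipvsadm -A -f 1 -s dh
> ipvsadm -a -f 1 -r 192.168.90.58 -g -w 1   # live proxy
> ipvsadm -a -f 1 -r 192.168.90.59 -g -w 0   # reserved, weight 0 for now
> # later, bring the reserved proxy online by changing only its weight,
> # so the pool size (and hopefully the hash) stays the same
> ipvsadm -e -f 1 -r 192.168.90.59 -g -w 1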
>
> On 24 October 2014 14:59, David Waddell <David.Waddell@xxxxxxxxxxxxxx> wrote:
>> Hi Matt,
>>
>> Thanks for the reply.
>>
>> We have a setup much like you describe - requests are made directly to the
>> web server, we route these through the LB, and the LB processes them
>> using fwmarks.
>> The DH algo is working fine for us; but when we add persistence, we have
>> a problem.
>> As soon as the persistence logic establishes affinity between a proxy
>> and a web server, we cannot establish a TCP connection from any other proxy
>> to that web server.
>>
>> The CIP based persistence is overriding the DIP hashing on the path back
>> from the web server to the proxy.
>>
>> Session state on the proxy is the reason we are attempting this; if it
>> is possible to combine DH and persistence, it would resolve the problem for
>> us.
>>
>>
>> Thanks
>> David
>>
>>
>> -----Original Message-----
>> From: lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx
>> [mailto:lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx] On Behalf Of Malcolm
>> Turnbull
>> Sent: 24 October 2014 14:22
>> To: LinuxVirtualServer.org users mailing list.
>> Subject: Re: [lvs-users] persistence and destination IP hashing
>>
>> David,
>>
>> The dh scheduler only really works if the kernel can see the real
>> destination address; what you need is for traffic passing through the
>> load balancer to be transparently load balanced to its destination....
>>
>> So rather than clients requesting the load balancer's VIP (virtual IP),
>> you need to change the routing so that the clients request
>> www.microsoft.com or www.google.com directly BUT these requests are
>> routed through the load balancer....
>> Then you need to tell the load balancer to transparently intercept
>> that traffic with something like:
>>
>> iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1
>> iptables -t mangle -A PREROUTING -p tcp --dport 443 -j MARK --set-mark 1
>> ip rule add prio 100 fwmark 1 table 100
>> ip route add local 0/0 dev lo table 100
>>
>> The same way you would with a transparent SQUID proxy....
>>
>> Check out page 15 of the Loadbalancer.org web filter deployment guide
>> for more information about this kind of set up:
>> http://pdfs.loadbalancer.org/Web_Proxy_Deployment_Guide.pdf
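>>
>> And for completeness, on the LVS side that marked traffic would then be
>> picked up by an fwmark-based virtual service, something along these lines
>> (illustrative only - the real-server IPs are just the ones from your example):
>>
>> ipvsadm -A -f 1 -s dh
>> ipvsadm -a -f 1 -r 192.168.90.58 -g
>> ipvsadm -a -f 1 -r 192.168.90.59 -g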
>>
>> On 24 October 2014 14:11, David Waddell <David.Waddell@xxxxxxxxxxxxxx> wrote:
>>> Hi
>>> We are trying to use LVS as a virtual load balancer around a
>>> transparent http proxy, and are wondering if it is possible to use
>>> persistence and destination hashing together
>>> (from our tests, and as suggested in the how-tos, persistence is CIP-based,
>>> so it may not be possible).
>>>
>>> To explain our setup a little:
>>> - We have a pool of VMs running a transparent http proxy.
>>> - We have set up an LVS service on either side of this proxy, with:
>>>     - A 'request' path service that schedules requests into the proxy,
>>>       using src-based hashing.
>>>     - A 'response' path service that schedules the http responses back
>>>       through the proxy, using dst-based hashing.
>>> - The http proxy maintains state around the 'session' (the set of URLs
>>>   requested by a client IP), so we wish to direct each client to the same
>>>   proxy instance.
>>> Src hashing allows us to achieve this, and dst hashing on the response path
>>> ensures TCP connections get established correctly (a rough sketch of the
>>> ipvsadm setup follows below).
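>>>
>>> Roughly how we create the two services (a simplified sketch - the mangle
>>> rules that set fwmarks 1 and 3 are left out, and persistence is only added
>>> later, as described below):
>>>
>>> # request path: source-hash so a given client always hits the same proxy
>>> ipvsadm -A -f 1 -s sh
>>> ipvsadm -a -f 1 -r 192.168.90.58 -g
>>> ipvsadm -a -f 1 -r 192.168.90.59 -g
>>> # response path: destination-hash on the client IP so the web server's
>>> # reply goes back through the same proxy
>>> ipvsadm -A -f 3 -s dh
>>> ipvsadm -a -f 3 -r 192.168.90.58 -g
>>> ipvsadm -a -f 3 -r 192.168.90.59 -g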
>>>
>>> An issue arises when we try to add new instances of the proxy to the
>>> pool.
>>> The hashing changes, which breaks the statefulness (a user's requests may
>>> go to a different proxy).
>>> To address that, we added persistence, which worked for requests.
>>>
>>> However, persistence on the response path is, perhaps as expected, sending
>>> TCP packets to the 'wrong' proxy instance in a lot of cases.
>>> This is because the persistence logic is using the web server IP
>>> address (the CIP on the response path).
>>>
>>> An example of the problem for us:
>>>
>>> Using 2 client IPs, 172.29.0.12 and 172.29.0.11; real servers (http proxies)
>>> 192.168.90.58 and 192.168.90.59; and web server 192.168.10.17.
>>>
>>> IP Virtual Server version 1.2.1 (size=4096)
>>> Prot LocalAddress:Port Scheduler Flags
>>>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
>>> FWM  1 sh persistent 60
>>>   -> 192.168.90.58:0              Route   1      0          0
>>>   -> 192.168.90.59:0              Route   1      0          0
>>> FWM  3 dh persistent 60
>>>   -> 192.168.90.58:0              Route   1      0          0
>>>   -> 192.168.90.59:0              Route   1      0          0
>>>
>>> FWM 1 represents the request path; FWM 3 the response path.
>>>
>>> IPVS connection entries
>>> pro expire state        source              virtual             destination          pe name pe_data
>>> A) TCP 00:58 ESTABLISHED 172.29.0.12:45659  192.168.10.17:80    192.168.90.58:80
>>> B) IP  00:58 NONE        192.168.10.17:0    0.0.0.3:0           192.168.90.59:0
>>> C) TCP 01:55 FIN_WAIT    172.29.0.11:50919  192.168.10.17:80    192.168.90.59:80
>>> D) IP  00:55 NONE        172.29.0.11:0      0.0.0.1:0           192.168.90.59:0
>>> E) TCP 00:59 SYN_RECV    192.168.10.17:80   172.29.0.12:14038   192.168.90.59:14038
>>> F) TCP 01:55 FIN_WAIT    192.168.10.17:80   172.29.0.11:42443   192.168.90.59:42443
>>> G) IP  00:58 NONE        172.29.0.12:0      0.0.0.1:0           192.168.90.58:0
>>>
>>> In the example above, C and F represent a successful proxied http request:
>>> C is the request from client to proxy, F the response from web server to proxy.
>>> This request was made first and establishes persistence entries for
>>> client 172.29.0.11 -> proxy 192.168.90.59 (D) and for web server
>>> 192.168.10.17 -> proxy 192.168.90.59 (B).
>>> All well.
>>>
>>> We subsequently make a request A) from client 172.29.0.12; src hashing
>>> places this on proxy 192.168.90.58 (persistence entry G).
>>> That proxy then makes the request to the web server, but the response, shown
>>> in E), is directed by the existing persistence entry B (keyed on the web
>>> server IP) to proxy 192.168.90.59, so the TCP connection between
>>> 192.168.90.58 and 192.168.10.17 cannot be established.
>>>
>>> Obviously, re-engineering the proxy instances to share state would be
>>> the ideal solution, as persistence would then not be required.
>>> But we are wondering if there is any way to combine destination
>>> hashing and persistence successfully? Currently persistence is
>>> enforcing scheduling based on the src IP even when dh is specified.
>>>
>>> Thanks
>>> David
>>>
>>
>
--
Regards,
Malcolm Turnbull.
Loadbalancer.org Ltd.
Phone: +44 (0)330 1604540
http://www.loadbalancer.org/
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users