I have to say that I have been on this list since its inception, but
have never really contributed much. I work at Monster.com and have
run into this same problem with users on AOL's network. I am sad to say
that we don't use this fine product in production. We use a
product called HydraWeb, which is built on basically the same premise as your
patch: developed on BSDi, with a text-based menu system for building the
clusters. Their fix for the problem was to add a mask to the persistence
algorithm, but the mask used is a /16 (thank you, AOL!). We average
about 20 Mbit/s through this HydraWeb to about 20 NT servers
on the back end, although only two of the clusters have persistence turned
on, with a timeout of 2 hours.
This problem should really be solved on the back end of your
clusters. You should not rely on persistence to solve all your session-state
issues. If you need to track a user, that is best done with "cookies" and some
central method for the clusters to track those cookies. This will give
you more functionality in whatever web application you use in the long
run. There is no telling what else you'll want the LVS to track and make
decisions on next.
Quoting Lars Marowsky-Bree (lmb@xxxxxxxxx):
> Good morning,
>
> I encountered a "nice" feature of cache clusters at a major German ISP,
> notably "T-Online". Their cache clusters appear to not do persistence, which
> means that a user may get directed to multiple proxies during a single session
> ...
>
> The effect: During a single session, the user may appear to be coming from
> multiple source IPs, effectively rendering "persistent port" useless *sigh*
>
> The "solution": Either we manage to sell T-Online a Linux VirtualServer which
> would support proper persistence (though highly desirable, this is rather
> unlikely;) or I hack the LVS code to accept a netmask for the persistent port.
>
> The code would iph->saddr & netmask whenever referring to the templates, thus
> you could specify that you want to group all clients from the same /28 /24 or
> whatever to the same real server.
>
> I know this is walking a fine line between accommodating broken cache clusters
> and bundling all clients on the same real server.
>
> Comments or someone willing to do the patch? ;-)
>
>
> Sincerely,
> Lars Marowsky-Brée
>
> --
> Lars Marowsky-Brée
> Network Management
>
> teuto.net Netzdienste GmbH
>
--
Patrick Lynch
plynch@xxxxxxxxxx
----------------------------------------------------------------------
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx