In the general case we will have to solve this another way.
This can either be solved by the actual servers involved or by something in
front of them doing the routing. The standard application server solution is
either to replicate the user session (Netscape NAS/Kiva and others; less
scalable, but very tolerant of server failures) or to do I/O forwarding to the
server that holds the session (Locomotive; more scalable, less resilient).
The other way to do this is to have a layer-3 router in front of the servers
that remembers which server is supposed to get certain cookies or other
identifying information. (iPivot and others support layer-3 routing.)
I had been trying to reconcile the performance and simplicity of LVS/DR with
the occasional need for layer-3 routing. I realized while explaining the
problem to someone yesterday that the appropriate solution is to let LVS/DR do
fast load balancing to a second layer that handles the layer-3 routing when
needed. For this second layer you could use a modified Squid.
This gives you the flexibility to have appropriate fan-out when needed, and
at the right places. One or two LinuxDirectors (for backup and load
distribution), with LVS/DR feeding directly to some servers and through a
layer-3 router for others, would be a good solution. That is much easier than
trying to spoof the beginning of a connection and hand off in the kernel to do
layer-3 routing. These LinuxDirectors can then talk to as many layer-3 routers
as are needed to handle the load. Squid is very efficient, so this should not
be a large difficulty.
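To make the idea concrete, here is a minimal sketch of what that second
(Squid-like) layer would do; the server list, table, and cookie handling are
illustrative assumptions, not actual Squid code:

```python
# Sketch of cookie-based routing in the second layer: requests carrying a
# known session cookie go back to the real server that holds the session;
# new sessions are assigned round-robin. All names here are assumptions.

REAL_SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# session cookie value -> real server holding that session
session_table = {}

def pick_server(cookie_value):
    """Route a request to the server that holds its session, or assign one."""
    if cookie_value in session_table:
        return session_table[cookie_value]
    # New session: simple round-robin by current table size
    server = REAL_SERVERS[len(session_table) % len(REAL_SERVERS)]
    session_table[cookie_value] = server
    return server
```

The point is that only this layer needs session knowledge; LVS/DR in front of
it can stay fast and stateless.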
In the most scalable case, you want to avoid creating bottlenecks that are not
needed. DR mode allows this very well. Needing layer-3 routing and the related
session affinity is a symptom of an underdeveloped cluster session
distribution/forwarding capability. This will be the norm for a while, but we
should have an uncrippled mode to allow efficient operation when the cluster
can handle session distribution itself.
I will also point out that basing things on the IP address is problematic for
more reasons than aggressive caching. For instance, I use an ISDN router with
NAT and a dynamic IP address at home. After any significant pause, it hangs up,
and upon reconnection it will always have a new IP address. Sites like
BankOfAmerica, which uses Netscape NAS, are immune to any problems because NAS
uses both a cookie and session information embedded in forms to identify the
session for a user.
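A rough sketch of that dual lookup (the cookie and field names are my
assumptions for illustration, not NAS internals): try the cookie first, then
fall back to the session id embedded in the submitted form, so the client IP
address never matters.

```python
# Sketch of NAS-style session identification: cookie first, then the
# hidden form field. Cookie/field names are hypothetical.

def find_session_id(cookies, form_fields):
    """Identify the session without relying on the client IP address."""
    if "SESSIONID" in cookies:
        return cookies["SESSIONID"]
    # Client lost (or never sent) the cookie: use the embedded form field
    return form_fields.get("session_id")
```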
Lars Marowsky-Bree wrote:
> Good morning,
> I encountered a "nice" feature of cache clusters at a major German ISP,
> notably "T-Online". Their cache clusters appear not to do persistence, which
> means that a user may get directed to multiple proxies during a single
> session. The effect: during a single session, the user may appear to be
> coming from multiple source IPs, effectively rendering "persistence port"
> useless *sigh*
> The "solution": Either we manage to sell T-Online a Linux VirtualServer which
> would support proper persistence (though highly desireable, this is rather
> unlikely;) or I hack the LVS code to accept a netmask for the persistent port.
> The code would iph->saddr & netmask whenever referring to the templates, thus
> you could specify that you want to group all clients from the same /28 /24 or
> whatever to the same real server.
> I know this is walking a fine line between accommodating broken cache
> clusters and bundling all clients on the same real server.
> Comments or someone willing to do the patch? ;-)
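For reference, the masking Lars describes amounts to something like the
following sketch (in Python rather than the kernel C it would actually be;
function names are mine): the persistence template is keyed on
saddr & netmask instead of the full source address.

```python
# Sketch of the proposed persistence-template key: collapse the client
# address to its network so all clients in the same /24 (or /28, etc.)
# stick to the same real server. Illustration only, not LVS kernel code.

import ipaddress

def template_key(saddr, prefix_len):
    """Return the masked network address used as the persistence key."""
    net = ipaddress.ip_network(f"{saddr}/{prefix_len}", strict=False)
    return str(net.network_address)
```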
> Lars Marowsky-Brée
> Network Management
> teuto.net Netzdienste GmbH
> LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
> For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx
OptimaLogic - Finding Optimal Solutions Web/Crypto/OO/Unix/Comm/Video/DBMS
sdw@xxxxxxx Stephen D. Williams Senior Consultant/Architect http://sdw.st
43392 Wayside Cir,Ashburn,VA 20147-4622 703-724-0118W 703-995-0407Fax 5Jan1999