Re: binding 2 persistence rules/routes...

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>, Roberto Nibali <ratz@xxxxxxxxxxxx>
Subject: Re: binding 2 persistence rules/routes...
From: Joseph Mack <mack.joseph@xxxxxxx>
Date: Thu, 10 Jul 2003 08:12:06 -0400
Joseph Mack wrote:
> 
> Roberto Nibali wrote:
> 
> >
> > You have a broken :) service which you would like to load balance with 
> > persistence.
> 
> If it wasn't broken, what would it do?


I mean, would it be a single-port service, or how would you design a non-broken
multi-port service?

Joe

> > Normal state of affairs
> > -----------------------
> > client A sends http request to VIP:80 and gets to RIP1:80
> > client A receives some new port information (31337) in the reply
> > client A sends some request to VIP:31337 and gets to RIP2:31337 where the
> > service of course will not have opened the port because RIP1 did.
> >
> > Port 0 state of affairs
> > -----------------------
> > client A sends http request to VIP:80 and gets to RIP1:80
> > client A receives some new port information (31337) in the reply
> > client A sends some request to VIP:31337 and gets to RIP1:31337, which is very
> > happily accepting the connection because it told client A to connect to this
> > port.
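For completeness, the "port 0" behaviour Roberto describes above corresponds to a
persistent, all-ports virtual service in ipvsadm (a port 0 service has to be set up
with persistence). A minimal sketch, with made-up VIP, real server addresses and
timeout, using direct routing:

    # persistent virtual service covering all TCP ports of the VIP, 10 min timeout
    ipvsadm -A -t 192.168.1.100:0 -s wlc -p 600

    # real servers, reached via direct routing (-g)
    ipvsadm -a -t 192.168.1.100:0 -r 10.0.0.1 -g
    ipvsadm -a -t 192.168.1.100:0 -r 10.0.0.2 -g

With this in place, once client A hits VIP:80 and is assigned RIP1, its later
connection to VIP:31337 goes to RIP1 as well, as in the second scenario above.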

-- 
Joseph Mack PhD, High Performance Computing & Scientific Visualization
SAIC, Supporting the EPA Research Triangle Park, NC 919-541-0007
Federal Contact - John B. Smith 919-541-1087 - smith.johnb@xxxxxxx