Re: Large HTTP GET/POST timeout

To: Julian Anastasov <ja@xxxxxx>
Subject: Re: Large HTTP GET/POST timeout
Cc: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Joseph Mack <mack.joseph@xxxxxxx>
Date: Wed, 02 Jun 2004 05:58:58 -0400

Julian Anastasov wrote:
> 
>         Hello,
> 
> On Mon, 31 May 2004, Joseph Mack wrote:
> 
> > > > so what is the MTU doing in the output of `ip addr show dev tunl0`?
> > > >
> > > > I can set it (can't I?). Is the mtu meaningless, ignored, what?
> > >
> > >         It is ignored for IPVS traffic, IPVS has its own encapsulation
> > > and uses the route to RIP. IIRC you do not need to configure tunl0
> > > in director.
> >
> > So PMTU cannot work for ip_vs?
> 
>         I mean tunl0 is usually needed to receive IPIP packets, so
> in normal cases you do not need such interface in director even
> when using TUN real servers. The PMTU setting must be for the route
> to RIP. Such setting (and special route to daddr=RIP) can be needed
> only if PMTU to RIP is less than the outdev MTU.

OK, so Ratz's work-around should work?

If you put a tunl0 device on the director, would it receive the PMTU
packets back from the realserver?
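
For my own reference, and going by the "IPVS uses the route to RIP"
point above, I'd check which route the encapsulated traffic actually
follows with something like the following (192.168.2.10 is just a
made-up RIP, not tested here):

    # made-up RIP for illustration
    RIP=192.168.2.10

    # which route does the director use to reach the realserver?
    # (per Julian, this is the route the IPVS encapsulation follows,
    # not tunl0)
    ip route get $RIP

    # any per-route mtu that has been set will show up in the routing
    # table listing
    ip route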


> > In that case Ratz's idea of setting the mtu for the route (DIP->RIP) won't
> > work?
> 
>         It is needed but it is not on tunl0, eg. it is via eth0

Yes, understood.
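
So, to be sure I have it right, something like this on the director is
what's meant -- the addresses and the 1400-byte path MTU are made-up
examples, not a tested configuration:

    # made-up example addresses: RIP of a tunnel realserver and the
    # director's next hop towards it
    RIP=192.168.2.10
    GW=192.168.1.254

    # per-destination route to the RIP via eth0 carrying the (smaller)
    # path MTU -- this is where the PMTU setting lives, not on tunl0
    ip route replace $RIP via $GW dev eth0 mtu 1400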
 
> > So should people set the MTU on the VIP for an LVS-Tun director?
> 
>         The VIP does not play here. The forwarded traffic is
> routed to daddr=RIP (as for the other forwarding methods). Only the
> clients need a route to VIP.

I was thinking of reducing the MTU on the CIP-VIP segment, so that
there would be no problem on the DIP-RIP segment. Is that a way
of handling it?
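
To be concrete, I'm imagining something like this on the client (or on
the client-side router) -- untested, and assuming a 1500-byte DIP-RIP
path plus the 20-byte IPIP overhead:

    # shrink the client's interface MTU so that packets arriving at the
    # VIP, once the director adds the 20-byte IPIP header, still fit
    # the 1500-byte DIP-RIP path
    ip link set dev eth0 mtu 1480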

Joe

-- 
Joseph Mack PhD, High Performance Computing & Scientific Visualization
SAIC, Supporting the EPA Research Triangle Park, NC 919-541-0007
Federal Contact - John B. Smith 919-541-1087 - smith.johnb@xxxxxxx