Re: [lvs-users] ipvsadm and packets leaving a gre tunnel

To: " users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] ipvsadm and packets leaving a gre tunnel
From: "Marco Lorig" <MLORIG@xxxxxxx>
Date: Tue, 22 Jul 2008 16:03:13 +0200

Addressing the server directly, without ipvsadm, works fine and pmtudisc 
seems to function properly.
It looks like Linux caches the result of pmtudisc, or at least the correct MTU 
for reaching the destination: connecting via ipvsadm then also works fine.
~10 minutes later, addressing the ipvsadm service no longer works and the file transfer 
fails.

Reconnecting to the server directly again still solves the problem for a while.

I would be very grateful if anyone could explain this behaviour to me.
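
The ~10 minute window is suggestive: learned PMTU values live in the kernel's route cache and expire after net.ipv4.route.mtu_expires seconds, which defaults to 600, i.e. exactly 10 minutes. A minimal sketch of that arithmetic (the sysctl path is standard; the realserver IP is a placeholder):

```shell
# Learned PMTU entries are cached per destination and expire after
# net.ipv4.route.mtu_expires seconds (kernel default: 600).
MTU_EXPIRES=600
echo "cached PMTU entries expire after $((MTU_EXPIRES / 60)) minutes"

# On the director you could verify the live value and inspect a cached route
# (assumes iproute2 is installed; <realserver-ip> is a placeholder):
#   cat /proc/sys/net/ipv4/route/mtu_expires
#   ip route get <realserver-ip>   # shows the cached mtu, if one was learned
```

If the expiry matches the observed window, the direct connection is what repopulates the cache, and traffic through ipvsadm merely benefits from it until the entry ages out.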

Thanks in advance



-------- Original Message --------
> Date: Tue, 22 Jul 2008 15:10:42 +0200
> From: "Marco Lorig" <MLORIG@xxxxxxx>
> To: " users mailing list." 
> <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
> Subject: Re: [lvs-users] ipvsadm and packets leaving a gre tunnel

> Hi Joe,
> > have you read the section in the HOWTO on MTU?
> I've read the section, but if I understand it correctly, the only solution in
> a LVS-NAT env is to set the MTU by hand on each route or interface on the
> realservers, which unfortunately I don't have access to.
> We had been running this env since 2002 with kernel 2.4.19 with no
> problems (running the director on the gre interface). Recently we upgraded
> our systems to kernel 2.6.18, which causes these problems.
> The gre tunnel exists between the directors for failover reasons (each
> director is located in a separate datacenter, interconnected by WAN).
> Copying files like this:
> Client ---scp----> Director ---GRE tunnel nopmtudisc--> Director ----scp
> ---> Server and vice versa
> works fine, but if ipvsadm is enabled on the second director to serve
> multiple servers, the MTU problem occurs.
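
For reference, the GRE hop in that path cannot carry full-size Ethernet frames: an unkeyed GRE-over-IPv4 tunnel adds a 20-byte outer IP header plus a 4-byte GRE header. A short sketch of that arithmetic (assuming a standard 1500-byte Ethernet MTU on the underlying link):

```shell
# MTU budget across the GRE tunnel between the directors
# (assumption: plain unkeyed GRE over IPv4 on a 1500-byte Ethernet link).
ETH_MTU=1500
OUTER_IP_HDR=20   # outer IPv4 header added by the tunnel
GRE_HDR=4         # base GRE header, no key/sequence options
TUNNEL_MTU=$((ETH_MTU - OUTER_IP_HDR - GRE_HDR))
echo "gre tunnel payload MTU: $TUNNEL_MTU"
```

So a full 1500-byte packet from the client cannot cross the tunnel intact; with nopmtudisc set on the tunnel, delivery depends on fragmentation or on the endpoints learning the smaller MTU, which is exactly where a NAT'ed ipvsadm path can get in the way.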
> thanks in advance
> Marco
> _______________________________________________
> mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
> or go to
