To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] ipvsadm and packets leaving a gre tunnel
From: "Marco Lorig" <MLORIG@xxxxxxx>
Date: Tue, 22 Jul 2008 18:13:57 +0200
-------- Original Message --------
> Date: Tue, 22 Jul 2008 08:25:50 -0700 (PDT)
> From: Joseph Mack NA3T <jmack@xxxxxxxx>

> neat.
> 
> How do the clients know which datacenter to route to?

The clients' (location 1) default route points to director1 at location 1. In
some cases, but not all (we have about 40 pairs of master/slave ipvsadm
directors interconnected by GRE tunnels), packets are rewritten by iptables
(DNAT) to reach the correct destination on the other side (location).
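
Just to illustrate what I mean (the addresses, port and interface name here are
made up, not our actual rules), such a DNAT rule on director1 looks roughly
like:

   # rewrite traffic arriving for a service that lives behind director2,
   # so it gets routed across the GRE tunnel to the other location
   iptables -t nat -A PREROUTING -i bond0 -d 10.1.0.80 -p tcp --dport 22 \
            -j DNAT --to-destination 10.2.0.80:22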
 

> can you set mss on this interface?
> 
> ip_vs() does all sorts of things to the interface. I don't 
> expect anyone has tried LVS on a gre interface.
> 
> (Any code fixes aren't likely to arrive in time to help 
> here.) What if you use two nics, one for the gre tunnel with 
> mss set and one for ipvsadm?

I can check this out in my test environment, but the new production systems are
blades, which are limited to four Gbit NICs; those are bonded (Linux bonding)
into two pairs (one bonding interface towards the clients, one towards the
servers). We use dot1q trunking to connect to the different VLANs. The GRE
interface is set to an MTU of 1476 and nopmtudisc on both sides.
I'm going to test whether it makes any difference if I set all physical
interfaces to a lower MTU.
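
For reference, the tunnel setup is roughly equivalent to the following (the
endpoint and tunnel addresses here are just placeholders, not our real ones):

   # GRE adds 24 bytes of overhead (20 outer IP + 4 GRE), hence 1500 - 24 = 1476
   ip tunnel add gretun mode gre local 192.0.2.1 remote 198.51.100.1 nopmtudisc
   ip link set gretun mtu 1476 up
   ip addr add 10.255.0.1/30 dev gretun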

> I didn't get this. You have a route from the client to 
> director1, through the gre tunnel, to director2 (with no 
> ipvsadm rules) to the realserver? (the realserver has a 
> public IP?)

All networks are in private address ranges.

Client --default route--> director1 --route via gretun--> director2 (connected
locally to the realservers' network) <--default route-- realserver
(on the way back, director2 reaches the clients' network via gretun to
director1)

If necessary, I could upload a Visio diagram as a JPG to show in detail what's
happening.

Just think of two Linux routers connected by GRE: one is directly connected to
the clients' network, the other is directly connected to the realservers'
network.
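
As a minimal sketch of the routing (all networks invented for the example):

   # on router1 (client side): the realserver network lives behind the tunnel
   ip route add 10.2.0.0/24 dev gretun

   # on router2 (realserver side): the client network lives behind the tunnel
   ip route add 10.1.0.0/24 dev gretun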

Now I start a file transfer. A client connects to the realserver through
router1 and router2, and the realserver serves the request fine.

Let's start ipvsadm on the second router to balance across n scp servers.
If you connect directly from the client to a specific server, it works, which
means something happens with the MTU behaviour: it looks like Linux "discovers"
the MTU correctly, or at least both sides can communicate.
It also seems that Linux remembers MTU issues, once discovered, for about
10 minutes.
If I connect to the ipvsadm instance now, within those 10 minutes, everything
works fine.
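
For context, the ipvsadm setup on router2 is roughly of this form (the VIP, the
realserver addresses and the NAT forwarding method are stand-ins, not our exact
configuration):

   # balance TCP/22 (scp) round-robin across the realservers
   ipvsadm -A -t 10.2.0.80:22 -s rr
   ipvsadm -a -t 10.2.0.80:22 -r 10.2.0.11:22 -m
   ipvsadm -a -t 10.2.0.80:22 -r 10.2.0.12:22 -m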

Once that timeout is reached, a connection via ipvsadm isn't possible anymore.
IMHO it looks like a connection through ipvsadm doesn't return any MTU
size/information, or not the right one. If that size/information was discovered
beforehand (e.g. by connecting directly), it works fine.
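
One thing I still want to look at (just a guess at this point): the learned
PMTU sits in the kernel's routing cache and, if I read the docs right, expires
after net.ipv4.route.mtu_expires seconds, which defaults to 600 -- that would
match the roughly 10 minutes I'm seeing. Something like this should show it:

   # cached routes, including any PMTU learned for a client address
   ip route show cache

   # how long a learned PMTU is kept (seconds); default is 600
   sysctl net.ipv4.route.mtu_expires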

I'm wondering why ipvsadm can use the discovered information but can't discover
it by itself.
I haven't found any detailed information about MTU processing in 2.6 at all...
Any hints welcome ;)

Thanks in advance.

regards Marco

