On Tue, 22 Jul 2008, Marco Lorig wrote:
How do the clients know which datacenter to route to?
The clients' (location1) default route points to
director1 at loc1. In some cases, not all (we have about 40
pairs of master/slave ipvsadm directors interconnected by
GRE tunnel), packets are translated by iptables (DNAT) to
reach the correct destination "on the other side".
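For anyone following along, such a DNAT rule might look
roughly like this (the service and realserver addresses are
invented for illustration):

    # on director1 (loc1): rewrite packets addressed to a loc2 service
    # so they travel over the GRE tunnel to the loc2 side
    iptables -t nat -A PREROUTING -d 192.168.1.10 -p tcp --dport 80 \
        -j DNAT --to-destination 192.168.2.10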
quite a setup
If director1 is the default route, then packets get to
director2 via NAT, and then, if the realservers there are
down, the packets come back via GRE to director1?
I can check this out in my testing area, but the new
production systems are blades which are limited to four
Gbit NICs, bonded (Linux bonding) into two pairs (one
bonding interface to the clients, one to the servers). We
use dot1q trunking to connect to the different VLANs. The
GRE interface is set to MTU 1476 with nopmtudisc (both
sides).
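For reference, a GRE tunnel with those settings would
typically be brought up like this on each side (the
endpoint and tunnel addresses are placeholders; swap
local/remote on the far end):

    ip tunnel add gre1 mode gre local 10.0.1.1 remote 10.0.2.1 nopmtudisc
    ip link set gre1 mtu 1476 up
    ip addr add 192.168.100.1/30 dev gre1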
can you set MSS on this interface?
this is a little out of my area. Does nopmtudisc mean that
the source side of the GRE tunnel doesn't hear about the
results of PMTU discovery?
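(On the MSS question: one standard trick, sketched here for
the 1476-byte GRE path, is to clamp TCP MSS in netfilter on
the director; MSS = MTU - 40 for IPv4 TCP, so 1476 - 40 =
1436:)

    # clamp MSS on forwarded SYNs to the discovered path MTU
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --clamp-mss-to-pmtu
    # or pin it explicitly for the GRE path
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --set-mss 1436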
I'm going to test whether it makes any difference when I
set all the physical interfaces to a lower MTU.
if that works, next try setting the MTU for the route
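i.e. something along these lines (the destination network
and gateway are invented):

    # force a lower MTU toward the far network; "lock" stops PMTU
    # discovery from changing it
    ip route add 192.168.2.0/24 via 192.168.100.2 mtu lock 1476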
I didn't get this. You have a route from the client to
director1, through the gre tunnel, to director2 (with no
ipvsadm rules) to the realserver? (the realserver has an
address the client can route to?)
All networks are private address range.
so the client can route to the realserver then. I'm OK
with that.
just think about two Linux routers, connected by GRE. One
is directly connected to the clients' network, the other
is directly connected to the realservers' network.
Now I start a file transfer. A client connects to the
realserver through router1 and router2. The realserver
serves the request well.
Let's start ipvsadm on the second router to balance
across n scp-servers.
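(A minimal sketch of such a service, NAT forwarding,
addresses invented:)

    # on router2: balance scp (tcp/22) across two realservers in NAT mode
    ipvsadm -A -t 192.168.2.10:22 -s rr
    ipvsadm -a -t 192.168.2.10:22 -r 192.168.2.101:22 -m
    ipvsadm -a -t 192.168.2.10:22 -r 192.168.2.102:22 -m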
If you connect directly from client to a realserver,
before running ipvsadm I take it
it works, which means something happens to the MTU
behaviour. It seems like Linux "discovers" the MTU
correctly, or at least both sides can communicate. It also
seems like Linux remembers MTU issues, once discovered,
for about 10 minutes. Connecting now, and within those 10
minutes, to the ipvsadm instance
to the VIP on director2 (rather than the realserver on
director2) I assume
everything works fine.
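(The roughly 10-minute memory is consistent with the
kernel's route-cache PMTU expiry, which defaults to 600
seconds; a few ways to inspect and reset it, destination
address invented:)

    sysctl net.ipv4.route.mtu_expires   # lifetime of a learned PMTU, default 600s
    ip route get 192.168.2.101          # shows a cached per-destination mtu, if any
    ip route flush cache                # forget learned PMTU state immediately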
When the timeout is reached, a connection via ipvsadm
isn't possible anymore.
a new connection, or the current connection as well?
IMHO it looks like a connection through ipvsadm doesn't
return any MTU information, or not the right size. If the
size/information was discovered before (e.g. by connecting
directly), it works.
I'm wondering why ipvsadm can use discovered information
but can't discover it by itself. I haven't found any
information about detailed MTU processing in 2.6 at all...
Any hints welcome ;)
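One way to see whether PMTU information makes it back
through the director is to probe with DF-flagged pings of
increasing size (VIP address invented):

    # 1472 bytes of payload + 28 bytes of ICMP/IP headers = 1500; with
    # DF set this should provoke "Frag needed and DF set (mtu = 1476)"
    # if the needed-frag ICMP gets back through the GRE path
    ping -M do -s 1472 192.168.2.10
    # 1448 + 28 = 1476, which should fit the tunnel
    ping -M do -s 1448 192.168.2.10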
Julian's comments in the MTU section of the HOWTO say that
Linux doesn't do PMTU on ipip tunnels in 2.6 (what it does
in 2.4 I don't know)
Marco's setup has the VIP on an interface which receives
its LVS packets via a GRE tunnel. It worked for his 2.4
setup, but stopped working (presumably due to PMTU
problems) when he switched to 2.6.
Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map
generator at http://www.wm7d.net/azproj.shtml
Homepage http://www.austintek.com/ It's GNU/Linux!