Re: direct routing/gateway issues.

To: Joseph Mack <mack@xxxxxxxxxxx>
Subject: Re: direct routing/gateway issues.
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: tc lewis <tcl@xxxxxxxxx>
Date: Tue, 18 Jul 2000 17:25:56 -0400 (EDT)
below...


> > this isn't working for me / i'm missing something.
> > here's the setup again, in detail:
> 
> my configure script makes this a lot easier (it's on the Documentation
> page).

nod, saw it.  i'm going to use horms' ultramonkey stuff eventually, i
think, so i chose to hold off on using any special config stuff just yet
while i test this out.


> Here's the conf file for what you've set up
> 
> #lvs_dr.conf (C) Joseph Mack mack@xxxxxxxxxxx
> LVS_TYPE=VS_DR
> INITIAL_STATE=on
> VIP=eth0:132 lvs 255.255.255.255 lvs
> DIRECTOR_INSIDEIP=eth2 director-inside 192.168.100.0 255.255.255.0 192.168.100.255
> DIRECTOR_DEFAULT_GW=router
> SERVICE=t telnet rr realserver1 realserver2
> SERVER_VIP_DEVICE=TP
> SERVER_NET_DEVICE=eth1
> SERVER_DEFAULT_GW=router
> #----------end lvs_dr.conf------------------------------------
> 
> where your /etc/hosts (or DNS) is the following
> 
> 64.208.49.1   router
> 64.208.49.132 lvs
> 192.168.100.130       director-inside
> 192.168.100.99        realserver1
> 192.168.100.98        realserver2
> 
> My configure script fails when it's run (although it's a bizarre failure
> and not one I'd anticipated).  The problem is that your VIP is not in the
> realserver network. Since you're using TP for the VIP on the realserver
> network, I didn't have a device in the 64.x.x.x network on the realserver
> and couldn't route to the outside world from the realservers. 
> 
> I haven't played with realservers for VS-DR having private network IPs.
> Make sure this isn't going to be a problem for your local routing.
> I'd put the VIP and the realservers into the same network for a start.

so my VIP and RIPs have to be on the same network, in essence meaning i
need a public ip address for each real server?  ick.

or if i used a different method like putting the VIP on an lo or eth
interface and hiding it, would that help?  is there a way to arp hide
something like eth1:0, or do i have to hide the entire eth1 device?  sigh.

there must be a way!
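
(maybe julian's "hidden" device patch for 2.2 kernels is one answer?  an
untested sketch, assuming the patch is applied, for each real server:

/sbin/ifconfig lo:0 64.208.49.132 netmask 255.255.255.255 up
/bin/echo 1 > /proc/sys/net/ipv4/conf/all/hidden
/bin/echo 1 > /proc/sys/net/ipv4/conf/lo/hidden

that would put the VIP on lo:0 and stop the whole lo device from answering
arp for it, without touching eth1.)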



> Since I'm about to dive out the door to Ottawa, I won't be able to do a
> 2nd round on this question till Sunday or so.
> 
> Good luck.

thanks!  enjoy your trip/conference/stuff.

-tcl.



> > 
> > [client]
> >    |
> > [router] 64.208.49.1
> >    |
> > [director] 64.208.49.130 (DIP) on eth1
> >            64.208.49.132 (VIP) on eth1:0
> >            192.168.100.130 on eth2
> >    |
> >    |
> > [real server 1] 192.168.100.99 on eth1
> >                 gw 64.208.49.1
> > [real server 2] 192.168.100.98 on eth1
> >                 gw 64.208.49.1
> > 
> > 
> > both interfaces of the director and real servers are connected to the same
> > switch--there's no physical segmentation there.  i'm using lvs-dr and
> > horms' ipchains method on the real servers to avoid the arp problem.
> > here's how i do it:
> > 
> > on the director:
> > # bring up the VIP as an alias on eth1 and pin a host route to it
> > /sbin/ifconfig eth1:0 64.208.49.132 netmask 255.255.255.0 up
> > /sbin/route add -host 64.208.49.132 dev eth1
> > # enable forwarding, then define the port 80 virtual service
> > # with least-connection scheduling
> > /bin/echo 1 > /proc/sys/net/ipv4/ip_forward
> > /usr/sbin/ipvsadm -A -t 64.208.49.132:80 -s lc
> > # add both real servers in direct-routing mode (-g, gatewaying)
> > /usr/sbin/ipvsadm -a -t 64.208.49.132:80 -r 192.168.100.99:80 -g
> > /usr/sbin/ipvsadm -a -t 64.208.49.132:80 -r 192.168.100.98:80 -g
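
(a quick sanity check at this point, assuming a stock ipvsadm: list the
table with

/usr/sbin/ipvsadm -L -n

and make sure the virtual service shows both realservers with "Route"
forwarding.)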
> > 
> > on the real servers:
> > # route the public /24 out eth1 and default via the router
> > /sbin/route add -net 64.208.49.0 netmask 255.255.255.0 dev eth1
> > /sbin/route add -net 0.0.0.0 gw 64.208.49.1
> > /bin/echo 1 > /proc/sys/net/ipv4/ip_forward
> > # horms' arp-problem workaround: redirect traffic for the VIP's port 80
> > # to the local web server on port 4080
> > /sbin/ipchains -A input -d 64.208.49.132 80 -p tcp -j REDIRECT 4080
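
(likewise, assuming stock ipchains, the redirect rule can be double-checked
with

/sbin/ipchains -L input -n

which should list the REDIRECT target for 64.208.49.132 port 80.)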
> > 
> > 
> > eth0 is not used on any of these machines.
> > 
> > i have another machine there that will eventually be used as a failover
> > director, but right now it's just chilling out.  it's 64.208.49.131 on
> > eth1 and 192.168.100.131 on eth2.  if i use this machine as the client,
> > everything works fine.  http responses are received.  life is wonderful.
> > if i use a client outside of the local network, things break.
> > 
> > for example, if i try from 208.219.36.67, tcpdump shows stuff like:
> > 
> > director:
> > 12:18:36.700977 eth1 B arp who-has 64.208.49.1 tell 192.168.100.99
> > 12:18:36.700985 eth2 B arp who-has 64.208.49.1 tell 192.168.100.99
> > 12:18:37.574251 eth1 < 208.219.36.67.62374 > 64.208.49.132.www: S
> > 98091686:98091686(0) win 32120 <mss 1460,sackOK,timestamp 161199526
> > 0,nop,wscale 0> (DF)
> > 12:18:37.574282 eth2 > 208.219.36.67.62374 > 64.208.49.132.www: S
> > 98091686:98091686(0) win 32120 <mss 1460,sackOK,timestamp 161199526
> > 0,nop,wscale 0> (DF)
> > 12:18:37.649761 eth1 < 192.168.100.98 > 64.208.49.132: icmp: host
> > 208.219.36.67 unreachable [tos 0xc0]
> > 12:18:37.649831 eth2 > 192.168.100.98 > 64.208.49.132: icmp: host
> > 208.219.36.67 unreachable [tos 0xc0]
> > 
> > 
> > real server:
> > 12:17:52.999514 eth1 B arp who-has 64.208.49.1 tell 192.168.100.98
> > 12:17:52.999521 eth2 B arp who-has 64.208.49.1 tell 192.168.100.98
> > 12:17:53.550585 eth1 > 192.168.100.99 > 64.208.49.132: icmp: host
> > 208.219.36.67 unreachable [tos 0xc0]
> > 12:17:53.550598 eth1 > 192.168.100.99 > 64.208.49.132: icmp: host
> > 208.219.36.67 unreachable [tos 0xc0]
> > 
> > 
> > other real server:
> > 12:17:55.251106 eth1 B arp who-has 64.208.49.1 tell 192.168.100.99
> > 12:17:55.251114 eth2 B arp who-has 64.208.49.1 tell 192.168.100.99
> > 12:17:56.124482 eth1 < 208.219.36.67.62374 > 64.208.49.132.www: S
> > 98091686:98091686(0) win 32120 <mss 1460,sackOK,timestamp 161199526
> > 0,nop,wscale 0> (DF)
> > 12:17:56.199872 eth1 > 192.168.100.98 > 64.208.49.132: icmp: host
> > 208.219.36.67 unreachable [tos 0xc0]
> > 
> > 
> > presumably because the real servers can't reach the client: their arp
> > requests for the gateway 64.208.49.1 (sourced from the 192.168.100.x
> > addresses) go unanswered, so the replies die with "host unreachable".
> > 
> > route -n on real server:
> > Kernel IP routing table
> > Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> > 192.168.100.99  0.0.0.0         255.255.255.255 UH    0      0        0 eth1
> > 192.168.200.99  0.0.0.0         255.255.255.255 UH    0      0        0 eth2
> > 192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
> > 64.208.49.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
> > 192.168.200.0   0.0.0.0         255.255.255.0   U     0      0        0 eth2
> > 127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
> > 0.0.0.0         64.208.49.1     0.0.0.0         UG    0      0        0 eth1
> > 
> > 
> > the other real server's is similar.
> > (that 192.168.200.0/24 net is another backend private net for db
> > interaction, pay no attention to it).
> > 
> > i was assuming that this would be ok.  the outgoing packets from the real
> > servers should have a source ip of the VIP, which the router would see and
> > forward to the outside world appropriately, correct?  apparently this
> > isn't the case.  the router is not set up to know about 192.168.100.0/24,
> > but i didn't think it would have to be.  does it?  what am i missing here?
> > 
> > advice appreciated!
> > 
> > -tcl.
> > 
> > 
> > 
> > ---------- Forwarded message ----------
> > Date: Tue, 11 Jul 2000 14:17:18 -0400 (EDT)
> > From: tc lewis <tcl@xxxxxxxxx>
> > To: Ian S. McLeod <ian@xxxxxxxxxxx>
> > Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> > Subject: Re: will this work (direct routing)?
> > 
> > 
> > ok, cool.  gotcha on the other traffic being dropped thing.  i could
> > always just throw a separate box in there entirely to masquerade the real
> > servers.  hmm but then everything would be forwarded through that box,
> > which is a needless extra hop for web traffic, so yeah that leads back to
> > what you were saying about source-based forwarding.  that's no big deal
> > for me at this point.  the real servers shouldn't need to get outside of
> > the internal network except for lvs-forwarded traffic (http requests).
> > nevertheless, thanks for the heads up on that in case i go down that road
> > later.
> > 
> > in rh6.2, "ip" (/sbin/ip) and related tools are in the "iproute" package.
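
(fwiw, assuming the package is installed, a quick confirmation:

rpm -ql iproute | grep /sbin/ip

should list the binary.)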
> > 
> > -tcl.
> > 
> > 
> > On Tue, 11 Jul 2000, Ian S. McLeod wrote:
> > 
> > > This should work.  However, attempts to connect directly to the outside
> > > internet from the Real Servers will most likely fail.  Why?  Because they
> > > will forward packets to the gateway with a source address inside of a
> > > private IP range (192.168) which the router will drop.
> > > 
> > > As best I can tell, the only way to solve this problem is to have the LVS
> > > servers double as masquerading gateways and use source based routing on
> > > the Real Servers such that:
> > > 
> > > Packets with a source address of the VIP go directly to the "real"
> > > gateway, achieving the performance benefits of DR.
> > > 
> > > Packets with a source address inside of 192.168 are routed to the
> > > masquerading gateway on the LVS boxes.
> > > 
> > > 
> > > When I last investigated this the only way to do source based routing on
> > > Linux was with the "ip" command (which I can't find in any recent
> > > distributions).  Anyone know where it went?
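
(a rough sketch of the source-based routing ian describes, assuming iproute2
is installed and that the director's inside address, 192.168.100.130, is the
masquerading gateway -- untested:

/sbin/ip rule add from 64.208.49.132 table 100
/sbin/ip route add default via 64.208.49.1 dev eth1 table 100
/sbin/ip rule add from 192.168.100.0/24 table 200
/sbin/ip route add default via 192.168.100.130 dev eth1 table 200
/sbin/ip route flush cache

vip-sourced replies go straight to the real gateway, and everything else
sourced from the private range goes through the director.)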
> > > 
> > > -Ian
> > > 
> > > On Tue, 11 Jul 2000, tc lewis wrote:
> > > 
> > > > 
> > > > here's what i'm thinking i can do:
> > > > 
> > > > 200.200.200.1 = router
> > > > (whatever, some publicly-accessible ip range...)
> > > > 200.200.200.11 = lvs balancer 1.
> > > > 200.200.200.12 = lvs balancer 2.
> > > > route 192.168.100.0/255.255.255.0 added to both balancers (not sure
> > > > if this is even necessary)
> > > > 192.168.100.101 = real server 1.
> > > > 192.168.100.102 = real server 2.
> > > > route 200.200.200.0/255.255.255.0 added to both real servers.
> > > > gateway on real servers = 200.200.200.1
> > > > 
> > > > 2 balancers that fail over via heartbeat/ultramonkey.
> > > > 
> > > > i'd like to do balancing on port 80 with the direct routing method.
> > > > i'll probably use ipchains on the real servers to solve the arp
> > > > problem, as i'll probably be redirecting port 80 to some
> > > > non-privileged port on the real server anyway (8080, whatever).  the
> > > > machines listed above will not be physically segmented--they'll all
> > > > be on the same vlan of a foundry workgroup network switch.
> > > > 
> > > > will this work?  if they're on the same physical segment like this then
> > > > the balancers should be able to redirect traffic properly via direct
> > > > routing, and the real servers can then send back out to the real world
> > > > with that 200.200.200.0 route through the .1 gateway.
> > > > 
> > > > am i correct or am i missing something here?
> > > > 
> > > > sorry, it's been a while since i've done much with lvs, so i just
> > > > wanted a quick confirmation.  thanks!
> > > > 
> > > > -tcl.
> > > > 


