Re: etherIP and lvs [Solved]

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: etherIP and lvs [Solved]
From: Andy Wettstein <awettstein@xxxxxxxx>
Date: Tue, 29 Jul 2003 13:51:06 -0500
On Thu, Jul 24, 2003 at 05:10:38PM +0200, Roberto Nibali wrote:
> Hi,
> 
> >No, to clarify I had to make the firewall rules to the clustered service
> >stateless.  I log all blocked traffic, so I would have seen it if it
> >was just getting blocked.  But it wasn't getting blocked, though, it 
> >just kind of disappeared after going on the bridge.  After I 
> >allowed traffic without keeping state from my client machine to the 
> >cluster node it started working (except for the mtu).
> 
> So you're saying that pf can't handle fragments with states?

I looked into this more.  The problem was that I didn't keep state on
all of the interfaces that the clustered traffic would pass through on
that router/bridge.  After I put "keep state" on every interface that
would see that traffic, it started working, so it is possible to run a
stateful firewall after all.
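
Roughly what that looks like in pf.conf (a sketch only; the interface
names and cluster address below are placeholders for my setup, with
fxp0 as the outside interface and gif0 as the EtherIP tunnel):

cluster_ip = "192.168.0.10"
# keep state on every interface the clustered traffic crosses,
# not just the one it arrives on
pass in  on fxp0 proto tcp from any to $cluster_ip keep state
pass out on gif0 proto tcp from any to $cluster_ip keep state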

> >I set the mtu on the link level.  How do you change it at the routing
> >level?  That would definitely be desirable.  I'm trying to figure out
> 
> When you set up the route, you specify the mtu, something like this:
> 
> ip route add 192.168.0.0/24 via 10.10.10.1 dev eth1 mtu 1280
> 
> and you check it via its route cache entry:
> 
> ip -o -s -s route show cache
> 
> It's extremely simple and straightforward.

I found an even better way.  I put this in /etc/pf.conf on both OpenBSD
boxes, and all TCP traffic passed over the bridge automatically gets its
MSS clamped to 1240:

scrub on gif0 no-df max-mss 1240
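
(For reference, an MSS of 1240 corresponds to a 1280 byte MTU, the same
figure used in the route example above, minus 20 bytes each for the
IPv4 and TCP headers, so full-sized segments fit through the tunnel
without fragmentation.)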

So after this little bit of magic no changes are needed on the real
servers.  I'm also pretty sure that you can have a failover director in
the other location, which I still need to test sometime, since I have a
director in each location.
