Hi Andy,
Glad you solved it.
> I looked into this a little bit further. The problems I was having were
> mostly due to the OpenBSD firewall not keeping state on those
> connections that needed to be routed by that router/etherIP bridge
Configuration mistake? Did you forget a "keep state" or was it another
semantic issue?
> machine. After I got that fixed traffic would show up on the cluster
> node and the node would try to reply, but I would never see the return
Do you have a scrub rule?
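For the archives, the two things asked about above would look roughly like
this in pf.conf. Interface names and the old-style explicit "keep state"
syntax are my assumptions, not from the original mail:

```shell
# /etc/pf.conf fragment -- hypothetical interfaces.
ext_if = "fxp0"        # outside interface (assumed name)
tun_if = "gif0"        # tunnel interface carrying the EtherIP bridge

# Reassemble fragments before filtering; without a scrub rule,
# fragmented return traffic is easy to lose on the firewall.
scrub in all fragment reassemble

# Pass and keep state on the tunnelled traffic; forgetting
# "keep state" here means reply packets get silently dropped.
pass in  on $tun_if all keep state
pass out on $tun_if all keep state
```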
> traffic. So after doing a little further investigation the tcpdumps
> showed me that the traffic needed to be fragmented because on that
> bridge the mtu is 1280. So I set the mtu to 1280 on the cluster
> node and everything works.
That's why I wanted all the tcpdumps :). BTW, how did you set the mtu? I
hope you set it at the routing level and not at the link level, because
the latter would limit all traffic through the interface to such a low mtu.
Why is your bridge on 1280?
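To illustrate the difference (Linux iproute2 syntax; the interface name,
gateway and remote network are placeholders):

```shell
# Link level: clamps EVERYTHING leaving eth0 to 1280 --
# this is the variant you hopefully did NOT use:
ip link set dev eth0 mtu 1280

# Routing level: only traffic towards the remote bridged network
# (192.0.2.0/24 is a placeholder) gets the lower MTU; all other
# traffic through eth0 keeps the interface's normal 1500:
ip route add 192.0.2.0/24 via 10.0.0.1 mtu lock 1280
```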
> So you can add that as another way to geographically extend the LVS.
:)
> Although it is a little inefficient since all broadcasted lan traffic
> gets transmitted, but that isn't a problem for me.
You can always add a blackhole route for broadcast traffic. Put a VIP
route with high prio into a separate routing table and the broadcast
traffic into another one, where you add a blackhole route. The default
gateway should then be in the VIP route table.
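A sketch of that setup with iproute2; all addresses, the device name and
the table numbers are placeholders of mine, not from the original mail:

```shell
# VIP table (100): holds the default gateway, as suggested above.
ip route add default via 10.0.0.1 dev eth0 table 100

# Broadcast table (200): a blackhole route swallows the traffic.
ip route add blackhole 10.0.0.255/32 table 200

# Broadcast destinations are matched first (lower pref number means
# higher priority on Linux); traffic sourced from the VIP falls
# through to the VIP table and its default gateway.
ip rule add to   10.0.0.255/32 lookup 200 pref 100
ip rule add from 10.0.0.80/32  lookup 100 pref 200
```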
Cheers,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc