Hello Matthew,
Well, well, well ... are you trying to get the IP packets confused with 
your setup? :) 
 
---- Director #1:
[root@lb1 linux]# ip addr show
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:13:72:f8:7e:1c brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:13:72:f8:7e:1a brd ff:ff:ff:ff:ff:ff
    inet 74.52.166.34/28 brd 74.52.166.47 scope global eth1
 
So the /28 means we have scope global (the directly connected range) for 
~.32 up to ~.47. Everything else will be routed to the default gateway 
(DGW). 
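(If you ever want to double-check such boundaries, RHEL ships an ipcalc 
utility; something like 

  ipcalc -bn 74.52.166.34/28 

should report NETWORK=74.52.166.32 and BROADCAST=74.52.166.47. That's a 
hint from me, not output from your listings.) 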
 
    inet 74.52.166.35/32 brd 74.52.166.35 scope global eth1:35
    inet6 fe80::213:72ff:fef8:7e1a/64 scope link
       valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
5: tunl0: <NOARP> mtu 1480 qdisc noop
    link/ipip 0.0.0.0 brd 0.0.0.0
[root@lb1 linux]# ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
 
Ok, standard setup.
 
[root@lb1 linux]# ip route show
74.52.166.35 dev eth1  scope link  src 74.52.166.35
74.52.166.32/28 dev eth1  proto kernel  scope link  src 74.52.166.34
169.254.0.0/16 dev eth1  scope link
default via 74.52.166.33 dev eth1
 
Ok, so packets for your RS, which sit outside this /28, will be sent to 
your DGW ~.33, which I'll call DGW-1. 
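(A quick way to verify which next hop the director actually picks for an 
RS-bound packet, e.g. towards slave #1, is a route lookup; again a 
suggestion of mine, not output from your box: 

  ip route get 74.52.166.50 

Given the table above this should report "via 74.52.166.33 dev eth1".) 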
 
---- Slave #1:
[root@wwwdb1 ~]# ip addr show
1: lo: <LOOPBACK,NOARP,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet 74.52.166.35/32 brd 74.52.166.35 scope global lo:35
 
Ok, so if ip_forward is disabled on the slaves, you only need to set the 
arp_* flags for lo and all in proc-fs. 
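In case it helps, the usual incantation on a 2.6 kernel like your RHEL4 
one goes roughly like this; just a sketch, adjust it to your init 
scripts: 

  echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
  echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
  echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

With arp_ignore=1 the RS only answers ARP requests for addresses 
configured on the interface the request arrives on, and arp_announce=2 
keeps it from advertising the VIP in its own ARP traffic, so the 
director stays the only ARP-visible owner of ~.35. 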
 
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:13:72:f8:7e:09 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:13:72:f8:7e:07 brd ff:ff:ff:ff:ff:ff
    inet 74.52.166.50/28 brd 74.52.166.63 scope global eth1
 
Ok, this means we have scope global for ~.48 up to ~.63. Everything else 
will be routed to the DGW. 
 
[root@wwwdb1 ~]# ip route show
74.52.166.48/28 dev eth1  proto kernel  scope link  src 74.52.166.50
169.254.0.0/16 dev eth1  scope link
default via 74.52.166.49 dev eth1
 
Oops, here we have a second DGW, DGW-2, which is ~.49. I wonder if you 
really have that many routers accepting those packets. 
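(If you want to make sure something actually answers as a router on 
~.49 before relying on it, iputils' arping gives a quick probe; again 
just a suggestion: 

  arping -I eth1 -c 3 74.52.166.49 

A look at "ip neigh show" after some traffic also tells you whether the 
kernel ever resolved ~.49 at all.) 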
 
---- Slave #2:
3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:13:72:f8:80:61 brd ff:ff:ff:ff:ff:ff
    inet 74.52.166.130/28 brd 74.52.166.143 scope global eth1
 
Now this means we have scope global for ~.128 up to ~.143. Everything 
else will be routed to the DGW. 
 
[root@wwwdb2 ~]# ip route show
74.52.166.128/28 dev eth1  proto kernel  scope link  src 74.52.166.130
169.254.0.0/16 dev lo  scope link
default via 74.52.166.129 dev eth1
 
And to make the whole forwarding more interesting for the stack, we have 
yet another DGW, DGW-3: packets destined outside the above scope will be 
sent to ~.129. 
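(You can see the resulting return path for VIP-sourced replies on this 
host with a source-bound route lookup, assuming ~.35 sits on lo here 
just as on slave #1; suggested check, not from your listings: 

  ip route get 70.241.143.240 from 74.52.166.35 

This should come back with "via 74.52.166.129 dev eth1", i.e. replies to 
your test client leave through DGW-3 while the requests entered through 
DGW-1's realm.) 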
 
Only on the director:
for i in filter nat mangle; do
  iptables -t $i -L -n;
done
 
  All chains on all three tables are completely empty.
 
Very well.
 
Is 70.241.143.240 a machine outside or inside this cluster setup?
 
    Outside machine. My office/home comp to be exact.
 
Perfect.
 
Anything else I can try?
 
echo 42 > /proc/sys/net/ipv4/vs/debug_level
 
I don't have /vs/debug_level. I'm guessing I need to recompile 
something? I'm running RHEL4 and the IPVS modules were already compiled 
in /lib/modules. 
 
Let's not go there yet. To me your setup looks a bit broken with regard 
to packet forwarding. It might work thanks to some quirks, but it's bound 
to be fragile towards future engineering changes. You have 3 network 
realms on 3 servers and 3 different DGWs. From what I've seen, you seem 
to "own" a /24 --> 74.52.166.0/24. You might want to either: 
a) set your netmasks for the RIPs to /24, or
b) put your RIPs inside the same scope on all servers.
Now, there's normally only one DGW, which in your case should be ~.33. I 
hope this is the DGW advertised by your hosting partner. Unless you need 
direct remote access to your load balancer, there is probably no need to 
give it a DGW at all, but let's leave that aside. I reckon you should 
then set up your servers as follows: 
Director:
---------
RIP = eth1    74.52.166.34/27
VIP = eth1:35 74.52.166.35/32
DGW = eth1    74.52.166.33
Slave 1:
--------
RIP = eth1  74.52.166.41/27
VIP = lo:35 74.52.166.35/32
DGW = eth1  74.52.166.33
Slave 2:
--------
RIP = eth1  74.52.166.42/27
VIP = lo:35 74.52.166.35/32
DGW = eth1  74.52.166.33
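In ip(8) terms, and purely as a sketch of the layout above (device names 
taken from your listings, addresses as proposed): 

Director:
  ip addr add 74.52.166.34/27 dev eth1
  ip addr add 74.52.166.35/32 dev eth1 label eth1:35
  ip route add default via 74.52.166.33 dev eth1

Slave 1 (slave 2 analogous with ~.42):
  ip addr add 74.52.166.41/27 dev eth1
  ip addr add 74.52.166.35/32 dev lo label lo:35
  ip route add default via 74.52.166.33 dev eth1

Remember that the arp_* flags from above must be in place on the slaves 
before the lo:35 address comes up, or the VIP will be announced on the 
wire. 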
I hope this will work for you.
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc