Roberto Nibali wrote:
Yes, it was actually someone else who got it working before, and he
is far too busy to assist me with the new one :)
This is the part where your manager should probably call him back :).
It was actually the manager himself who set up the first one :)
Very well, so have you searched the LVS mailing list archive for his
name? :)
No, but he did tell me earlier he hasn't really posted on here much.
He's some kind of alien that can just make things work with no effort.
Sure, but there was no indication of which state of your test setup your
quoted output pertained to. When you say "the new load
balancer" above, you do not mean a physically different machine from
the "old load balancer", do you?
There are two load balancers, the 'old' one which works and the 'new'
one which doesn't. Here is the ipvsadm output for the new, broken
load balancer:
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 100.1.1.2:25 wlc
-> 120.1.1.1:25 Tunnel 1 0 0
-> 120.1.1.2:25 Tunnel 1 0 0
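(For context, if I've read the table back correctly, it corresponds roughly to
the following ipvsadm calls, with the same addresses and weights as above:)
# ipvsadm -A -t 100.1.1.2:25 -s wlc
# ipvsadm -a -t 100.1.1.2:25 -r 120.1.1.1:25 -i -w 1
# ipvsadm -a -t 100.1.1.2:25 -r 120.1.1.2:25 -i -w 1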
Ok.
That's not all :). You've only shown the filter table, but I'm also
interested in the mangle table.
# iptables -t mangle --list
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Thanks.
# iproute2
bash: iproute2: command not found
It's the ip command output from the iproute2 framework I was looking
for.
This is the successor to ifconfig and route and netstat and whatnot.
The Linux world decided at one point in its history (around 1999)
that ifconfig, route and the other classic networking setup tools were
no longer appropriate and replaced them with the iproute2 framework.
Unfortunately the guy who started all this is a bloody genius and as
such did two things: a) completely forgot to document it, b) never told
anyone outside the kernel community about it for years. So, if you find
time, invoke "man ip" on a recent enough Linux distribution of your
choice.
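To give you a rough idea of the mapping (the addresses below are purely
made-up examples, not your setup):
ifconfig eth0 192.0.2.1 netmask 255.255.255.0 up  ->  ip addr add 192.0.2.1/24 dev eth0
                                                      ip link set dev eth0 up
route add default gw 192.0.2.254                  ->  ip route add default via 192.0.2.254
netstat -rn / route -n                            ->  ip route show
ifconfig -a                                       ->  ip addr show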
LOL
It's actually seriously tragic :).
I built this server myself and never did anything with iproute2,
so I'm guessing the answer is no. Although I do believe Debian is
evil, so I suppose it could have done this itself behind
my back.
The Debian people hopefully do not have evil intentions; however, could
you pass along the output of:
ip rule show
ip route show
ip link show
ip addr show
grep -r . /proc/sys/net/ipv4/conf/*
# ip rule show
0: from all lookup 255
32766: from all lookup main
32767: from all lookup default
# ip route show
100.1.1.0/24 dev eth0 proto kernel scope link src 100.1.1.1
default via 85.158.56.1 dev eth0
Gotcha. Fortunately your manager is too busy to find this. How does it
look on the working load balancer?
# ip rule show
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
# ip route show
130.1.1.0/24 dev eth0 proto kernel scope link src 130.1.1.1
10.10.10.0/24 dev eth1 proto kernel scope link src 10.10.10.10
default via 130.1.1.254 dev eth0
# ip link show
1: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
2: plip0: <POINTOPOINT,NOARP> mtu 1500 qdisc noop qlen 10
link/ether fc:fc:fc:fc:fc:fc peer ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:04:76:16:12:a5 brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast
qlen 1000
link/ether 00:b0:d0:68:7f:2b brd ff:ff:ff:ff:ff:ff
5: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: shaper0: <> mtu 1500 qdisc noop qlen 10
link/ether
7: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
link/ether b6:e6:25:ed:c6:2d brd ff:ff:ff:ff:ff:ff
8: eql: <MASTER> mtu 576 qdisc noop qlen 5
link/slip
9: teql0: <NOARP> mtu 1500 qdisc noop qlen 100
link/void
10: tunl0: <NOARP> mtu 1480 qdisc noop
link/ipip 0.0.0.0 brd 0.0.0.0
It might be hard for the LB to send packets along this device when
it's not up.
Okay, well it's up now on my new load balancer, but definitely _not_ up
on the old load balancer, which is working.
11: gre0: <NOARP> mtu 1476 qdisc noop
link/gre 0.0.0.0 brd 0.0.0.0
# ip addr show
1: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
2: plip0: <POINTOPOINT,NOARP> mtu 1500 qdisc noop qlen 10
link/ether fc:fc:fc:fc:fc:fc peer ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:04:76:16:12:a5 brd ff:ff:ff:ff:ff:ff
4: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast
qlen 1000
link/ether 00:b0:d0:68:7f:2b brd ff:ff:ff:ff:ff:ff
inet 100.1.1.1/24 brd 100.1.1.255 scope global eth0
inet 100.1.1.2/24 brd 100.1.1.255 scope global secondary eth0:0
Should be /32.
Corrected, well spotted :)
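(Something along these lines does the trick, assuming the VIP is meant to
keep its eth0:0 label:)
# ip addr del 100.1.1.2/24 dev eth0
# ip addr add 100.1.1.2/32 dev eth0 label eth0:0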
/proc/sys/net/ipv4/conf/eth0/promote_secondaries:0
/proc/sys/net/ipv4/conf/eth0/force_igmp_version:0
/proc/sys/net/ipv4/conf/eth0/disable_policy:0
/proc/sys/net/ipv4/conf/eth0/disable_xfrm:0
/proc/sys/net/ipv4/conf/eth0/arp_accept:0
/proc/sys/net/ipv4/conf/eth0/arp_ignore:0
/proc/sys/net/ipv4/conf/eth0/arp_announce:0
/proc/sys/net/ipv4/conf/eth0/arp_filter:0
/proc/sys/net/ipv4/conf/eth0/tag:0
/proc/sys/net/ipv4/conf/eth0/log_martians:0
/proc/sys/net/ipv4/conf/eth0/bootp_relay:0
/proc/sys/net/ipv4/conf/eth0/medium_id:0
/proc/sys/net/ipv4/conf/eth0/proxy_arp:0
/proc/sys/net/ipv4/conf/eth0/accept_source_route:1
/proc/sys/net/ipv4/conf/eth0/send_redirects:1
/proc/sys/net/ipv4/conf/eth0/rp_filter:0
/proc/sys/net/ipv4/conf/eth0/shared_media:1
/proc/sys/net/ipv4/conf/eth0/secure_redirects:1
/proc/sys/net/ipv4/conf/eth0/accept_redirects:1
/proc/sys/net/ipv4/conf/eth0/mc_forwarding:0
/proc/sys/net/ipv4/conf/eth0/forwarding:1
This looks sane.
/proc/sys/net/ipv4/conf/lo/promote_secondaries:0
/proc/sys/net/ipv4/conf/lo/force_igmp_version:0
/proc/sys/net/ipv4/conf/lo/disable_policy:1
/proc/sys/net/ipv4/conf/lo/disable_xfrm:1
/proc/sys/net/ipv4/conf/lo/arp_accept:0
/proc/sys/net/ipv4/conf/lo/arp_ignore:0
/proc/sys/net/ipv4/conf/lo/arp_announce:0
/proc/sys/net/ipv4/conf/lo/arp_filter:0
/proc/sys/net/ipv4/conf/lo/tag:0
/proc/sys/net/ipv4/conf/lo/log_martians:0
/proc/sys/net/ipv4/conf/lo/bootp_relay:0
/proc/sys/net/ipv4/conf/lo/medium_id:0
/proc/sys/net/ipv4/conf/lo/proxy_arp:0
/proc/sys/net/ipv4/conf/lo/accept_source_route:1
/proc/sys/net/ipv4/conf/lo/send_redirects:1
/proc/sys/net/ipv4/conf/lo/rp_filter:0
/proc/sys/net/ipv4/conf/lo/shared_media:1
/proc/sys/net/ipv4/conf/lo/secure_redirects:1
/proc/sys/net/ipv4/conf/lo/accept_redirects:1
/proc/sys/net/ipv4/conf/lo/mc_forwarding:0
/proc/sys/net/ipv4/conf/lo/forwarding:1
This as well.
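For what it's worth, sysctl reads the same tree as that grep, in case you
ever want to spot-check a single value; a quick example (assuming the
sysctl tool is installed, which it normally is on Debian):
# sysctl net.ipv4.conf.eth0.rp_filter
# sysctl net.ipv4.conf.all.forwarding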
This is odd, since tunl0 does exist:
# ifconfig tunl0
tunl0 Link encap:IPIP Tunnel HWaddr
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Sure, but it's not activated. Could you by any chance run the following
command on your box?
ip link set dev tunl0 up
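Afterwards, something like this should confirm whether the device is up
and whether anything actually leaves through it (-s just adds counters):
# ip link show dev tunl0
# ip -s link show dev tunl0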
Mhmm, this has been done; however, I notice that on the working load
balancer the tunl0 device is not visible in the ifconfig output (i.e. it
is not activated). Excuse me while I stay with my vintage ifconfig
friends for a little while longer :)
Your funeral :). Seriously though, this is puzzling. Unless I'm really
badly mistaken, tunl0 should be activated in order to have traffic go
through it, no? Unfortunately, I've not set up an LVS-TUN setup in 8 years :).
Could you send the ip link show output from the working LB?
# ip link show
1: eth0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 100
link/ether 00:14:22:09:85:39 brd ff:ff:ff:ff:ff:ff
2: eth1: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:14:22:09:85:3a brd ff:ff:ff:ff:ff:ff
3: lo: <LOOPBACK,UP,10000> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: tunl0: <NOARP> mtu 1480 qdisc noop
link/ipip 0.0.0.0 brd 0.0.0.0
Oh, one thing I forgot to mention: the working load balancer is doing
some funky stuff with keepalived, which I believe is injecting IPs into
the interfaces. The config for that is:
global_defs {
    #notification_email {
    #    someone@xxxxxxxxxxx
    #}
    #notification_email_from devnull@xxxxxxxxx
    #smtp_server 127.0.0.1
    #smtp_connect_timeout 30
    lvs_id LOAD1
}

vrrp_sync_group VG1 {
    group {
        VI_1
    }
    #smtp_alert
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth1
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass blah
    }
    virtual_ipaddress {
        130.1.1.2
    }
    preempt_delay 300
}

virtual_server_group VSG_1 {
    130.1.1.2 25
}

virtual_server group VSG_1 {
    delay_loop 6
    lb_algo wlc
    lb_kind TUN
    # persistence_timeout 600
    # persistence_granularity 255.255.255.0
    protocol TCP

    real_server 120.1.1.1 25 {
        weight 100
        SMTP_CHECK {
            connect_timeout 6
            retry 3
            delay_before_retry 1
            helo_name load1.areti.net
        }
    }

    real_server 120.1.1.2 25 {
        weight 100
        SMTP_CHECK {
            connect_timeout 6
            retry 3
            delay_before_retry 1
            helo_name load1.areti.net
        }
    }
}
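As far as I understand it, keepalived adds the virtual_ipaddress
(130.1.1.2) to eth0 while this node is VRRP MASTER and builds the IPVS
table from the virtual_server section, so on the active node both should
show up with the usual tools:
# ip addr show dev eth0
# ipvsadm -L -n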
Thanks,
--
Mark Wadham
e: mark.wadham@xxxxxxxxx t: +44 (0)20 8315 5800 f: +44 (0)20 8315 5801
Areti Internet Ltd., http://www.areti.net/