Re: localnodes with heartbeat

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: localnodes with heartbeat
From: Jon Molin <Jon.Molin@xxxxxxxxxxx>
Date: Tue, 05 Feb 2002 19:27:42 +0100
Joseph Mack wrote:
> 
> Jon Molin wrote:
> 
> >
> > Now i want to combine this with linux-ha. So I downloaded the config
> > script and made this config file:
> 
> I've written the configure script with HA in mind, but haven't actually
> used it that way myself. Other people have set up HA with it, but I couldn't
> get them to tell me what they'd done to set it up. The configure
> script is run on the director and the realservers and does the "right thing"
> wherever it finds itself.
> 
> The best thing (probably) is to set up an LVS with the first director
> while the other director is off-line, then switch the IPs to the backup
> director and run the rc.lvs script just on the backup director.

Yeah, but that requires two rc.lvs scripts, right? One for when the
director/server is acting as the director and one for when it's only a
server.

I'm very new to this, but it feels dangerous if heartbeat does something
wrong and sets both nodes up as directors. Or is this easily avoided with the
right(tm) configuration?
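
For what it's worth, what I was planning to try is roughly the classic
two-link heartbeat setup sketched below. This is untested, the second node
name www2.resfeber.se and the resource script name rc.lvs_takeover are just
placeholders I've made up, and the exact ha.cf directive names vary a bit
between heartbeat releases:

  # /etc/ha.d/ha.cf (identical on both nodes)
  keepalive 2                 # heartbeat interval in seconds
  deadtime  10                # declare the peer dead after 10s of silence
  serial    /dev/ttyS0        # null-modem link, so a LAN problem alone can't split-brain us
  udp       eth0              # second heartbeat path over the internal NIC
  node      www1.resfeber.se  # must match `uname -n` on each box
  node      www2.resfeber.se

  # /etc/ha.d/haresources (identical on both nodes)
  # www1 normally owns the VIP and the takeover script
  www1.resfeber.se 212.75.72.15 rc.lvs_takeover

The idea is that rc.lvs_takeover sits in /etc/ha.d/resource.d/, runs the
director flavour of rc.lvs on "start" and puts the box back into
realserver-only mode on "stop", so heartbeat decides which of the two scripts
is in effect. And since heartbeat only hands the resource group to one node
at a time, both boxes should only be able to think they're the director if
every heartbeat link between them dies at once, which is why people seem to
recommend at least two links (serial plus ethernet).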

> 
> > And when I run this on the first localnode, all seems fine. ipvsadm -L
> > shows nice values. Then I scp'ed the rc.lvs_dr over to the other node and
> > it started screaming.
> 
> what did it do?
Well, it didn't literally scream :) but I lost contact with it (I ssh
to it).


Here's where things started to go wrong:
LVS realserver type vs-dr 
adding route to real-server network 255.255.255.224 
route: bogus netmask 212.75.72.63

I've no clue where it gets that netmask... it's not in the conf file...
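
My only guess is that a network address and a netmask are getting swapped
somewhere: the script prints the netmask (255.255.255.224) where I'd expect
the real-server network address, and then hands route 212.75.72.63 (an
address, not a mask) as the netmask. Taking the conf values at face value,
I'd have expected it to end up running something like this (just my guess at
the intended command, not necessarily what the script actually builds):

  route add -net 212.75.72.0 netmask 255.255.255.224 dev eth1

Though even that looks suspect: with a /27 mask, my RIP 212.75.72.31 would be
the broadcast address of 212.75.72.0, and the routing table dump further down
in the output shows eth1 is really on a /26 (255.255.255.192). So maybe the
255.255.255.224 in my conf file is simply the wrong netmask for this network
to begin with? I don't know yet whether the mixup is in my conf or in the
script.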

Hmm, I'm adding the output as an attachment.

/Jon

> 
> Joe
> 
> --
> Joseph Mack PhD, Senior Systems Engineer, Lockheed Martin
> contractor to the National Environmental Supercomputer Center,
> mailto:mack.joseph@xxxxxxx ph# 919-541-0007, RTP, NC, USA
> 
> _______________________________________________
> LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
> or go to http://www.in-addr.de/mailman/listinfo/lvs-users
looking for standard utilities
$ECHO=/bin/echo
$PING=/bin/ping -U -c 1
testing ping
ping can send one packet. is OK.
$FPING=/bin/ping -U -c 1
$IFCONFIG=/sbin/ifconfig
$NETSTAT=/bin/netstat
$ROUTE=/sbin/route
$AWK=/bin/awk
$AWK=/usr/bin/awk
$GREP=/bin/grep
$HOSTNAME_CMD=/bin/hostname
$UNAME_CMD=/bin/uname
$CAT=/bin/cat
$CUT=/bin/cut
$CUT=/usr/bin/cut
$TAIL=/usr/bin/tail
$XARGS=/usr/bin/xargs
$PS=/bin/ps
$KILL=/bin/kill
$WC=/usr/bin/wc
$TRACEROUTE=/usr/sbin/traceroute
$ARP=/sbin/arp
$ROUTE=/sbin/route
$TR=/usr/bin/tr
$EXPR=/usr/bin/expr
$CHMOD=/bin/chmod
$MV=/bin/mv
$RM=/bin/rm
$MKDIR=/bin/mkdir
$SSH=/usr/bin/ssh
$NTPD=/usr/sbin/ntpd
$IP=/sbin/ip
rc.lvs version 0.9.2 Aug 2001
(C) 2000-2001 Joseph Mack jmack@xxxxxxxx, distributed under GPL license
This file is part of the LVS project http://www.linuxvirtualserver.org
setting up www1.resfeber.se

find_System_map
System.map
$SYSTEM_MAP=/boot/System.map

$IPTABLES=/sbin/iptables
$LSMOD=/sbin/lsmod
$RMMOD=/sbin/rmmod
$INSMOD=/sbin/insmod
number nics on director 1
LVS realserver type vs-dr 
adding route to real-server network 255.255.255.224 
route: bogus netmask 212.75.72.63
Usage: route [-nNvee] [-FC] [<AF>]           List kernel routing tables
       route [-v] [-FC] {add|del|flush} ...  Modify routing table for AF.

       route {-h|--help} [<AF>]              Detailed usage syntax for 
specified AF.
       route {-V|--version}                  Display version/author and exit.

        -v, --verbose            be verbose
        -n, --numeric            don't resolve names
        -e, --extend             display other/more information
        -F, --fib                display Forwarding Information Base (default)
        -C, --cache              display routing cache instead of FIB

  <AF>=Use '-A <af>' or '--<af>'; default: inet
  List of possible address families (which support routing):
    inet (DARPA Internet) inet6 (IPv6) ax25 (AMPR AX.25) 
    netrom (AMPR NET/ROM) ipx (Novell IPX) ddp (Appletalk DDP) 
    x25 (CCITT X.25) 
installing default gw 212.75.72.1 for vs-dr
deleting current default gw 212.75.72.1
setting default gw to 212.75.72.1
showing routing table

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
212.75.72.0     0.0.0.0         255.255.255.192 U        40 0          0 eth1
192.168.0.0     0.0.0.0         255.255.255.0   U        40 0          0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U        40 0          0 lo
0.0.0.0         212.75.72.1     0.0.0.0         UG       40 0          0 eth1

checking if DEFAULT_GW 212.75.72.1 is reachable - PING 212.75.72.1 
(212.75.72.1) from 212.75.72.31 : 56(84) bytes of data.
64 bytes from 212.75.72.1: icmp_seq=0 ttl=255 time=430 usec

--- 212.75.72.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms
good
set_realserver_ip_forwarding to OFF (1 on, 0 off).
proc/sys/net/ipv4/ip_forward 0

searching for ipchains
ipchains not loaded, good
loading ip_tables module 
find_kernel_function_name_2_4
find_kernel_name_2_4: parameter ip_tables
check_function_in_kernel
function ipt_tables not in kernel
find_module_name_2_4
find_module_name_2_4: parameter ip_tables
module name ip_tables is ip_tables
attempting to load module: ip_tables
module ip_tables already loaded 
setting default policy to ACCEPT for LVS devices
clearing iptables/ipchain rules 
showing iptables nat rules 
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
showing iptables rules 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
correct VIP entry in /etc/iproute2/rt_tables: 200 VIP
deleting ip rule from 212.75.72.15
correct RIP entry in /etc/iproute2/rt_tables: 201 RIP
deleting ip rule from 212.75.72.31 to 255.255.255.224/25
deleting ip rule from 212.75.72.31
ip rules (RIP and VIP tables should be empty) 
0:      from all lookup local 
32766:  from all lookup main 
32767:  from all lookup 253 
route table name<->id translation table:
#
# reserved values
#
#255    local
#254    main
#253    default
#0      unspec
#
# local
#
#1      inr.ruhep
200 VIP
201 RIP

clearing priority routing table
deleting table VIP
ip route table VIP already empty
routing table VIP should be empty: 
deleting table RIP
ip route table RIP already empty
routing table RIP should be empty: 
device lo:110 has VIP 212.75.72.15
new VIP device == old VIP device, don't reinstall
device lo:110 has VIP 212.75.72.15 and is UP
removing 212.75.72.15 from lo:110

 
looking for DIP 212.75.72.24 
PING 212.75.72.24 (212.75.72.24) from 212.75.72.31 : 56(84) bytes of data.
64 bytes from 212.75.72.24: icmp_seq=0 ttl=255 time=291 usec

--- 212.75.72.24 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
found, good
not local, good 

looking for VIP on director from realserver

looking for VIP on director from realserver
director is accepting packets on VIP 212.75.72.15, device eth1:1
No VIP on real-server, VIP will be on director.

pinging VIP 212.75.72.15 from RIP
server gw not on director
OK is vs-dr
LVS_TYPE is vs-dr and VIP 212.75.72.15 not in same network as RIP 
Warning: You haven't set "$ROUTER_FORWARDS"  
in the User configurable section of configure.pl, 
so we don't know if the VIP should be pingable or not.
This ping test is just extra checking and the LVS 
will work whether or not this ping test is properly setup.
If this test is properly setup and fails, 
it will help diagnose problems with your LVS.

VIP should not be pingable if forwarding is off in the box router/test_client
VIP should     be pingable if forwarding is on  in the box router/test_client

PING 212.75.72.15 (212.75.72.15) from 212.75.72.31 : 56(84) bytes of data.
From 212.75.72.31: Destination Host Unreachable

--- 212.75.72.15 ping statistics ---
1 packets transmitted, 0 packets received, +1 errors, 100% packet loss
212.75.72.15 not pingable. 
If you find that your LVS works 
AND you aren't going to be changing the routing 
on the real-server's default gw 212.75.72.1, 
then set "$ROUTER_FORWARDS"="N" 
LVS_TYPE = vs-dr, VIP device installed 
OS verion Linux-2.4.7-10custom being treated as minor version 6
lo is not a tunl device, OK
lo:110 is local, up'ing lo.
install_realserver_vip: configuring Linux 2.4.6 
ifconfig output 
lo:110    Link encap:Local Loopback  
          inet addr:212.75.72.15  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

installing route for VIP 212.75.72.15 on device lo:110
listing routing info for VIP 212.75.72.15 
212.75.72.15    0.0.0.0         255.255.255.255 UH       40 0          0 lo
hiding interface lo:110, will not arp

Installing priority routing
lvs type vs-dr, SERVER_GW 212.75.72.1
priority rules
0:      from all lookup local 
32766:  from all lookup main 
32767:  from all lookup 253 
routes VIP
routes RIP
adding rules to RIP with priority 100 from 212.75.72.31 to 255.255.255.224/25
adding routing rules to RIP with priority 100 from 212.75.72.31 to 0/0
showing ip rules for priority 100
100:    from 212.75.72.31 to 255.255.255.224/25 lookup RIP 
100:    from 212.75.72.31 lookup RIP 
set src device for packets from VIP
packets with src=VIP are sent out eth1
ip route show table RIP (should be empty): 
no gateway - is link route: ip route add table RIP to 255.255.255.224/25 dev 
eth1 from 212.75.72.31
RTNETLINK answers: Invalid argument
route add: return_code 2
Error: table RIP already has entry: via  dev eth1
cannot overwrite entry, you'll have to delete the old one first.
current entry: set src device for packets from VIP
packets with src=VIP are sent out eth1
ip route show table RIP (should be empty): 
adding route to gateway 212.75.72.24 in table RIP
ip route add table RIP to 0/0 via 212.75.72.24 dev eth1 from 212.75.72.31
route add: return_code 0
ip route for table RIP
default via 212.75.72.24 dev eth1 
adding VIP routing rules with priority 99 to realserver.
showing ip rules for prio 99
99:     from 212.75.72.15 lookup VIP 
set src device for packets from VIP
packets with src=VIP are sent out eth1
ip route show table VIP (should be empty): 
adding route to gateway 212.75.72.1 in table VIP
ip route add table VIP to 0/0 via 212.75.72.1 dev eth1 from 212.75.72.15
route add: return_code 0
ip route for table VIP
default via 212.75.72.1 dev eth1 
removing default gw
showing priority routing
0:      from all lookup local 
99:     from 212.75.72.15 lookup VIP 
100:    from 212.75.72.31 to 255.255.255.224/25 lookup RIP 
100:    from 212.75.72.31 lookup RIP 
32766:  from all lookup main 
32767:  from all lookup 253 
212.75.72.15 dev lo  scope link  src 212.75.72.15 
212.75.72.0/26 dev eth1  proto kernel  scope link  src 212.75.72.31 
192.168.0.0/24 dev eth0  scope link 
127.0.0.0/8 dev lo  scope link 
routing for table VIP
default via 212.75.72.1 dev eth1 
routing for table RIP
default via 212.75.72.24 dev eth1 
ntpd not running, won't be restarted
not adding filter rules.
The location of the output files rc.lvs, mon.cf and ntp.conf is the default = ./
You can change this by editing the variables $rc_lvs_home, $rc_mon_home, 
$rc_ntp_home 
 

Errors: 1
Some of these errors are from tests that failed.
If you're experimenting, it's possible that the LVS will work.
If you're doing production, you can't assume that the LVS will work.

Your LVS may not be setup properly.

Warnings: 1
The configure script has encountered an unexpected situation.
Your LVS will probably run (it may not do what you want).
It would be reasonable to try your LVS first, 
since it will likely work at least partially. 

It is possible that the conf file has invalid information
(or the configure script has a bug).
If you suspect a bug in the script, please contact jmack@xxxxxxxx
or post to the LVS mailing list lvs-users@xxxxxxxxxxxxxxxxxxxxxx
(You can always run rc.lvs manually on director/real-servers.)