To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx, piranha-list@xxxxxxxxxx
Subject: problem with lvs-NAT setup: accesses hang
From: Alois Treindl <alois@xxxxxxxx>
Date: Sun, 29 Apr 2001 11:34:01 +0200 (METDST)

I have set up LVS-NAT, using piranha-0.5.3-9.
However, all accesses hang.

I describe my configuration in full detail:

This is the lvs.cf file:
------------------------
primary = 192.53.104.2
service = lvs
rsh_command = rsh
backup = 0.0.0.0
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 10.0.0.254 eth0
nat_nmask = 255.255.255.0
debug_level = NONE
virtual http_nat {
     active = 1
     address = 192.53.104.3 eth1:0
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     server w1 {
         address = 10.0.0.1
         active = 1
         weight = 1000
     }
     server w2 {
         address = 10.0.0.2
         active = 1
         weight = 1000
     }
     server w3 {
         address = 10.0.0.3
         active = 1
         weight = 1000
     }
}
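
For reference, my understanding is that this lvs.cf should make lvs build
roughly the following ipvsadm table (a sketch of what I expect, not copied
from the running machine):

  /sbin/ipvsadm -A -t 192.53.104.3:80 -s wlc
  /sbin/ipvsadm -a -t 192.53.104.3:80 -r 10.0.0.1:80 -m -w 1000
  /sbin/ipvsadm -a -t 192.53.104.3:80 -r 10.0.0.2:80 -m -w 1000
  /sbin/ipvsadm -a -t 192.53.104.3:80 -r 10.0.0.3:80 -m -w 1000

(-m selects masquerading, i.e. NAT forwarding to the real servers)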

Linux environment:
------------------
I use kernel 2.2.19 with ipvs-1.0.7
and ipvsadm-1.15
(on a Red Hat 7.0 base).

The Piranha web GUI shows:
--------------------------
CURRENT LVS ROUTING TABLE
         IP Virtual Server version 1.0.7 (size=65536)                   
 Prot LocalAddress:Port Scheduler Flags                         
   -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
 TCP  w1.astro.ch:www wlc
   -> w2:www                      Masq    1000   0          1         
   -> w1:www                      Masq    1000   0          0    

ipchains -L says:
-----------------
 /sbin/ipchains -L
Chain input (policy ACCEPT):
Chain forward (policy DENY):
target     prot opt     source                destination           ports
MASQ       all  ------  10.0.0.0/24          0.0.0.0               n/a
Chain output (policy ACCEPT):   
(this ipchains entry was created by manually running
 /sbin/ipchains -A forward -j MASQ -s 10.0.0.0/24 -d 0.0.0.0 
before lvs was started)
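
(I assume LVS-NAT also needs packet forwarding enabled in the kernel;
on the director I check it, and enable it if necessary, with

  cat /proc/sys/net/ipv4/ip_forward        # should print 1
  echo 1 > /proc/sys/net/ipv4/ip_forward   # enable forwarding

before starting lvs.)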


ifconfig says:
---------------
eth0      Link encap:Ethernet  HWaddr 00:B0:D0:B0:AA:85  
          inet addr:10.0.0.254  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10853 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13721 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100 
          Interrupt:11 Base address:0x1000 

eth1      Link encap:Ethernet  HWaddr 00:B0:D0:B0:AA:86  
          inet addr:192.53.104.2  Bcast:192.53.104.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:141387 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17852 errors:0 dropped:0 overruns:0 carrier:0
          collisions:74 txqueuelen:100 
          Interrupt:10 Base address:0x3000 

eth1:0    Link encap:Ethernet  HWaddr 00:B0:D0:B0:AA:86  
          inet addr:192.53.104.3  Bcast:192.53.104.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:10 Base address:0x3000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:29 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 


However, I cannot connect to http on the VIP address 192.53.104.3
from outside.

I can connect to each real server when I do it from the director.
On the director, I can also connect to http://192.53.104.3, the VIP,
and get load-balanced replies from the real servers (as observed via
the http log files on the real servers).
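
(The test itself is just a plain HTTP request, for example

  telnet 192.53.104.3 80
  GET / HTTP/1.0
  <empty line>

which, from the director, returns a page from one of the real servers.)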

From outside the LVS cluster, however, every access to VIP:80 hangs.

I can see the connections in the piranha monitor and with ipvsadm,
until they time out after a few minutes.
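
If it would help, I can run tcpdump on both interfaces of the director
while an outside client connects, to see how far the packets get,
along the lines of

  /sbin/tcpdump -n -i eth1 port 80    # client <-> VIP side
  /sbin/tcpdump -n -i eth0 port 80    # director <-> real server side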

Any ideas what the problem is?

Should I also post my kernel .config file?

Alois


