Re: Issues with braindead network topology and LVS-NAT

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Issues with braindead network topology and LVS-NAT
From: Christian Bronk <chbr@xxxxxxxxxxx>
Date: Wed, 28 Sep 2005 11:59:57 +0200
Hi,

the "normal" Solution for your setup would be LVS-TUN, but i don´t know if 
HP-UX supports that.
Perhaps you coult try rewrite your source-ip on you lvs-box.

iptables -t nat -A POSTROUTING -p tcp -d 10.10.3.32 --dport 80 \
         -j SNAT --to-source 10.10.2.10

This could work, but in your webserver logfiles you will only see the IP of
your LVS.
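
Roughly, the complete picture on the director would then be (a sketch only,
assuming a stock 2.4/2.6 kernel):

--->8-------------------------------------------------
# make sure the director actually forwards packets (required for LVS-NAT)
echo 1 > /proc/sys/net/ipv4/ip_forward

# rewrite the source of packets heading for the realserver, so that its
# replies come back to the director instead of being routed past it
iptables -t nat -A POSTROUTING -p tcp -d 10.10.3.32 --dport 80 \
         -j SNAT --to-source 10.10.2.10
--->8-------------------------------------------------

The webserver then replies to 10.10.2.10, the director should reverse both
translations (its own SNAT via connection tracking, plus the ipvs DNAT), and
the reply reaches the client with the VIP as its source address.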


Pascal Bleser wrote:
Hi everyone

This is a rather lengthy post, so thanks to everyone who can take some time to 
read (and help) ;)


I have to set up heartbeat+ipvs on a rather weird (and stupid) network
topology. Unfortunately it isn't subject to change (I would have changed it
in the first place if that were possible ;)).

Here's some ASCII art to outline the network topology:
--->8-------------------------------------------------
         Internet
            |
        [firewall]---[switch]
         |            |    |
     [switch]     [app1]--[app2]
      |    |
  [lvs1]--[lvs2]
--->8-------------------------------------------------

- "firewall" is a checkpoint-1 firewall, so not much voodoo can be done there 
(but currently
simulated in my lab using a Linux box (SUSE 9.3) + netfilter/iptables)
- "lvs1" and "lvs2" are two Linux boxes (SLES 9), running heartbeat and ipvs
- "app1" and "app2" are two webservers (HP-UX 11 .. *yuck*), currently 
simulated by 2 SUSE 9.3 boxes

The unusual thing is that the realservers are not "behind" the LVS cluster
(as any sane network admin would set it up) but on another network, so the
forwarded packets must go through the firewall again.
Yes, I know, this *is* sick, but unfortunately it cannot be changed in the
production environment, for various reasons.


I'm really starting to have serious doubts that I'll be able to do that with 
LVS at all.

So, current situation:
- the firewall DNATs incoming requests from Internet clients to the VIP of the 
LVS cluster (lvs1+lvs2)
- lvs1 and lvs2 have a VIP and heartbeat works just fine, set up as hot standby
- lvs1 and lvs2 (or rather the active node, managed as a heartbeat
haresource) use mon to monitor the state of app1 and app2 - the action
scripts aren't implemented yet, but the idea is to remove a failed node (app1
or app2) from the ipvs table (see the sketch after the ipvsadm configuration
below)
- lvs1 and lvs2 have ipvs (ipvsadm 1.24) configured for LVS-NAT, as no other
strategy would work in this case (not LVS-DR because they're not on the same
wire, nor LVS-TUN because HP-UX sucks and doesn't seem to be capable of
getting IP-IP tunneling to work properly)

(BTW, if someone has been able to successfully set up IP-IP tunneling (and 
LVS-TUN) with HP-UX on
the realservers, please let me know ;))

ipvs configuration on lvs1+lvs2 is as follows:
ipvsadm -A -t 10.10.2.10:80 -s wrr
ipvsadm -a -t 10.10.2.10:80 -r 10.10.3.32:80 -m

(I'm currently trying to get ipvs to work with just one node, already failing 
here, so there's no
2nd node for 10.10.2.10:80 yet)
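
For the mon action script mentioned above, the idea would presumably boil
down to ipvsadm calls like these (a sketch only; the exact policy is still
open):

--->8-----------------------------------------------------------------
# remove a failed realserver from the virtual service...
ipvsadm -d -t 10.10.2.10:80 -r 10.10.3.32:80

# ...and add it back (LVS-NAT, weight 1) once mon reports it healthy again
ipvsadm -a -t 10.10.2.10:80 -r 10.10.3.32:80 -m -w 1
--->8-----------------------------------------------------------------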

Output of ipvsadm --list -n:
--->8-----------------------------------------------------------------
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.10.2.10:80 wrr
  -> 10.10.3.32:80                Masq    1      0          0
--->8-----------------------------------------------------------------
10.10.2.10 is the heartbeat-managed VIP of the LVS cluster
10.10.3.32 is the IP of one of the HP-UX webservers

Here's the network topology with IP addresses included, as set up in the lab:
--->8-----------------------------------------------------------------
         [client]
        10.10.1.10
            |
        10.10.1.1
        [firewall]10.10.3.1---[switch]
        10.10.2.1              |    |
         |             10.10.3.31 10.10.3.32
     [switch]              [app1]--[app2]
      |    |
10.10.2.11 10.10.2.12
  [lvs1]--[lvs2]
   (and the cluster VIP is 10.10.2.10)
--->8-----------------------------------------------------------------

- I can properly access the VIP of the LVS cluster, as well as every single 
node individually (RIP)
- the LVS cluster nodes can ping and/or HTTP to the firewall, the client and
the webserver nodes
- the webserver nodes can ping the firewall, the client, both LVS nodes and
the VIP
- the firewall is DNATing incoming port-80 packets from 10.10.1.0/24 to the
LVS cluster VIP (10.10.2.10) and SNATing (-j MASQUERADE, actually) outgoing
traffic on the 10.10.1.1 NIC (both rules are sketched below)
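
For reference, this is roughly what the simulated firewall does (a sketch of
my lab setup; eth0 as the name of the interface facing the 10.10.1.0/24
"internet" side is an assumption):

--->8-----------------------------------------------------------------
# DNAT incoming HTTP from the "internet" side to the LVS cluster VIP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 10.10.2.10

# masquerade everything that leaves on the "internet" interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
--->8-----------------------------------------------------------------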

So, here's why this isn't working and why I have doubts that it's even
technically feasible (at least with ipvs):
[ok] incoming packets go to the LVS cluster VIP (10.10.1.10 => 10.10.2.10)
[ok] ipvs DNATs them and forwards the packets (10.10.1.10 => 10.10.3.32)
[ok] the firewall routes the packets to the webserver
[ok] the webserver gets the request and replies (10.10.3.32 => 10.10.1.10)
[**] the firewall.. well.. just routes the webserver reply packets to the client
[**] the (Linux) client says.. wtf is 10.10.3.32 :\

ipvs also gets confused: it is operating in LVS-NAT mode but never sees the
webserver's reply packets come back through it, so it cannot rewrite their
source address to the VIP.
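
This one-way traffic is easy to confirm on the director (a diagnostic sketch;
eth0 stands for whatever interface the director actually uses):

--->8-----------------------------------------------------------------
# on lvs1: watch HTTP traffic - only client->VIP and director->realserver
# packets show up; the realserver's replies never come back through here
tcpdump -n -i eth0 tcp port 80
--->8-----------------------------------------------------------------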

Setting up the default route on the webservers to be 10.10.2.10 (the LVS
cluster VIP) is obviously impossible: the VIP is on a different subnet, so
the route is simply "network unreachable".

For reasons beyond my understanding, even though the firewall is masquerading
(SNAT) traffic that goes out to the "internet" (on the 10.10.1.1 interface,
see network graph above), the client still sees the IP address of the
webserver (and obviously discards the martian packets).
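
My best guess is that netfilter makes its NAT decision only on the first
packet of each tracked connection: the first packet the firewall sees for the
director->webserver leg is the forwarded SYN leaving on the 10.10.3.1 side,
where no MASQUERADE rule applies, so the whole connection (including the
webserver's replies leaving on 10.10.1.1) stays unNATed. That hypothesis
could be checked in the firewall's conntrack table (a sketch; assumes a
2.4/2.6 kernel with connection tracking loaded):

--->8-----------------------------------------------------------------
# on the firewall: look for tracked connections involving the webserver -
# the entry should show no NAT applied to the 10.10.1.10<->10.10.3.32 flow
grep 10.10.3.32 /proc/net/ip_conntrack
--->8-----------------------------------------------------------------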


As a fallback, I currently have a working solution with a simple TCP
forwarder (rinetd). When mon notices that a webserver has failed, an action
script that I wrote rewrites rinetd's configuration file and SIGHUPs rinetd.
Simple, and it works, but I'd really like to investigate further with LVS.
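
The gist of the action script (a sketch of the idea, not the actual script):

--->8-----------------------------------------------------------------
# regenerate /etc/rinetd.conf pointing at the currently healthy webserver
# (rinetd.conf format: bindaddress bindport connectaddress connectport)
echo "10.10.2.10 80 10.10.3.32 80" > /etc/rinetd.conf

# rinetd rereads its configuration file on SIGHUP
kill -HUP $(pidof rinetd)
--->8-----------------------------------------------------------------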

Maybe it's just not feasible with LVS-NAT.. ?

Thanks for any help/hints (and your time for reading).

cheers
