RE: SNAT / Masquerading problems using LVS-NAT

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: RE: SNAT / Masquerading problems using LVS-NAT
From: "Rudd, Michael" <Michael.Rudd@xxxxxxxxxxx>
Date: Thu, 12 Apr 2007 15:34:22 -0500
After some more digging, it appears this is related to the OPS (one-packet
scheduling) feature. With OPS turned off, the source IP address is
correctly SNATed to my VIP. With OPS turned on and working correctly
(which we need for our UDP service), the source IP address isn't SNATed.
 
Is anybody familiar with the code for this? I assume it's related to OPS
no longer looking up the connection in the hash table, and thus not
SNATing. Could an iptables rule possibly fix this?
 
Thanks for any help
Mike

________________________________

From: Rudd, Michael 
Sent: Tuesday, April 10, 2007 3:10 PM
To: 'LinuxVirtualServer.org users mailing list.'
Subject: RE: SNAT / Masquerading problems using LVS-NAT


Update: has anyone seen this? Using LVS-NAT, I am seeing that return
packets are not SNATed with the LVS director's VIP, but instead keep the
realservers' source IP addresses. I saw this work in the 2.4 kernel but
haven't been able to make it work in 2.6. Any clues?
 
Thanks
Mike

________________________________

From: Rudd, Michael 
Sent: Monday, March 19, 2007 9:10 AM
To: 'LinuxVirtualServer.org users mailing list.'
Subject: SNAT / Masquerading problems using LVS-NAT


My current setup has one director with two realservers behind it. Here's
the output from ipvsadm:
 
[root@jackets-a sysconfig]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
UDP  192.168.67.213:domain rr ops
  -> dnsa-c:domain                Masq    1      0          110935    
  -> dnsa-d:domain                Masq    1      0          110934    
[root@jackets-a sysconfig]# 

LVS is working the way it should, except the return packets don't have
the correct source IP address. They should come from 192.168.67.213,
the address of the service; instead they carry the address of the real
server. This worked in kernel 2.4 when I tested it two months ago, but
it's broken in my 2.6.18 kernel.
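For anyone wanting to reproduce this, the wrong source address is easy to see from a client with tcpdump. A quick diagnostic sketch (the VIP is the one above; the queried name is just an example):

```shell
# In one terminal on a client, capture DNS traffic:
tcpdump -n udp port 53
# In another terminal, send a query to the VIP:
dig @192.168.67.213 example.com
# With the bug present, the reply's source address is a realserver IP
# rather than the VIP 192.168.67.213.
```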
 
Here's also output from ip addr. Our DNS traffic runs over bond1.201:
...
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,10000> mtu 1500 qdisc noqueue 
    link/ether 00:04:23:c5:63:fc brd ff:ff:ff:ff:ff:ff
9: bond1: <BROADCAST,MULTICAST,MASTER,UP,10000> mtu 1500 qdisc noqueue 
    link/ether 00:04:23:c5:63:fd brd ff:ff:ff:ff:ff:ff
10: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
12: bond0.200@bond0: <BROADCAST,MULTICAST,MASTER,UP,10000> mtu 1500
qdisc noqueue 
    link/ether 00:04:23:c5:63:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.66.214/24 brd 192.168.66.255 scope global bond0.200
    inet 192.168.66.244/24 brd 192.168.66.255 scope global secondary
bond0.200
13: bond0.202@bond0: <BROADCAST,MULTICAST,MASTER,UP,10000> mtu 1500
qdisc noqueue 
    link/ether 00:04:23:c5:63:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.104/24 brd 192.168.2.255 scope global bond0.202
    inet 192.168.2.101/24 brd 192.168.2.255 scope global secondary
bond0.202
14: bond1.201@bond1: <BROADCAST,MULTICAST,MASTER,UP,10000> mtu 1500
qdisc noqueue 
    link/ether 00:04:23:c5:63:fd brd ff:ff:ff:ff:ff:ff
    inet 192.168.67.214/24 brd 192.168.67.255 scope global bond1.201
    inet 192.168.67.213/24 brd 192.168.67.255 scope global secondary
bond1.201
[root@jackets-a sysconfig]# 
 
I've tried the ip_route_me_harder patch I found at
http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.LVS-NAT.html#brownfield
but it doesn't appear to work correctly, at least for me. Anybody got
any clues as to what broke in 2.6?
 
Thanks
Mike
