RE: Netfilter connection tracking support for IPVS

To: Nicklas Bondesson <nicklas.bondesson@xxxxxxxxxxxx>
Subject: RE: Netfilter connection tracking support for IPVS
Cc: "' users mailing list.'" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Sat, 24 Feb 2007 02:06:08 +0200 (EET)

On Sat, 24 Feb 2007, Nicklas Bondesson wrote:

> I have scenarios like this:
> Request:
> CLIENT -> VIP[with_public_ip_1] -> A_REAL_SERVER[private_ip_1]
> Response:
> A_REAL_SERVER[private_ip_1] -> VIP[with_public_ip_1] -> CLIENT
> ---
> Request:
> CLIENT -> VIP[with_public_ip_2] -> A_REAL_SERVER[private_ip_2]
> Response:
> A_REAL_SERVER[private_ip_2] -> VIP[with_public_ip_2] -> CLIENT

        Aha, I see why you are using snat_reroute. But I want to
note the following things:

- you need to set snat_reroute only if you have source-based ip rules
where packets from VIP1 and VIP2 do not go to the same nexthop.
If you have only one possible gateway, then the kernel has already
attached this GW to the packet at routing time, so there is no need to
waste CPU trying to reroute it somewhere else by VIP when there is no
other alternative gateway.
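For illustration, a sketch of the kind of source-based ip rules setup where snat_reroute actually matters; all addresses, interface names, and table IDs here are hypothetical, not taken from the thread:

```shell
# Hypothetical two-uplink director: replies sourced from each VIP must
# leave via their own gateway, selected by source-based ip rules.

# Routing table for VIP1's uplink
ip route add default via 198.51.100.1 dev eth1 table 101
ip rule add from 192.0.2.1/32 table 101      # traffic from VIP1 -> gateway 1

# Routing table for VIP2's uplink
ip route add default via 203.0.113.1 dev eth2 table 102
ip rule add from 192.0.2.2/32 table 102      # traffic from VIP2 -> gateway 2

# Only with such rules does rerouting by source (VIP) change the
# nexthop, so only then is the extra lookup from snat_reroute useful:
echo 1 > /proc/sys/net/ipv4/vs/snat_reroute
```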

- you don't need iptables SNAT rules to SNAT this traffic, and
netfilter will not reroute it: netfilter simply does not bind to a
nexthop for NAT connections. Also, you can not expect IPVS packets to
reach netfilter in POST_ROUTING, so the SNAT rule will not see them.
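To make that concrete, here is a minimal sketch of the corresponding IPVS-NAT setup (VIPs, port, scheduler, and real-server addresses are hypothetical examples); the reply SNAT back to the VIP is performed by IPVS itself, not by an iptables rule:

```shell
# IPVS in NAT (masquerading) mode: IPVS rewrites the destination on the
# way in and restores the VIP as source on replies by itself.
ipvsadm -A -t 192.0.2.1:80 -s rr            # virtual service on VIP1
ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.10 -m  # real server 1, -m = NAT/masq

ipvsadm -A -t 192.0.2.2:80 -s rr            # virtual service on VIP2
ipvsadm -a -t 192.0.2.2:80 -r 10.0.0.20 -m  # real server 2

# No "iptables -t nat -A POSTROUTING ... -j SNAT" rule is needed for
# this traffic; IPVS packets would not traverse it in POST_ROUTING anyway.
```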

> I'm not sure if I'm being clear here, but in simple words: the same public
> ip address that the client uses to connect to the LVS should be used as
> source ip in the response to the client.
> I have multiple public ip addresses that I need to source NAT.

        OK, but what do you actually see, what is the real problem?
Are packets dropped and never reach the uplink router, or are they
routed improperly when you have 2 or more uplinks? Do you have
source-based ip rules?

> The firewall is on the same box as the director.
> Any pointers?
> Thanks,
> Nicklas


Julian Anastasov <ja@xxxxxx>
