Re: [lvs-users] UDP load balancing (Virtual server)

To: Simon Bernard <contact@xxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] UDP load balancing (Virtual server)
Cc: lvs-users <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Sun, 25 Nov 2018 14:01:20 +0200 (EET)
        Hello,

On Tue, 20 Nov 2018, Simon Bernard wrote:

> I tried to experiment with this use case and ran into the following behavior...
> 
> Let's say I have a client C1 with IP address C1IP,
> a load balancer LB with virtual IP address VIP,
> and 2 cluster nodes (R1 and R2) with IP addresses R1IP and R2IP.
> 
> All the UDP traffic uses port 5685.
> To simulate traffic I run netcat (nc) on C1, R1 and R2.
> 
> ---- Configuration on LB : -----------------------
> $ ipvsadm -Ln
> 
> UDP  VIP:5685 rr persistent 1200
>    -> R1IP             Masq    1      0          0
>    -> R2IP             Masq    1      0          0
> 
> $ iptables -L -t nat -n
> Chain PREROUTING (policy ACCEPT)
> target     prot opt source               destination
> 
> Chain INPUT (policy ACCEPT)
> target     prot opt source               destination
> 
> Chain OUTPUT (policy ACCEPT)
> target     prot opt source               destination
> 
> Chain POSTROUTING (policy ACCEPT)
> target     prot opt source               destination
> MASQUERADE  all  --  R1-R2subnet/16        0.0.0.0/0
> -------------------------------------------------------
> 
> Now I send a UDP packet from C1 to LB, and the message arrives on R2.
> 
> Using "sudo ipvsadm -Lnc" on LB, I see:
> 
> pro expire state       source             virtual destination
> UDP 04:54  UDP         C1IP:5686    VIP:5685   R2IP:5685
> UDP 19:54  UDP         C1IP:0       VIP:5685   R2IP:0
> 
> I reply from R2 and everything is OK (checked with tcpdump).
> 
> I check with "conntrack -Ln | grep R2IP" and I see nothing (OK, probably
> handled by LVS).

        Probably, the IPVS sysctl var "conntrack" is set to 0, so
IPVS switches the conntracks for the packets it forwards to notrack
mode. As a result, NF conntracks are created and destroyed for every
packet, which saves memory.
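
        For reference, the sysctl in question is net.ipv4.vs.conntrack;
a rough way to check or change it on the director (defaults and side
effects may differ between kernel versions):

        # show whether IPVS asks Netfilter to keep conntrack state
        # for the packets it forwards (0 = notrack)
        sysctl net.ipv4.vs.conntrack

        # enable it only if iptables/conntrack rules really need the
        # state; it costs extra memory and CPU per IPVS connection
        sysctl -w net.ipv4.vs.conntrack=1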

> 
> Now I send a packet from R1 to C1 (server-initiated use case), and the
> packet is successfully received on C1.
> I can see a new entry in conntrack/netfilter:
> 
> udp      17 24 src=R1IP dst=C1IP sport=5685 dport=5685 [UNREPLIED] 
> src=C1IP dst=VIP sport=5685 dport=5685 mark=0 use=1
> 
> (this entry has a 30s lifetime)
> 
> If I reply in time from C1 to LB, the answer goes to R1, and in
> conntrack I can still see the entry (no longer in UNREPLIED state).

        This should be handled by Netfilter NAT; IPVS does nothing
to connections created by NF NAT.

> After several exchanges, the state of the entry goes to ASSURED and
> the lifetime is now around 180s.
> 
> While doing that, I checked the LVS entries using "sudo ipvsadm -Lnc":
> C1 is still associated with R2, although the packets go to R1...

        That is because IPVS has a connection only for R2; we do not
mess with other traffic.
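
        A simple way to see the two separate state tables side by side
on the director, using the port from the setup above:

        # IPVS connection table: holds only the client-initiated
        # flow that was scheduled to R2
        ipvsadm -Lnc | grep 5685

        # Netfilter conntrack table: holds only the R1-initiated
        # NAT flow
        conntrack -L -p udp 2>/dev/null | grep 5685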

> 
> Once the entry in conntrack times out, if I send a UDP packet from C1
> to LB again, R2 will receive it (as long as the entry in LVS is alive).
> 
> (I also tested by removing the entry in netfilter/conntrack using
> "conntrack -F" to flush all entries, instead of waiting for the
> lifetime to expire.)
> 
> I was hoping that when R1 talked to C1 (server-initiated use case),
> the entry in LVS would have been modified and the traffic would not
> have been handled by conntrack.

        No, IPVS does not create connections just because Netfilter
confirmed the conntrack in the last hook handler when the first
packet reached POST_ROUTING. IPVS is designed to work without
Netfilter conntracks but can still coexist with them, with some
restrictions. Still, IPVS is able to create server-initiated
connections, but under the control of the IPVS SIP module, e.g.
based on the Call-ID header.
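
        As an illustration of that mode, a sketch with placeholder
addresses for SIP over UDP (not the setup above; details such as the
one-packet scheduling flag are specific to SIP):

        # load the SIP persistence engine and attach it to a
        # persistent UDP virtual service
        modprobe ip_vs_pe_sip
        ipvsadm -A -u VIP:5060 -s rr -p 120 --pe sip -o
        ipvsadm -a -u VIP:5060 -r R1IP:5060 -m
        ipvsadm -a -u VIP:5060 -r R2IP:5060 -m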

        So, it depends on the protocol whether the different packets
require some stickiness based on address or content. Problems come
when IPVS cannot instruct Netfilter which external IP to select for
the packets to the client. Bad results can happen if we need
persistence, or if we attempt to create many connections between the
same client and virtual IP:port but for different real servers.
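
        A partial workaround on the Netfilter side, if the masquerade
picks a source address other than the VIP, is an explicit SNAT rule
for the server-initiated packets (a sketch with the names from the
setup above; it does not create an IPVS connection, so later client
packets can still be scheduled to the other real server):

        # make server-initiated UDP packets leave with the VIP as
        # source instead of whatever address MASQUERADE picks
        iptables -t nat -I POSTROUTING -p udp -s R1IP --sport 5685 \
                -j SNAT --to-source VIP
        iptables -t nat -I POSTROUTING -p udp -s R2IP --sport 5685 \
                -j SNAT --to-source VIP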

> I mean, is there a way to do this? Does it make sense to you?

        For IPVS, the way to go is to check
net/netfilter/ipvs/ip_vs_pe_sip.c, the conn_out method of
struct ip_vs_pe, and ip_vs_new_conn_out(). In short, IPVS would
need a module that implements the required protocol logic. IPVS
can control such connections from the FORWARD hook, where in
ip_vs_new_conn_out() we select the virtual service (VIP and
VPORT) and create a connection from RIP:RPORT to CIP:CPORT.

Regards

--
Julian Anastasov <ja@xxxxxx>
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users