Re: [PATCH] netfilter/ipvs: immediately expire UDP connections matching unavailable destination if expire_nodest_conn=1

To: Marco Angaroni <marcoangaroni@xxxxxxxxx>
Subject: Re: [PATCH] netfilter/ipvs: immediately expire UDP connections matching unavailable destination if expire_nodest_conn=1
Cc: Julian Anastasov <ja@xxxxxx>, "David S. Miller" <davem@xxxxxxxxxxxxx>, Alexey Kuznetsov <kuznet@xxxxxxxxxxxxx>, Hideaki YOSHIFUJI <yoshfuji@xxxxxxxxxxxxxx>, Wensong Zhang <wensong@xxxxxxxxxxxx>, Simon Horman <horms@xxxxxxxxxxxx>, Jakub Kicinski <kuba@xxxxxxxxxx>, Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>, Jozsef Kadlecsik <kadlec@xxxxxxxxxxxxx>, Florian Westphal <fw@xxxxxxxxx>, "open list:IPVS" <netdev@xxxxxxxxxxxxxxx>, "open list:IPVS" <lvs-devel@xxxxxxxxxxxxxxx>, open list <linux-kernel@xxxxxxxxxxxxxxx>, "open list:NETFILTER" <netfilter-devel@xxxxxxxxxxxxxxx>, "open list:NETFILTER" <coreteam@xxxxxxxxxxxxx>
From: Andrew Kim <kim.andrewsy@xxxxxxxxx>
Date: Tue, 19 May 2020 10:18:45 -0400
Hi Marco,

> could you please confirm if/how this patch is changing any of the
> following behaviours, which I’m listing below as per my understanding?

This patch would optimize case b): it lowers the chances of packets
being silently dropped.

> However I'm confused about the references to OPS mode.
> And why you need to expire all the connections at once: if you expire
> on a per connection basis, the client experiences the same behaviour
> (no more re-transmissions), but you avoid the complexities of a new
> thread.

I agree that for TCP the client would experience the same behavior. This
is mostly problematic for UDP, when there are many entries in the
connection hash from the same client to one destination. If we only
expire connections on receiving packets, many packets can be dropped
before all entries are expired (or the UDP timeout is reached). Expiring
all matching UDP connections immediately significantly reduces the
number of dropped packets.
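
For illustration, IPVS matches a UDP packet to an existing entry purely
by its tuple, so a client that reuses a source port keeps hitting the
stale entry for a deleted destination. A rough sketch of the lookup key
(the struct and field names here are illustrative, not the exact kernel
layout):

        struct conn_key {
                __u16              protocol; /* IPPROTO_UDP */
                union nf_inet_addr caddr;    /* client address */
                __be16             cport;    /* client port, maybe reused */
                union nf_inet_addr vaddr;    /* virtual (service) address */
                __be16             vport;    /* virtual (service) port */
        };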

Thanks,

Andrew Sy Kim

On Tue, May 19, 2020 at 7:46 AM Marco Angaroni <marcoangaroni@xxxxxxxxx> wrote:
>
> Hi Andrew, Julian,
>
> could you please confirm if/how this patch is changing any of the
> following behaviours, which I’m listing below as per my understanding?
>
> When expire_nodest is set and real-server is unavailable, at the
> moment the following happens to a packet going through IPVS:
>
> a) TCP (or other connection-oriented protocols):
>    the packet is silently dropped, then the following retransmission
> causes the generation of a RST from the load-balancer to the client,
> which will then re-open a new TCP connection
> b) UDP:
>    the packet is silently dropped, then the following retransmission
> is rescheduled to a new real-server
> c) UDP in OPS mode:
>    the packet is rescheduled to a new real-server, as no previous
> connection exists in IPVS connection table, and a new OPS connection
> is created (but it lasts only the time to transmit the packet)
> d) UDP in OPS mode + persistent-template:
>    the packet is rescheduled to a new real-server, as previous
> template-connection is invalidated, a new template-connection is
> created, and a new OPS connection is created (but it lasts only the
> time to transmit the packet)
>
> It seems to me that you are trying to optimize cases a) and b),
> avoiding the first step where the packet is silently dropped and
> consequently avoiding the retransmission.
> And, at the same time, expiring all the other connections pointing to
> the unavailable real-server.
>
> However I'm confused about the references to OPS mode.
> And why you need to expire all the connections at once: if you expire
> on a per connection basis, the client experiences the same behaviour
> (no more re-transmissions), but you avoid the complexities of a new
> thread.
>
> Maybe also the documentation of expire_nodest_conn sysctl should be updated.
> When it's stated:
>
>         If this feature is enabled, the load balancer will expire the
>         connection immediately when a packet arrives and its
>         destination server is not available, then the client program
>         will be notified that the connection is closed
>
> I think it should be at least "and the client program" instead of
> "then the client program".
> Or a more detailed explanation.
>
> Thanks
> Marco Angaroni
>
>
> On Mon, May 18, 2020 at 10:06 PM Andrew Kim
> <kim.andrewsy@xxxxxxxxx> wrote:
> >
> > Hi Julian,
> >
> > Thank you for getting back to me. I will update the patch based on
> > your feedback shortly.
> >
> > Regards,
> >
> > Andrew
> >
> > On Mon, May 18, 2020 at 3:10 PM Julian Anastasov <ja@xxxxxx> wrote:
> > >
> > >
> > >         Hello,
> > >
> > > On Sun, 17 May 2020, Andrew Sy Kim wrote:
> > >
> > > > If expire_nodest_conn=1 and a UDP destination is deleted, IPVS should
> > > > also expire all matching connections immediately instead of waiting for
> > > > the next matching packet. This is particularly useful when there are a
> > > > lot of packets coming from a small number of clients. Those clients are
> > > > likely to match against existing entries if a source port in the
> > > > connection hash is reused. When the number of entries in the connection
> > > > tracker is large, we can significantly reduce the number of dropped
> > > > packets by expiring all connections upon deletion.
> > > >
> > > > Signed-off-by: Andrew Sy Kim <kim.andrewsy@xxxxxxxxx>
> > > > ---
> > > >  include/net/ip_vs.h             |  7 ++++++
> > > >  net/netfilter/ipvs/ip_vs_conn.c | 38 +++++++++++++++++++++++++++++++++
> > > >  net/netfilter/ipvs/ip_vs_core.c |  5 -----
> > > >  net/netfilter/ipvs/ip_vs_ctl.c  |  9 ++++++++
> > > >  4 files changed, 54 insertions(+), 5 deletions(-)
> > > >
> > >
> > > > diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
> > > > index 02f2f636798d..c69dfbbc3416 100644
> > > > --- a/net/netfilter/ipvs/ip_vs_conn.c
> > > > +++ b/net/netfilter/ipvs/ip_vs_conn.c
> > > > @@ -1366,6 +1366,44 @@ static void ip_vs_conn_flush(struct netns_ipvs *ipvs)
> > > >               goto flush_again;
> > > >       }
> > > >  }
> > > > +
> > > > +/*   Flush all the connection entries in the ip_vs_conn_tab with a
> > > > + *   matching destination.
> > > > + */
> > > > +void ip_vs_conn_flush_dest(struct netns_ipvs *ipvs, struct ip_vs_dest *dest)
> > > > +{
> > > > +     int idx;
> > > > +     struct ip_vs_conn *cp, *cp_c;
> > > > +
> > > > +     rcu_read_lock();
> > > > +     for (idx = 0; idx < ip_vs_conn_tab_size; idx++) {
> > > > +             hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[idx], c_list) {
> > > > +                     if (cp->ipvs != ipvs)
> > > > +                             continue;
> > > > +
> > > > +                     if (cp->dest != dest)
> > > > +                             continue;
> > > > +
> > > > +                     /* As timers are expired in LIFO order, restart
> > > > +                      * the timer of controlling connection first, so
> > > > +                      * that it is expired after us.
> > > > +                      */
> > > > +                     cp_c = cp->control;
> > > > +                     /* cp->control is valid only with reference to cp */
> > > > +                     if (cp_c && __ip_vs_conn_get(cp)) {
> > > > +                             IP_VS_DBG(4, "del controlling connection\n");
> > > > +                             ip_vs_conn_expire_now(cp_c);
> > > > +                             __ip_vs_conn_put(cp);
> > > > +                     }
> > > > +                     IP_VS_DBG(4, "del connection\n");
> > > > +                     ip_vs_conn_expire_now(cp);
> > > > +             }
> > > > +             cond_resched_rcu();
> > >
> > >         Such kind of loop is correct if done in another context:
> > >
> > > 1. kthread
> > > or
> > > 2. delayed work: mod_delayed_work(system_long_wq, ...)
> > >
> > >         Otherwise cond_resched_rcu() can schedule() while holding
> > > __ip_vs_mutex. Also, it will add long delay if many dests are
> > > removed.
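> > >
> > >         A minimal sketch of variant 2 (dest_flush_work and the flush
> > > helper it calls are hypothetical names, not existing kernel code):
> > >
> > >         static void ip_vs_dest_flush_work_fn(struct work_struct *work)
> > >         {
> > >                 struct netns_ipvs *ipvs = container_of(work,
> > >                                 struct netns_ipvs, dest_flush_work.work);
> > >
> > >                 /* No mutex is held here, so cond_resched_rcu() inside
> > >                  * the flush loop is safe.
> > >                  */
> > >                 ip_vs_conn_flush_dest_unavail(ipvs);
> > >         }
> > >
> > >         /* at netns init time: */
> > >         INIT_DELAYED_WORK(&ipvs->dest_flush_work,
> > >                           ip_vs_dest_flush_work_fn);
> > >
> > >         /* whenever a dest becomes unavailable: */
> > >         mod_delayed_work(system_long_wq, &ipvs->dest_flush_work, 0);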
> > >
> > >         If such loop analyzes instead all cp->dest for
> > > IP_VS_DEST_F_AVAILABLE, it should be done after calling
> > > __ip_vs_conn_get().
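> > >
> > >         For example, the loop body could look like this (sketch only,
> > > untested):
> > >
> > >         hlist_for_each_entry_rcu(cp, &ip_vs_conn_tab[idx], c_list) {
> > >                 if (cp->ipvs != ipvs || !__ip_vs_conn_get(cp))
> > >                         continue;
> > >                 /* cp->dest can be checked while we hold a reference */
> > >                 if (cp->dest &&
> > >                     !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE))
> > >                         ip_vs_conn_expire_now(cp);
> > >                 __ip_vs_conn_put(cp);
> > >         }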
> > >
> > > >  static int sysctl_snat_reroute(struct netns_ipvs *ipvs) { return 0; }
> > > > diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
> > > > index 8d14a1acbc37..f87c03622874 100644
> > > > --- a/net/netfilter/ipvs/ip_vs_ctl.c
> > > > +++ b/net/netfilter/ipvs/ip_vs_ctl.c
> > > > @@ -1225,6 +1225,15 @@ ip_vs_del_dest(struct ip_vs_service *svc, struct ip_vs_dest_user_kern *udest)
> > > >        */
> > > >       __ip_vs_del_dest(svc->ipvs, dest, false);
> > > >
> > > > +     /*      If expire_nodest_conn is enabled and protocol is UDP,
> > > > +      *      attempt best effort flush of all connections with this
> > > > +      *      destination.
> > > > +      */
> > > > +     if (sysctl_expire_nodest_conn(svc->ipvs) &&
> > > > +         dest->protocol == IPPROTO_UDP) {
> > > > +             ip_vs_conn_flush_dest(svc->ipvs, dest);
> > >
> > >         The above work should be scheduled from __ip_vs_del_dest().
> > > The check for UDP is not needed; sysctl_expire_nodest_conn() applies
> > > to all protocols.
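> > >
> > >         E.g. (sketch, reusing the hypothetical dest_flush_work item
> > > from above):
> > >
> > >         /* in __ip_vs_del_dest(), after the dest is unlinked: */
> > >         if (sysctl_expire_nodest_conn(ipvs))
> > >                 mod_delayed_work(system_long_wq,
> > >                                  &ipvs->dest_flush_work, 0);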
> > >
> > >         If the flushing is complex to implement, we can still allow
> > > rescheduling for unavailable dests:
> > >
> > > - first we should move this block above the ip_vs_try_to_schedule()
> > > block because:
> > >
> > >         1. the scheduling does not return unavailable dests, even
> > >         for persistence, so no need to check new connections for
> > >         the flag
> > >
> > >         2. it will allow to create new connection if dest for
> > >         existing connection is unavailable
> > >
> > >         if (cp && cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
> > >                 /* the destination server is not available */
> > >
> > >                 if (sysctl_expire_nodest_conn(ipvs)) {
> > >                         bool uses_ct = ip_vs_conn_uses_conntrack(cp, skb);
> > >
> > >                         ip_vs_conn_expire_now(cp);
> > >                         __ip_vs_conn_put(cp);
> > >                         if (uses_ct)
> > >                                 return NF_DROP;
> > >                         cp = NULL;
> > >                 } else {
> > >                         __ip_vs_conn_put(cp);
> > >                         return NF_DROP;
> > >                 }
> > >         }
> > >
> > >         if (unlikely(!cp)) {
> > >                 int v;
> > >
> > >                 if (!ip_vs_try_to_schedule(ipvs, af, skb, pd, &v, &cp, &iph))
> > >                         return v;
> > >         }
> > >
> > >         Until now, we always waited one jiffie for the connection to
> > > expire; now one packet will:
> > >
> > > - schedule expiration for existing connection with unavailable dest,
> > > as before
> > >
> > > - create new connection to an available destination that will be
> > > found first in the lists. But it can work only when the sysctl var
> > > "conntrack" is 0; we do not want to create two netfilter conntracks
> > > to different real servers.
> > >
> > >         Note that we intentionally removed the timer_pending() check
> > > because we can not see existing ONE_PACKET connections in table.
> > >
> > > Regards
> > >
> > > --
> > > Julian Anastasov <ja@xxxxxx>
