To: Cezar Sá Espinola <cezarsa@xxxxxxxxx>
Subject: Re: [PATCH RFC] ipvs: add genetlink cmd to dump all services and destinations
Cc: Wensong Zhang <wensong@xxxxxxxxxxxx>, Simon Horman <horms@xxxxxxxxxxxx>, "open list:IPVS" <netdev@xxxxxxxxxxxxxxx>, "open list:IPVS" <lvs-devel@xxxxxxxxxxxxxxx>, open list <linux-kernel@xxxxxxxxxxxxxxx>, "open list:NETFILTER" <netfilter-devel@xxxxxxxxxxxxxxx>, "open list:NETFILTER" <coreteam@xxxxxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Tue, 3 Nov 2020 21:18:57 +0200 (EET)
        Hello,

On Tue, 3 Nov 2020, Cezar Sá Espinola wrote:

> >         And now what happens if all dests can not fit in a packet?
> > We should start next packet with the same svc? And then
> > user space should merge the dests when multiple packets
> > start with same service?
> 
> My (maybe not so great) idea was to avoid repeating the svc on each
> packet. It's possible for a packet to start with a destination and
> user space must consider them as belonging to the last svc received in
> the previous packet. The comparison "ctx->last_svc != svc" was
> intended to ensure that a packet only starts with destinations if the
> current service is the same as the svc we sent on the previous packet.

        You can also consider the idea of having 3 coordinates
for the start svc: idx_svc_tab (0 or 1), idx_svc_row
(0..IP_VS_SVC_TAB_SIZE-1) and idx_svc as the index in the row's
chain. On a new packet these will identify the htable and its row,
and we only have to skip svcs in this row to find our starting svc.
I think this will still fit in the netlink_callback's args area.
If not, we can always kmalloc our context and store it in args[0].
With a single table, this should speed up the start svc lookup
by 128 times on average (we have 256 rows). In a setup with
1024 svcs (4 per row on average across the 256 rows) we would
skip only 0..3 entries instead of 512 on average.
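
        Just to illustrate the idea (the struct, field and function
names below are only placeholders, not something from your patch),
the cursor could look roughly like this:

#include <linux/netlink.h>
#include <linux/build_bug.h>
#include <linux/skbuff.h>

/* Illustrative resume cursor that fits into cb->args. */
struct ip_vs_dump_ctx {
	unsigned int idx_tab;	/* 0 or 1: which svc hash table */
	unsigned int idx_row;	/* 0..IP_VS_SVC_TAB_SIZE-1 */
	unsigned int idx_svc;	/* svcs to skip in the row's chain */
	unsigned int idx_dest;	/* dest to resume from in that svc */
};

static int ip_vs_genl_dump_all(struct sk_buff *skb,
			       struct netlink_callback *cb)
{
	struct ip_vs_dump_ctx *ctx = (struct ip_vs_dump_ctx *)cb->args;

	BUILD_BUG_ON(sizeof(*ctx) > sizeof(cb->args));

	/* Walk the table selected by ctx->idx_tab, start at row
	 * ctx->idx_row, skip ctx->idx_svc entries in its chain and
	 * continue with dests from ctx->idx_dest. Update the cursor
	 * before returning when the skb is full.
	 */
	return skb->len;
}

On every new dump call we read the cursor, continue from that
position and write it back before returning.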

> >         last_svc is used out of __ip_vs_mutex region,
> > so it is not safe. We can get a reference count but this
> > is bad if user space blocks.
> 
> I thought it would be relatively safe to store a pointer to the last
> svc since I would only use it for pointer comparison and never
> dereference it. But in retrospect it does look unsafe and fragile,
> and could probably lead to errors, especially if services are modified
> during a dump, causing the stored pointer to point to a different
> service.

        Yes, nobody is using such pointers. We should create
packets that correctly identify the svc for the dests. The
drawback is that user space may need more work for merging. We
can always create a sorted array of pointers to svcs, so that we
can binary search with bsearch() for the svc from every received
packet. Then we will know if this is a new svc or an old one
(with dests spread over multiple packets). Should we also check
for dest duplicates in the svc? The question is how safe we
should play it. In user space the most we can do is to avoid
duplicates and to put dests into their correct svc.
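
        For example (purely illustrative, the key layout and the
names are an assumption, not a requirement), the agent can keep
the svcs it has already seen in a sorted array and bsearch() the
svc header from every received packet:

#include <stdlib.h>
#include <string.h>

/* Illustrative key built from the netlink attributes of one svc. */
struct svc_key {
	int af;
	int protocol;
	unsigned char addr[16];
	unsigned short port;
	unsigned int fwmark;
};

static int svc_key_cmp(const void *a, const void *b)
{
	const struct svc_key *x = a, *y = b;
	int r;

	if (x->af != y->af)
		return x->af < y->af ? -1 : 1;
	if (x->protocol != y->protocol)
		return x->protocol < y->protocol ? -1 : 1;
	r = memcmp(x->addr, y->addr, sizeof(x->addr));
	if (r)
		return r;
	if (x->port != y->port)
		return x->port < y->port ? -1 : 1;
	if (x->fwmark != y->fwmark)
		return x->fwmark < y->fwmark ? -1 : 1;
	return 0;
}

/* Returns the known svc or NULL if it is new; the table is kept
 * sorted with qsort() after every new svc is appended. */
static struct svc_key *svc_lookup(struct svc_key *table, size_t n,
				  const struct svc_key *key)
{
	return bsearch(key, table, n, sizeof(*table), svc_key_cmp);
}

Dest duplicates, if we decide to check for them, can be filtered
the same way with a per-svc sorted array of dests.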

> >         But even if we use just indexes it should be ok.
> > If multiple agents are used in parallel it is not our
> > problem. What can happen is that we can send duplicates
> > or to skip entries (both svcs and dests). It is impossible
> to keep any kind of reference to current entries, or even
> keys to look them up, if another agent can remove them.
> 
> Got it. I noticed this behavior while writing this patch and even
> created a few crude validation scripts running parallel agents and
> checking the diff in [1].

        Ok, make sure your tests cover cases with many
dests, so that a single service occupies multiple packets;
I'm not sure whether 100 dests fit in one packet or not.

Regards

--
Julian Anastasov <ja@xxxxxx>