Re: [PATCH 2/2] ipvs: Use cond_resched_rcu_lock() helper when dumping connections

To: Eric Dumazet <eric.dumazet@xxxxxxxxx>
Subject: Re: [PATCH 2/2] ipvs: Use cond_resched_rcu_lock() helper when dumping connections
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Simon Horman <horms@xxxxxxxxxxxx>, Julian Anastasov <ja@xxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, lvs-devel@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxxxxxx, netfilter-devel@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>, Dipankar Sarma <dipankar@xxxxxxxxxx>, dhaval.giani@xxxxxxxxx
From: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Fri, 26 Apr 2013 09:30:46 -0700
On Fri, Apr 26, 2013 at 08:59:07AM -0700, Eric Dumazet wrote:
> On Fri, 2013-04-26 at 08:45 -0700, Paul E. McKenney wrote:
> > I have done some crude coccinelle patterns in the past, but they are
> > subject to false positives (from when you transfer the pointer from
> > RCU protection to reference-count protection) and also false negatives
> > (when you atomically increment some statistic unrelated to protection).
> > 
> > I could imagine maintaining a per-thread count of the number of outermost
> > RCU read-side critical sections at runtime, and then associating that
> > counter with a given pointer at rcu_dereference() time, but this would
> > require either compiler magic or an API for using a pointer returned
> > by rcu_dereference().  This API could in theory be enforced by sparse.
> > 
> > Dhaval Giani might have some ideas as well, adding him to CC.
> We had this fix the other day, because the tcp prequeue code hit this check:
> static inline struct dst_entry *skb_dst(const struct sk_buff *skb)
> {
>         /* If refdst was not refcounted, check we still are in a 
>          * rcu_read_lock section
>          */
>         WARN_ON((skb->_skb_refdst & SKB_DST_NOREF) &&
>                 !rcu_read_lock_held() &&
>                 !rcu_read_lock_bh_held());
>         return (struct dst_entry *)(skb->_skb_refdst & SKB_DST_PTRMASK);
> }
> ( 
>  )
> Problem is the rcu protected pointer was escaping the rcu lock and was
> then used in another thread.

All the way to some other thread?  That is a serious escape!  ;-)

> What would be cool (but maybe expensive) would be to get a cookie from
> rcu_read_lock() and check the cookie at rcu_dereference(). These
> cookies would have system-wide scope to catch any kind of error.

I suspect that your cookie and my counter are quite similar.

> Because a per-thread counter would not catch the following problem:
> rcu_read_lock();
> ptr = rcu_dereference(x);
> if (!ptr)
>       return NULL;
> ...
> rcu_read_unlock();
> ...
> rcu_read_lock();
> /* no reload of x, ptr might be now stale/freed */
> if (ptr->field) { ... }

Well, that is why I needed to appeal to compiler magic or an API
extension.  Either way, the pointer returned from rcu_dereference()
must be tagged with the ID of the outermost rcu_read_lock() that
covers it.  Then that tag must be checked against that of the outermost
rcu_read_lock() when it is dereferenced.  If we introduced yet another
set of RCU API members (shudder!) with which to wrap your
ptr->field use, we could associate additional storage with the
pointer to hold the cookie/counter.  And then use sparse to enforce
use of the new API.

Of course, if we were using C++, we could define a template for pointers
returned by rcu_dereference() in order to make this work.  Now, where
did I put that asbestos suit, you know, the one with the titanium
pinstripes?  ;-)

But even that has some "interesting" issues.  To see this, let's
modify your example a bit:

        rcu_read_lock();
        ptr = rcu_dereference(x);
        if (!ptr)
                return NULL;
        ...
        rcu_read_unlock();
        ...
        rcu_read_lock_bh();
        /* no reload of x, ptr might be now stale/freed */
        if (ptr->field) { ... }
        ...
        rcu_read_unlock_bh();
Now, is it the rcu_read_lock() or the rcu_read_lock_bh() that protects
the rcu_dereference()?

                                                        Thanx, Paul
