Thanks, I have added some text which I will include in v2.

--
To unsubscribe from this list: send the line "unsubscribe lvs-devel" in the body of a message to majordomo@xxxxxxxxxxxxxxx
Hello,

I see, CONFIG_PREEMPT_RCU depends on CONFIG_PREEMPT. OK, yes, thanks for the explanation! Simon, so let's do it as suggested by Eric and Paul:

	rcu_read_unlock();
	cond_resched();
	rcu_read_lock();
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Mon, 29 Apr 2013 14:30:02 -0700
Yep, I really did intend to say "#ifndef CONFIG_PREEMPT_RCU". A couple of things to keep in mind:

1. Although rcu_read_unlock() does map to preempt_enable() for CONFIG_TINY_RCU and CONFIG_TREE_RCU, t
Hello,

Hm, is this correct? If I follow the ifdefs, preempt_schedule() is called when CONFIG_PREEMPT is defined _and_ CONFIG_PREEMPT_RCU is not defined. Your example for CONFIG_PREEMPT_RCU is the opposite
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Sat, 27 Apr 2013 09:20:49 -0700
I would instead suggest something like:

But yes, in the CONFIG_PREEMPT_RCU case, the cond_resched() is not needed.

Thanx, Paul
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Sat, 27 Apr 2013 09:17:32 -0700
;-) ;-) ;-)

I must confess that I would prefer a somewhat less heavy-handed approach.

Thanx, Paul
Hello,

So I assume that, to help realtime kernels and rcu_barrier(), it is not a good idea to guard rcu_read_unlock() with checks. I see that rcu_read_unlock() will try to reschedule in the !CONFIG_PREEMPT_RCU case
You just know that's going to be _so_ popular ;-)
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Fri, 26 Apr 2013 12:04:28 -0700
A call to rcu_barrier() only blocks on already-queued RCU callbacks, so if there are no RCU callbacks queued in the system, it need not block at all. But it might need to wait on some callbacks, and
One question: if some thread(s) is (are) calling rcu_barrier() and waiting for us to exit the rcu_read_lock() section, is need_resched() enough to allow breaking the section? If not, maybe we should n
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Fri, 26 Apr 2013 10:48:16 -0700
Don't get me wrong, I am not opposing cond_resched_rcu_lock() because it will be difficult to validate. For one thing, until there are a lot of them, manual inspection is quite possible. So feel free
Luckily cond_resched_rcu_lock() will typically only occur within loops, and loops tend to be contained in a single source file. This would suggest a simple static checker should be able to tell without
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Fri, 26 Apr 2013 09:30:46 -0700
All the way to some other thread? That is a serious escape! ;-)

I suspect that your cookie and my counter are quite similar. Well, that is why I needed to appeal to compiler magic or an API extension
We had this fix the other day, because the TCP prequeue code hit this check:

	static inline struct dst_entry *skb_dst(const struct sk_buff *skb)
	{
		/* If refdst was not refcounted, check we still are in a
Author: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
Date: Fri, 26 Apr 2013 08:45:47 -0700
I have done some crude coccinelle patterns in the past, but they are subject to false positives (from when you transfer the pointer from RCU protection to reference-count protection) and also false negatives
While I agree with the sentiment, I do find it a somewhat dangerous construct in that it might become far too easy to keep an RCU reference over this break and thus violate the RCU premise. Is there a
Feel free to route this via the networking tree. Note that this change isn't a pure clean-up but has functional effects as well: on !PREEMPT or PREEMPT_VOLUNTARY kernels it will add in a potential co
This avoids the situation where a dump of a large number of connections may prevent scheduling for a long time, while also avoiding excessive calls to rcu_read_unlock() and rcu_read_lock().

Cc: Eric D