Hello Horms,
> It sounds entirely likely that there is a race in there somewhere,
> though it's probably going to be quite hard to find exactly why
> an entry is being hashed that is already hashed.

Where is it hashed? Isn't it dereferenced and removed from the cp?

> I'll poke around myself, but if anyone else wants to do likewise,
> please do.

I had a quick glance but wasn't enlightened. It's strange because I
happen to use the exact same kernel on the exact same box for one of our
customers as well, and it does not trigger for me.
> In the meantime, a non-SMP kernel would be an excellent way to go.

For sure ;).

> Firstly, it may well eliminate the problem, or conversely show us that
> it's not an SMP locking issue.
I wonder what he's testing right now :). I might be interested in
getting a report with frame pointers enabled. My first thought was a
locking issue related to software interrupts: the list write locking is
not IRQ safe, so I wondered if something along the lines of the
following patch helps (granted, the window of opportunity is small):
index 015c819..e4ad8f8 100644
--- a/net/ipv4/ipvs/ip_vs_conn.c
+++ b/net/ipv4/ipvs/ip_vs_conn.c
@@ -62,6 +62,9 @@ static atomic_t ip_vs_conn_no_cport_cnt
/* random value for IPVS connection hash */
static unsigned int ip_vs_conn_rnd;
+/* Save softIRQ flags */
+static unsigned long flags;
+
/*
* Fine locking granularity for big connection hash table
*/
@@ -98,6 +101,16 @@ static inline void ct_write_unlock(unsig
write_unlock(&__ip_vs_conntbl_lock_array[key&CT_LOCKARRAY_MASK].l);
}
+static inline void ct_write_lock_irq(unsigned key)
+{
+	write_lock_irqsave(&__ip_vs_conntbl_lock_array[key&CT_LOCKARRAY_MASK].l, flags);
+}
+
+static inline void ct_write_unlock_irq(unsigned key)
+{
+	write_unlock_irqrestore(&__ip_vs_conntbl_lock_array[key&CT_LOCKARRAY_MASK].l, flags);
+}
+
static inline void ct_read_lock_bh(unsigned key)
{
read_lock_bh(&__ip_vs_conntbl_lock_array[key&CT_LOCKARRAY_MASK].l);
@@ -142,7 +155,7 @@ static int ip_vs_conn_hash(struct ip_vs_
/* Hash by protocol, client address and port */
hash = ip_vs_conn_hashkey(cp->protocol, cp->caddr, cp->cport);
- ct_write_lock(hash);
+ ct_write_lock_irq(hash);
if (!(cp->flags & IP_VS_CONN_F_HASHED)) {
list_add(&cp->c_list, &ip_vs_conn_tab[hash]);
@@ -155,7 +168,7 @@ static int ip_vs_conn_hash(struct ip_vs_
ret = 0;
}
- ct_write_unlock(hash);
+ ct_write_unlock_irq(hash);
return ret;
}
@@ -172,7 +185,7 @@ static int ip_vs_conn_unhash(struct ip_v
/* unhash it and decrease its reference counter */
hash = ip_vs_conn_hashkey(cp->protocol, cp->caddr, cp->cport);
- ct_write_lock(hash);
+ ct_write_lock_irq(hash);
if (cp->flags & IP_VS_CONN_F_HASHED) {
list_del(&cp->c_list);
@@ -182,7 +195,7 @@ static int ip_vs_conn_unhash(struct ip_v
} else
ret = 0;
- ct_write_unlock(hash);
+ ct_write_unlock_irq(hash);
return ret;
}
> And secondly, if your box is primarily a Linux Director, it will
> probably perform slightly better with non-SMP than SMP.
Cheers mate,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc