[PATCH tcpsp]To avoid SMP race when rmmod

To: <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: [PATCH tcpsp]To avoid SMP race when rmmod
From: "yangrunhua" <yangrunhua@xxxxxxxxxxxx>
Date: Tue, 11 Nov 2003 19:33:53 +0800

The original is not SMP safe; it should be like this:
static void tcpsp_conn_flush(void)
{
        int idx;
        struct tcpsp_conn *cp;
        struct list_head *e, *nxt;

  flush_again:
        for (idx=0; idx<TCPSP_CONN_TAB_SIZE; idx++) {
                /*
                 *  Lock is actually needed in this loop.
                 */
                write_lock_bh(&tcpsp_conn_lock); /* must also disable BH on this CPU */

                list_for_each_safe (e, nxt, &tcpsp_conn_tab1[idx]) {
                        cp = list_entry(e, struct tcpsp_conn, f_list);
                        TCPSP_DBG(4, "del splicing connection\n");
                        if (del_timer(&cp->timer)) {
                                /*
                                 * Call del_timer under the protection of the
                                 * write lock of tcpsp_conn_lock, or cp may
                                 * already be freed.
                                 */
                                write_unlock_bh(&tcpsp_conn_lock);
                                tcpsp_conn_expire_now(cp);
                                /*
                                 * Do not continue the inner loop here, because
                                 * nxt may already be freed by TIMER_BH on the
                                 * other CPU.  Continuing at the next idx is
                                 * good, because it will not stick on this same
                                 * cp if its ref count is still over 1 under
                                 * high load.
                                 */
                                goto out_next;
                        }
                }
                write_unlock_bh(&tcpsp_conn_lock);
  out_next:
                continue;
        }

        /* the counter may not be zero, because some conn entries may still
           be run by the slow timer handler, or be unhashed but still referenced */
        if (atomic_read(&tcpsp_conn_count) != 0) {
                schedule();
                goto flush_again;
        }
}
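
For reference, here is a hypothetical sketch of the unsafe ordering that the
comments above warn about. The original tcpsp_conn_flush() is not quoted in
this mail, so this fragment is only a reconstruction of the two races from
those comments, not the actual old code:

        list_for_each_safe (e, nxt, &tcpsp_conn_tab1[idx]) {
                cp = list_entry(e, struct tcpsp_conn, f_list);
                /* UNSAFE: dropping the lock before del_timer lets the timer
                   fire on another CPU and free cp under us */
                write_unlock_bh(&tcpsp_conn_lock);
                if (del_timer(&cp->timer))
                        tcpsp_conn_expire_now(cp);
                write_lock_bh(&tcpsp_conn_lock);
                /* UNSAFE: while the lock was dropped, TIMER_BH on the other
                   CPU may have freed the entry nxt points to, so the next
                   iteration walks freed memory */
        }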


Regards,
        Runhua Yang

