---------- Forwarded message ----------
Date: Sat, 18 Mar 2000 22:19:57 +0800 (CST)
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
To: Lars Marowsky-Bree <lmb@xxxxxxx>
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: shared-state load balancing (was Re: random SYN-drop function)
On Thu, 16 Mar 2000, Lars Marowsky-Bree wrote:
> > I just thought of an idea for transferring the state table; it might
> > work well. We run a SendingState and a ReceivingState kernel_thread
> > (daemons inside the kernel, like kflushd and kswapd) on the primary
> > IPVS box and on the backup respectively. Every time the primary
> > handles packets, it puts the state change into a sending queue. The
> > SendingState kernel_thread wakes up every HZ or HZ/2 jiffies, sends
> > the changes in the queue to the ReceivingState kernel_thread in UDP
> > packets, and finally clears the queue. The ReceivingState thread
> > receives the packets and updates its own state table.
>
> This is good, but not good enough.
>
> You need to handle the special case when the other machine has just come up
> (full copy of the state table, and then parse all updates which happened since
> the start of the sync and the end of the initial transfer).
>
Sure, the bootstrap/startup of the backup has to be handled in a real
system. IMHO, parsing all the updates directly into a big table with
millions of entries is time-consuming; that's why I suggested using an
update queue.
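To make the update-queue idea concrete, here is a minimal userspace C
sketch of the sending side. All names (state_update, enqueue_update,
flush_queue, the record fields) are illustrative assumptions, not actual
IPVS code; a real implementation would run inside the kernel and hand
the buffer to a UDP socket.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical connection-state change record; the fields are
 * illustrative, not taken from the real IPVS connection table. */
struct state_update {
    uint32_t caddr, vaddr;      /* client and virtual addresses */
    uint16_t cport, vport;      /* client and virtual ports */
    uint8_t  state;             /* e.g. ESTABLISHED, FIN_WAIT */
};

#define QUEUE_MAX 256

static struct state_update queue[QUEUE_MAX];
static int queue_len = 0;

/* Called on the primary whenever a connection changes state. */
static int enqueue_update(const struct state_update *u)
{
    if (queue_len >= QUEUE_MAX)
        return -1;              /* queue full: drop, or flush early */
    queue[queue_len++] = *u;
    return 0;
}

/* What the SendingState thread would do on each wakeup: copy the
 * pending updates into a datagram buffer and clear the queue.
 * Returns the number of bytes that would go out in the UDP packet. */
static int flush_queue(char *buf, int buflen)
{
    int n = queue_len * (int)sizeof(struct state_update);
    if (n > buflen)
        n = (buflen / (int)sizeof(struct state_update))
            * (int)sizeof(struct state_update);
    memcpy(buf, queue, (size_t)n);
    queue_len = 0;              /* queue is cleared after sending */
    return n;
}
```

The point of the queue is that the primary only records deltas at
packet-handling time; the periodic flush amortizes the sending cost
instead of touching the whole state table.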
> And you don't want to implement this for LVS as a special case, but as a
> general case for the Netfilter framework, which should save a lot of duplicate
> work.
>
I just had the idea of flushing state changes to the backup with a
kernel_thread, the way kflushd works. I don't have plans to implement it
now. ;-) Anyone who is interested in it, please take some time to do
some investigation and coding too.
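For whoever picks this up, the receiving side could look roughly like
the sketch below. Again, state_update, apply_updates, and the toy hash
table are assumptions for illustration; the real ReceivingState thread
would insert into the same connection hash table the primary uses.

```c
#include <stdint.h>
#include <string.h>

/* Same illustrative record as on the sending side. */
struct state_update {
    uint32_t caddr, vaddr;
    uint16_t cport, vport;
    uint8_t  state;
};

#define TABLE_SIZE 64

/* Toy state table keyed by a trivial hash of the client address;
 * stands in for the backup's real connection table. */
static struct state_update table[TABLE_SIZE];
static int table_used[TABLE_SIZE];

/* What the ReceivingState thread would do with each UDP datagram:
 * walk the packed records and overwrite/insert matching entries. */
static void apply_updates(const char *buf, int len)
{
    int n = len / (int)sizeof(struct state_update);
    for (int i = 0; i < n; i++) {
        struct state_update u;
        memcpy(&u, buf + (size_t)i * sizeof u, sizeof u);
        int slot = (int)(u.caddr % TABLE_SIZE);   /* toy hash */
        table[slot] = u;
        table_used[slot] = 1;
    }
}
```

Since each record is a full (addresses, ports, state) tuple, applying a
datagram is idempotent, which helps when UDP delivers duplicates.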
Thanks,
Wensong