Re: [lvs-users] SYN spiraling between master and slave IPVS balancers

To: Dmitry Akindinov <dimak@xxxxxxxxxxx>
Subject: Re: [lvs-users] SYN spiraling between master and slave IPVS balancers
Cc: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Tue, 5 Mar 2013 08:41:19 +0200 (EET)
        Hello,

On Tue, 5 Mar 2013, Dmitry Akindinov wrote:

> On 2013-03-05 00:39, Julian Anastasov wrote:
> > 
> >     Hello,
> > 
> > On Mon, 4 Mar 2013, Dmitry Akindinov wrote:
> > 
> > > > http://www.ssi.bg/~ja/tmp/0001-ipvs-add-backup_only-flag-to-avoid-loops.txt
> > > 
> > > Thank you very much!
> > > 
> > > It's been about ten days since we started using kernel 3.8.0-28
> > > (now 3.8.1-30) with your patch applied on our servers. So far, there
> > > are no signs of SYN spiraling or other problems with using ipvs
> > > balancer nodes on our multiprotocol real servers. It would be very
> > > nice to have this feature in the standard kernels.
> > 
> >     Yes, I'll do it in the next days.
> > 
> >     So, can I add your Tested-by in commit message?
> 
> It would be more appropriate to credit German Myzovsky <lawyer@xxxxxxxxx>
> as the tester. I'm just the coordinator.

        ok

> > And I assume your tests were with backup_only=1, right?
> 
> Yes, slave balancers set this flag to 1. In the case of a cluster fail-over
> the newly elected master balancer clears that flag.

        Very good. Note that this flag is ignored if
the backup function is stopped, so there is no need
to clear the flag when switching to master mode.
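
For readers following the thread, the setup discussed above can be
sketched roughly as follows. This is a sketch only, not a command
sequence from the thread; the interface name and syncid are example
values, and the fail-over steps assume a typical ipvsadm master/backup
sync-daemon arrangement.

```shell
# On each slave (backup) balancer: set the backup_only flag so the
# backup does not re-balance packets the master already handled,
# which is what caused the SYN spiraling.
sysctl -w net.ipv4.vs.backup_only=1

# Run the connection-sync backup daemon on the slave
# (eth0 and syncid 1 are example values).
ipvsadm --start-daemon backup --mcast-interface eth0 --syncid 1

# On cluster fail-over, the newly elected master stops the backup
# daemon and starts the master daemon. Per Julian's note, backup_only
# is ignored once the backup function is stopped, so clearing the
# sysctl at this point is optional.
ipvsadm --stop-daemon backup
ipvsadm --start-daemon master --mcast-interface eth0 --syncid 1
```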

Regards

--
Julian Anastasov <ja@xxxxxx>

_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
