Re: [lvs-users] thoughts on making IPVS sync multicast address adjustable

To: Phillip Moore <pdm@xxxxxxxxx>
Subject: Re: [lvs-users] thoughts on making IPVS sync multicast address adjustable
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx, lvs-devel@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <ja@xxxxxx>
Date: Wed, 1 Jul 2015 10:33:44 +0300 (EEST)

On Tue, 30 Jun 2015, Phillip Moore wrote:

> Has any thought been given to making the multicast address the IPVS
> sync daemons use to sync state configurable?
> The current hard-coded address, 224.0.0.81, is in the link-local
> multicast subnet. Our hope is to have the IPVS directors distributed
> across multiple L3 networks, so the multicast traffic would have to
> be routed between them. This wouldn't be a problem except that this
> link-local subnet is treated specially by network vendors. Routing
> devices do not want to route it, and according to my network folks it
> would hit the control plane and cause excessive load on the routers,
> where an alternate multicast subnet would not.
> Looking at the code it would appear somewhat trivial to change this
> to a proc setting, but I do not know the context or history behind
> the choice to hard-code the address.
> Interestingly the IANA assignments document shows the .81 address as
> being in a reserved range.
> Our configuration idea is to have a small pool of directors all acting
> as master and backup (running both daemons). We will anycast the VIP
> addresses with BGP/exabgp so they live on all of the directors, which
> themselves sit on several subnets.
> I appreciate any thoughts or feedback on this idea or experiences with
> using the existing address at scale.

        I had planned a change to the sync code to make the message
size configurable, but was waiting for the merge window. I can try this
weekend to implement this feature as well: configurable IP, port and
TTL, with the current values as defaults. As Simon suggests, it may be
done with new attributes for the IPVS_CMD_ATTR_DAEMON message.
I can try to support IPv6 too.


Julian Anastasov <ja@xxxxxx>

Please read the documentation before posting.
mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
