Re: ideas about kernel masq table syncing ...

To: Kyle Sparger <ksparger@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: ideas about kernel masq table syncing ...
Cc: "lvs-users@xxxxxxxxxxxxxxxxxxxxxx" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Ratz <ratz@xxxxxx>
Date: Wed, 09 Aug 2000 20:57:45 +0200
Hi Kyle,

Kyle Sparger wrote:
> 
> > I don't think this is the whole truth.
> 
> No, it's not.  If you have box that either listens to you, or has a
> clever BIOS, then you can easily fill up every PCI slot with a NIC.  But,
> if you pull a $100 gigabyte off the shelf, then you'll start hitting the
> limit at around 4.  It's especially fun when the BIOS tries to share a
> NIC's IRQ with your SCSI adapter's, but that's a story that probably
> doesn't need to be told here. :)

Buying an el-cheapo motherboard while running SCSI disks is,
IMHO, saving money in the wrong place. And in every decent
BIOS you can turn SCSI off. If you really need SCSI, you
don't care about a cheap motherboard, so this limit is
rather virtual :)
 
> > somebody really has to write to code and test it :)
> 
> Okay, I'll agree that it has to be proven before you know for a fact, but
> still, some things should just be common sense -- on a network, sending
> one packet multicast to N nodes is faster than sending N packets unicast
> to N nodes.  And if it's not, you've got problems you should probably fix
> ;)

Oh, definitely, but handling the synchronization in such an
approach is rather difficult, though solvable.
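To make the "sync the masq table over the wire" idea concrete, here is a minimal userspace sketch of how one connection entry could be serialized into a datagram. The field layout and names are purely illustrative assumptions, not the actual kernel structures or any real sync protocol:

```python
import socket
import struct

# Hypothetical wire format for one masq/connection table entry:
# protocol (1 byte), state (1 byte), client/virtual/dest ports
# (2 bytes each, network byte order), then client/virtual/dest
# IPv4 addresses (4 bytes each). 20 bytes per entry total.
ENTRY_FMT = "!BBHHH4s4s4s"

def pack_entry(proto, state, cport, vport, dport, caddr, vaddr, daddr):
    """Serialize one connection entry for a sync datagram."""
    return struct.pack(ENTRY_FMT, proto, state, cport, vport, dport,
                       socket.inet_aton(caddr),
                       socket.inet_aton(vaddr),
                       socket.inet_aton(daddr))

def unpack_entry(buf):
    """Deserialize an entry on the receiving director."""
    proto, state, cport, vport, dport, c, v, d = struct.unpack(ENTRY_FMT, buf)
    return (proto, state, cport, vport, dport,
            socket.inet_ntoa(c), socket.inet_ntoa(v), socket.inet_ntoa(d))
```

Many such entries could be batched into a single multicast datagram, which is exactly where the "one packet to N nodes" advantage shows up.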
 
> > I mean, a lot of commercial loadbalancing products
> > use f.e. the parallel port as a syncing device.
> 
> True, but that doesn't mean it's better.  Use a point-to-point setup, and

Absolutely correct, only managers tend to argue like that.

> you've limited yourself to two nodes.  I just don't see what the advantage
> of doing that is.

This is not an advantage, it's in my opinion a practical
approach (although I'm not so sure anymore :). I just
can't see the reason for having 6 directors, 1 actively
balancing and the rest waiting for a failover, because
in this case Murphy enters: your setup will never fail,
precisely because failing would not hurt. If, however,
you ran only a single director node, you could be sure
it would crash.

> > Do you really plan such a huge setup and could you
> > imaging an application handling all this stuff?
> 
> No, I really don't.  Not today, not tomorrow, but, like I said before:
> Why limit yourself?  What is so advantageous about using a parallel port
> that it's worth limiting yourself to two directors?

I don't want to limit anything; in my design it would
certainly be possible to configure the sync method you
want, let's say through some /proc entries.
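The idea of selecting the sync transport at runtime can be sketched as a small registry keyed by name, the way writing a value into a /proc entry would select a backend. The method names and /proc path are illustrative assumptions, not real LVS knobs:

```python
# Hypothetical registry of sync transports. Writing e.g.
# "multicast" into a /proc entry would pick one of these.
SYNC_METHODS = {}

def register(name):
    """Decorator that registers a sync backend under a name."""
    def deco(fn):
        SYNC_METHODS[name] = fn
        return fn
    return deco

@register("multicast")
def sync_multicast(entries):
    # One datagram reaches every subscribed director.
    return f"1 multicast datagram carrying {len(entries)} entries"

@register("unicast")
def sync_unicast(entries, peers=2):
    # One datagram per peer director.
    return f"{peers} unicast datagrams carrying {len(entries)} entries each"

def sync(method, entries):
    """Dispatch to whichever method the admin configured."""
    return SYNC_METHODS[method](entries)
```

The point is only that the transport choice stays a one-line configuration decision rather than something baked into the design.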
 
> > But I still can't really see the advantage of such
> > a setup.
> 
> The only situation where I can see that being advantageous is if the very
> first one uses Direct Routing, then the next ones use one of the slower
> options (NAT/Tunnelling).  But that's not the application I'm thinking
> about anyway.
> 
> Imagine you have a 5 server web farm.  At any given time, one of these is
> a director only, not a web server.  It receives incoming traffic, and then
> redirects it to the other 4 web servers.  It also notifies the other 4
> web servers of what it's up to.  If the current director fails, any of the
> given four is ready to pick up the slack, and if that one fails, any of
> the remaining three can pick up the slack, and so on down the line.

Wow, amazing! But really, really hard to achieve. A lot of
intelligence has to be programmed. We could combine it with
fuzzy logic. Dreams, but probably ...

> The main reason I'm advocating the ethernet/multicast is because you have
> the _capability_ of using many directors, if for some reason you have
> the need -- and it should be more efficient than point to point in N
> director situations.

But you still have the problem that one director must be
the chief that tells the others the current state if you
want to sync.
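Electing that chief does not have to involve much negotiation, though. One common trick (an illustrative rule, not what any particular product does) is for every director to apply the same deterministic rule to the same membership view, e.g. "the lowest node id still alive is the chief":

```python
def elect_chief(alive_ids):
    """Deterministically pick the sync chief: the lowest node id
    still alive. Every director applies the same rule to the same
    membership view, so they all agree without extra messages."""
    if not alive_ids:
        raise RuntimeError("no director left alive")
    return min(alive_ids)

def failover_sequence(all_ids, deaths):
    """Show how the chief role walks down the line as nodes die,
    matching the 5-server farm scenario described above."""
    alive = set(all_ids)
    chiefs = [elect_chief(alive)]
    for node in deaths:
        alive.discard(node)
        chiefs.append(elect_chief(alive))
    return chiefs
```

With five directors numbered 1..5, node 1 starts as chief; when it dies, node 2 takes over, and so on down the line. The hard part in practice is agreeing on the membership view itself, not the election rule.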
 
> Again, it all goes back to the statement:  Why limit yourself?  That's all
> I'm asking.  What's the advantage of point to point, such that it's worth
> limiting what you can do with the software in the future? :)  Is it easier
> to implement?  Is it technically superior in some way I'm missing?  Do you
> just like it better?  You're coding this, not me -- so any of those is a
> 100% legitimate answer :)

I completely agree with you, limiting is totally wrong
and also against the Linux philosophy. I was probably
not expressing myself correctly in my previous postings.
I'm fully convinced by the ethernet/multicast solution;
I just wanted to know some good reasons why someone
would prefer it over another solution, and thank you
for your very good comments on this.
 
> > I'm interested in what you think about, but I'm not
> > sure if we should continue this discussion in
> > this list (thread is getting a bit long)
> I don't know, if I were listening in, I'd find this thread interesting,
> but, if anyone wants me to stop, let me know -- I'll take it off-list to
> direct emails immediately.

I think as long as we focus on the subjects of load
balancing and clustering in an HA environment, this could
be of interest to everyone. But I don't mind switching
over to direct emails either.

thank you and best regards,
Roberto Nibali, ratz 

-- 
mailto: `echo NrOatSz@xxxxxxxxx | sed 's/[NOSPAM]//g'`

