Re: ideas about kernel masq table syncing ...

To: Ratz <ratz@xxxxxx>
Subject: Re: ideas about kernel masq table syncing ...
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Kyle Sparger <ksparger@xxxxxxxxxxxxxxxxxxxx>
Date: Wed, 9 Aug 2000 09:02:07 -0400 (EDT)
> I don't think this is the whole truth.

No, it's not.  If you have a box that either listens to you, or has a
clever BIOS, then you can easily fill up every PCI slot with a NIC.  But
if you pull a $100 Gigabyte board off the shelf, then you'll start hitting
the limit at around 4.  It's especially fun when the BIOS tries to share a
NIC's IRQ with your SCSI adapter's, but that's a story that probably
doesn't need to be told here. :)

> somebody really has to write the code and test it :)

Okay, I'll agree that it has to be proven before you know it for a fact,
but still, some things should just be common sense -- on a network,
sending one multicast packet to N nodes is faster than sending N unicast
packets to N nodes.  And if it's not, you've got problems you should
probably fix ;)

> I mean, a lot of commercial load-balancing products
> use, e.g., the parallel port as a syncing device.

True, but that doesn't mean it's better.  Use a point-to-point setup, and
you've limited yourself to two nodes.  I just don't see what the advantage
of doing that is.

> Do you really plan such a huge setup, and could you
> imagine an application handling all this stuff?

No, I really don't.  Not today, not tomorrow, but, like I said before:
Why limit yourself?  What is so advantageous about using a parallel port
that it's worth limiting yourself to two directors?

> 7 directors arranged so that the front director gets the request and
> load-balances it to the next tier, where there are two directors.  They
> load-balance the request to the next 4 directors, which finally
> load-balance it among the real servers.

> But I still can't really see the advantage of such
> a setup.

The only situation where I can see that being advantageous is if the very
first one uses Direct Routing, then the next ones use one of the slower
options (NAT/Tunnelling).  But that's not the application I'm thinking
about anyway.

Imagine you have a 5 server web farm.  At any given time, one of these is
a director only, not a web server.  It receives incoming traffic, and then
redirects it to the other 4 web servers.  It also notifies the other 4
web servers of what it's up to.  If the current director fails, any of the
given four is ready to pick up the slack, and if that one fails, any of
the remaining three can pick up the slack, and so on down the line.
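That takeover rule can be sketched in a few lines of Python.  All the names here are hypothetical; the idea is just that every node carries the same fixed priority list, and whichever live node ranks highest promotes itself when the current director stops heartbeating.

```python
def next_director(priority, alive):
    """Return the highest-priority node still alive, or None if the
    whole farm is down.  `priority` is the shared, fixed ordering
    every node agrees on; `alive` is the set of nodes still
    heartbeating."""
    for node in priority:
        if node in alive:
            return node
    return None

# 5-node farm: one dedicated director, four web servers behind it.
FARM = ["director", "web1", "web2", "web3", "web4"]
```

Because every node runs the same deterministic rule over the same list, they all agree on who takes over without any extra election traffic -- losing the director, then the first web server, still leaves the remaining three able to pick a unique successor.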

The main reason I'm advocating ethernet multicast is that you have
the _capability_ of using many directors, if for some reason you have
the need -- and it should be more efficient than point-to-point in
N-director situations.
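The receiving side of that capability is just a group join -- each additional director subscribes to the same group, so adding listeners costs the sender nothing.  A sketch (Python; the group and port are again just illustrative, not the kernel's values):

```python
import socket
import struct

SYNC_GROUP = "224.0.0.81"   # illustrative multicast group
SYNC_PORT = 8848            # illustrative port

def make_sync_listener():
    """Open a UDP socket and join the sync group on all interfaces.
    Every director in the farm calls this; the master's single
    datagram then reaches all of them."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", SYNC_PORT))
    # struct ip_mreq: multicast group address + local interface
    # (0.0.0.0 means "let the kernel pick").
    mreq = struct.pack("4s4s", socket.inet_aton(SYNC_GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

Compare that to point-to-point: there, every new backup means a new link (or serial/parallel cable) and a new copy of every sync message at the master.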

Again, it all goes back to the statement:  Why limit yourself?  That's all
I'm asking.  What's the advantage of point-to-point, such that it's worth
limiting what you can do with the software in the future? :)  Is it easier
to implement?  Is it technically superior in some way I'm missing?  Do you
just like it better?  You're coding this, not me -- so any of those is a
100% legitimate answer :)

> I'm interested in what you think about, but I'm not
> sure if we should continue this discussion in
> this list (thread is getting a bit long)

I don't know, if I were listening in, I'd find this thread interesting,
but, if anyone wants me to stop, let me know -- I'll take it off-list to
direct emails immediately.

Thanks,

Kyle Sparger - Senior System Administrator
Dialtone Internet - Extremely Fast Web Systems
(954) 581-0097 - Voice (954) 581-7629 - Fax
ksparger@xxxxxxxxxxxxxxxxxxxx
http://www.dialtoneinternet.net


