Hi Kyle,
Kyle Sparger wrote:
>
> > How about have n ethernet cards in one machine, could that serve
> > the same purpose?
>
> Still very easily limited, and probably more resource intensive. In most
> PC computers, you'll start having problems once you hit 4 or more ethernet
> cards.
I don't think this is the whole truth. I just set up a box
with 2 Quadboards and 4 NICs, which makes 12 ports, and
they all work together. Since the load balancer's load is
so low and nobody has ever demonstrated a bottleneck, I
thought about having one box serve multiple net entities.
It works fine without a single problem.
> Sending over one ethernet card would be very simple, too. Just dump the
> information to a broad/multi-cast address, and you only have to send one
> packet, rather than 1 packet per destination.
>
> That's not great for security, I know, but it would reduce the time the
> primary LVS server spends on transmitting updates to 1/n of the amount of
> time that transmitting it to n servers would take. Considering that this
> is supposed to be a _HIGH_ volume, very scalable project, I think that the
> conclusion is simple to come to:
somebody really has to write the code and test it :)
I mean, a lot of commercial load-balancing products
use e.g. the parallel port as a syncing device. But
actually all commercial load-balancing products are
extremely limited. -> offtopic
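Just to make the multicast idea above concrete, here is a
minimal sketch of how a director could push one connection-table
update to all backup directors with a single UDP multicast
datagram. The group address, port, and message format are my
own assumptions for illustration; this is not what LVS actually
implements.

```python
import socket

SYNC_GROUP = "224.0.1.100"   # assumed multicast group, not LVS's real one
SYNC_PORT = 8848             # assumed port


def format_update(src: str, dst: str, state: str) -> bytes:
    """Encode one connection-table entry as a line of text
    (hypothetical wire format)."""
    return f"conn {src} -> {dst} {state}\n".encode()


def send_sync_update(payload: bytes) -> int:
    """Send one datagram to the multicast group. Every listening
    backup director receives it, so the cost stays one packet
    regardless of how many directors there are."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1 keeps the datagram on the local segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    try:
        return sock.sendto(payload, (SYNC_GROUP, SYNC_PORT))
    finally:
        sock.close()


if __name__ == "__main__":
    msg = format_update("10.0.0.5:1024", "192.168.1.2:80", "ESTABLISHED")
    send_sync_update(msg)
```

As the quoted mail notes, this trades some security (anyone on
the segment can listen) for a constant per-update send cost.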
> On one hand, you have no growth in transmission time whatsoever, on the
> other, you have linear growth. If you're getting 10,000 connections a
> second, with say, 4 potential directors, that's 10,000 updates versus
> 40,000. What if you want each server in the cluster to be a potential
> director, and you have 50 of them? Much pain, if you're not
> broad/multi-casting it.
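The arithmetic quoted above can be checked quickly: unicast
retransmits every update once per potential director, while
multicast sends it once.

```python
def unicast_updates(conn_rate, n_directors):
    # One copy of each update per potential director.
    return conn_rate * n_directors


def multicast_updates(conn_rate, n_directors):
    # One multicast packet reaches all directors.
    return conn_rate


assert unicast_updates(10_000, 4) == 40_000    # linear growth
assert multicast_updates(10_000, 4) == 10_000  # flat
assert unicast_updates(10_000, 50) == 500_000  # 50 directors: much pain
```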
Do you really plan such a huge setup, and can you
imagine an application handling all this? Somehow
I still don't like the idea of having more than 2
potential directors. In my opinion it doesn't make
much sense unless you have the following setup:
e.g. 7 directors arranged so that the front director
receives the request and load-balances it to the
next tier, where there are two directors. They
load-balance the request to the next 4 directors,
which finally load-balance it among the real servers.
Did anybody try out something like this?
But I still can't really see the advantage of such
a setup. I mean, if you have 4 directors and only
one actively handles requests while the others
wait for a failover, this is rather expensive.
And unless the LVS code becomes cluster-aware,
meaning all directors work on their share of the
incoming requests, this setup is thrown-away money.
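For what it's worth, the cascaded setup sketched above (1 front
director, then 2, then 4) can be written down as a small fan-out
calculation; the request rate is illustrative.

```python
# Cascaded directors as a binary tree: tier i holds 2**i directors.
def tier_sizes(levels):
    return [2 ** i for i in range(levels)]


sizes = tier_sizes(3)   # [1, 2, 4] -> 7 directors total
total = sum(sizes)

# If r requests/sec enter the front director and each tier
# round-robins evenly, each last-tier director forwards
# r / 4 requests/sec to the real servers.
r = 10_000
per_leaf = r // sizes[-1]
```

Note that the front director still sees all r requests/sec, which
is one way of stating the doubt above about the advantage of such
a cascade.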
I'm interested in what you think about this, but I'm
not sure if we should continue this discussion on
this list (the thread is getting a bit long).
Ok, my English is getting worse and worse; time to
get some sleep.
Best regards,
Roberto Nibali, ratz
--
mailto: `echo NrOatSz@xxxxxxxxx | sed 's/[NOSPAM]//g'`