Thomas Proell wrote:
>
> On Thu, 2 Nov 2000, Wensong Zhang wrote:
>
> > Have you thought about the algorithm that I posted? It aims for both
>
> Yes. We both map IP-addresses into an array by using a hash-function.
> The main difference between our schedulers is, what we write into
> the array.
>
> You write the address of the least loaded cache, e.g. you associate
> the address with the least connected cache.
>
> In my implementation, the entry will depend only on the result of
> the hashing of the ip-address.
>
> In the beginning, your idea seems to be the better, because it shows
> a much better load distribution. But your solution is unusable in
> the environment of my scheduler.
> There, you'll have a redirector in Paris, one in Cannes and at
> least two others. They are using the same caches. The client
> population is huge, which is very important for the performance
> of the caching system.
> Assume the redirectors have been running for a week now. What will
> happen when people request the same new page from all 4 redirectors?
>
> Each redirector sends the request to the cache that is least
> loaded from its own point of view.
> So the locality is very poor when using several redirectors.
>
> The next problem is that you won't have "new" addresses after a
> few weeks. When I used a 65536-slot hashtable, half of the
> slots were used after one day. Let it work with a large client
> population and you'll soon have every slot != NULL.
> So new incoming requests are not sent to the least loaded cache,
> as the original idea was, because they'll be mapped into a
> hashtable slot that is already used by another address (or, as
> I experienced, used by 5 other addresses). You'll have something
> like static routing then.
> Only if the cache is very loaded do you use another cache -- I
> was calling this the "hot spot solution".
> I'm only distributing the hot spots among the least loaded caches;
> you'll distribute all of them.
>
> So, to get "new" addresses even after months, you'll need
> an aging function that treats hashtable entries that weren't used
> for some time as "unused" = NULL.
Julian, Joseph Mack and I ran through this discussion in gruesome
detail off-list, with me trying to prove that any kind of memory-based
persistence selection would fail in a highly loaded web cache
environment in a very short time. I don't think I ever convinced Julian
that this isn't the same as static routes, or that a new selection
algorithm was needed, but I'll be happy to collect all of those mails
and send them to anyone who is interested.
The fact is, as Thomas has tried to explain, that the best solution to
the web cache balancing problem _is_ a hash based selection algorithm.
Because of the unique load presented by web caches, nothing else makes
sense.
Everyone needs to realize that a large web cache cluster will see
millions of unique websites every single day! Even small (tiny) web
caches go through 100,000 unique URLs each day...that's an awful lot of
IPs to keep up with to maintain persistence. And it also doesn't
address the problem Thomas mentioned of multiple caches being
balanced by different balancers (good point, Thomas). Finally, some
objects remain in the cache for weeks, while others only for days. The
ones that are there for only a few days can probably be kept up with,
assuming your balancer has enough memory and horsepower to maintain
several million entries in the persistence pool; but the bread and
butter of web caches...the navigation images that never change and keep
getting served from the web cache...will end up spread across all
the caches, wasting that much bandwidth needlessly.
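The multiple-balancer point can be made concrete: when selection depends only on the hash of the address and the shared cache list, balancers that share no state still agree on which cache owns a given address. A minimal sketch (the hash constant and cache names are invented for the example):

```python
def pick_cache(ip, caches):
    """Stateless hash selection: the choice depends only on the
    address and the (ordered) cache list, so independent balancers
    -- one in Paris, one in Cannes -- reach the same answer."""
    # Simple multiplicative hash over the dotted-quad octets.
    h = 0
    for octet in ip.split("."):
        h = (h * 2654435761 + int(octet)) & 0xFFFFFFFF
    return caches[h % len(caches)]
```

A least-connections scheme can't do this, because each balancer's connection counts are local state the others never see.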
> So, I don't see disadvantages of my scheduler (with hot spot solution).
I agree, Thomas. Web caches are a very different animal, and they need
a different solution than other types of servers do.
> > 1. I don't think the algorithm is good. :)
>
> I do :-)
I agree.
> > 2. The scheduler can only work for one service. Schedulers are usually
> > bound to several services.
>
> I don't understand that very well. Can you explain?
I think Wensong wants the same scheduler to be able to balance multiple
services. That just won't do for a web cache balancer... It simply must
be a hash based selection. I can't think of any other reasonable way to
do it.
> > 3. Hash method is not good.
>
> Like 1?
I think he means the actual hash. I don't know enough about hashes to
counter. If that's a problem, it is easily solved...there are about a
million good hashes out there in the public domain.
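As one example of a well-known public-domain hash that could replace the scheduler's current one, here is 32-bit FNV-1a applied to the four address octets. The `slot_for` helper and its 65536-slot default are just for illustration.

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a: a simple, well-tested public-domain hash."""
    h = 0x811C9DC5                      # FNV offset basis
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF   # FNV prime, mod 2^32
    return h

def slot_for(ip: str, slots: int = 65536) -> int:
    """Map a dotted-quad address to a hashtable slot (hypothetical helper)."""
    return fnv1a_32(bytes(int(o) for o in ip.split("."))) % slots
```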
> > 4. code is not neat. :)
>
> THAT is true. But I don't want to waste energy on a project that
> people don't want. So I released it to start a discussion and to see
> from the feedback whether it's worth improving. Moreover, I'll need
> some help with some details.
Do feel free to waste energy on it, Thomas. Hopefully, Wensong will see
the reasons we are arguing in favor of such a selection method...and
will be willing to help make it happen in the official LVS.
My .02
--
Joe Cooper <joe@xxxxxxxxxxxxx>
Affordable Web Caching Proxy Appliances
http://www.swelltech.com