Re: Release new code: Scheduler for distributed caching

To: Joe Cooper <joe@xxxxxxxxxxxxx>
Subject: Re: Release new code: Scheduler for distributed caching
Cc: Julian Anastasov <ja@xxxxxx>, "lvs-users@xxxxxxxxxxxxxxxxxxxxxx" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Thomas Proell <Thomas.Proell@xxxxxxxxxx>
Date: Wed, 25 Oct 2000 12:05:23 +0200 (MET DST)
Hi!

> >         What weighting, we have constant mapping, i.e. packets to
> > one destination are always routed to specific proxy server. This is
> > covered by the rule "the traffic to one dest will be cached only in one
> > proxy server". No way for load balancing, no way for weights. Of course,
> 
> You're missing the point.  You _are_ load balancing with this
> algorithm.  Just because the same sites always get accessed through the
> same servers doesn't mean we aren't going to see 'balanced' usage on
> each cache, which is our goal.  

Right. If you have n caches, you break the Internet into n pieces,
so that every cache has to deal with only (1/n)th of the Internet. And
if the pieces have the same size - it's balanced :-)
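
Just to make the idea concrete, here is a minimal sketch in C of that
kind of static partitioning; the cluster size and the hash function
are made up, the real scheduler's hash may differ:

    /* Map a destination IP to one of n caches by hashing, so each
     * cache handles a fixed (1/n)th of the Internet.  N_CACHES and
     * the multiplicative hash are purely illustrative. */
    #include <stdint.h>

    #define N_CACHES 4U  /* hypothetical cluster size */

    static unsigned int cache_for_dest(uint32_t dest_ip)
    {
        return (dest_ip * 2654435761U) % N_CACHES;
    }

The same destination always hashes to the same cache, which is exactly
what keeps each object on one disk only.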

> times.  We do _not_ have a choice in the matter.  When a cache fails
> we're going to lose some data...unavoidable.  It would be far stupider
> to accept reduced hit ratio, reduced throughput, and wasted disk space,
> by using a 'balanced' cluster in the sense LVS traditionally balances
> (round robin).

What do you really mean by "balanced"? That a static distribution
can't be truly "balanced" because traffic changes? Then wait for
my hot-spot solution. With that, the vast majority of packets are still
routed by the consistent hashing; only the peaks are replicated on the
least loaded caches.
So you get 99% static routing with all its advantages, and only the
few huge parts are distributed dynamically, which will give you a
perfect (?) load balance.
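
Roughly, and only as a sketch of the idea (the hot-spot code isn't
released yet, so the threshold, the statistics hook and the load
monitor below are invented):

    /* Route by the static hash as usual, but if a destination gets
     * hot, hand its requests to the currently least loaded cache.
     * All three helpers are assumed, not part of the real scheduler. */
    #include <stdint.h>

    #define HOT_THRESHOLD 1000U  /* hypothetical requests per interval */

    extern unsigned int reqs_per_interval(uint32_t dest_ip);
    extern unsigned int least_loaded_cache(void);
    extern unsigned int hash_cache(uint32_t dest_ip);

    static unsigned int select_cache(uint32_t dest_ip)
    {
        if (reqs_per_interval(dest_ip) > HOT_THRESHOLD)
            return least_loaded_cache();  /* replicate the peak */
        return hash_cache(dest_ip);       /* the static 99% */
    }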

BTW - I was thinking a lot about load balancing, and you should
remember that a scheduler can only balance the requests, not really
the load. 100 requests for the Linux kernel will put more load on a
cache than 1000 requests for a thumbnail. And the scheduler will think
that the more loaded cache is less loaded :-(
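
To put rough numbers on it (sizes picked only for illustration):
100 requests for a ~20 MB kernel tarball move about 2 GB, while
1000 requests for a 5 KB thumbnail move about 5 MB - so the cache
serving a tenth of the requests does roughly 400 times the work.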

So - 100% load balance can't be done that way. We have to implement
an "acceptable" load balance; we can't achieve more :-(

> >         I don't expect equal load balancing for such setups but who
> > knows, we can be surprised in production - we have so many destinations
> > compared to the number of proxy servers :)
> 
> The large number of L4 switches and WCCP routers that accomplish load
> balancing of web caches using a hash table for cache selection (with
> weights) that work _very_ well is proof enough for me that it does
> balance equally, or pretty darn close. 

Depends. In fact, I was a bit disappointed when I saw the results without
weighting and hot spots. But this may depend on the workload used:
mine was the traffic of a single company, which gives a smaller
diversity of requested pages than you'll get in a real deployment.
At least, I hope so!

> needed.  (I should point out that none of the L4 or WCCP routers has any
> features to deal with such things, so I don't think it is a common
> concern.)

Mine will have one :-)

> >         The other disadvantage when using LVS is that we waste memory
> > for connection tracking when it is not needed: LVS does not use different
> > real server for same destination (web server) in these setups with
> > transparent proxy servers.
> 
> True.  Can a NetFilter module do the same thing (hash based balancing)
> without that connection tracking?

Wait, wait. What would be the alternative? Sending every (ACK) packet
through the scheduler, running a CRC16 hashing algorithm and looking
the result up in a 65537-entry table.
I don't know how well implemented the connection tracking is, but I
don't think there's much difference here.
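
For comparison, the per-packet work on the hashing path is roughly the
following (a sketch only; the real table layout and CRC routine may
differ):

    /* Hash the destination address with CRC16 and use the result to
     * index a 65537-entry table (2^16 + 1, a prime) of real servers.
     * Every 16-bit CRC value is a valid index into that table.
     * crc16() and the table contents are assumed here. */
    #include <stdint.h>

    #define TABLE_SIZE 65537U  /* 2^16 + 1 */

    extern uint16_t crc16(const void *buf, unsigned int len);

    struct dest;  /* real-server descriptor, opaque here */
    static struct dest *lookup_table[TABLE_SIZE];

    static struct dest *select_server(uint32_t dest_ip)
    {
        return lookup_table[crc16(&dest_ip, sizeof(dest_ip))];
    }

One hash and one array lookup per packet; the question is only whether
that beats a connection-tracking hash lookup, and I doubt the
difference is big.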

> >         May be I'm wrong but IMO this new scheduler breaks the load
> > balancing and gives you static mapping. It seems that the advantage is
> > for the users with traffic limits. Of course, this scheduler can be
> > useful when the proxy servers are not loaded from requests because
> > this kind of round-robin is dangerous for busy clusters. But we can
> > see the results only in production (*).
> 
> I think you're underestimating the diversity of addresses being visited
> on a busy web cache cluster, Julian.  We're talking about 1000
> requests/second and up.  That's a lot of different websites, and
> balancing will be very effective with a hashed selection.

In fact, it'll soon be running on a multi-Gbps fiber network here
in France, connecting three universities and several companies.

The problem with caches used to be the small client population. Only a
few people were using the same cache, so the probability of requesting
an already-requested site wasn't very high.
A bigger client population, however, resulted in an overloaded cache.

With distributed caching on a high-speed network, the cache isn't
overloaded and CAN have a big client population. With that you can
achieve good hit rates and a very big diversity of visited addresses,
which will do the balancing. The rest will be done by the hot-spot
solution.

But (*) you're right, it's an experiment at the moment.

> >         Now you guys have to test the new scheduler and to show some
> > comparisons :) How the real servers are loaded, how the external traffic
> > is reduced, you know how to do it :)
> 
> Will do.  I've got some less busy time (I've never got 'free' time ;-)
> over the next couple of weeks, so I'll see if I can get a cluster
> running with Thomas' scheduler.

Unfortunately I'm not allowed to talk about my test results :-(
But I wouldn't proudly put my name in the header if I thought
it was bullshit!

:-)


Thomas


