Julian Anastasov wrote:
>
> Hello,
>
> On Tue, 24 Oct 2000, Joe Cooper wrote:
>
> > Julian Anastasov wrote:
> >
> > While I haven't tried this (or Thomas' code), this leaves the question
> > of weighting completely unanswered. The hash based scheduler written by
> > Thomas also doesn't have weighting...but it should be a relatively
> > simple matter to add it. I don't see that simple routing lends itself
> > very well to the kind of flexibility that a hash-based scheduler can
> > provide. But I may be wrong.
>
> What weighting? We have a constant mapping, i.e. packets to
> one destination are always routed to a specific proxy server. This is
> covered by the rule "the traffic to one dest will be cached only in one
> proxy server". No way for load balancing, no way for weights. Of course,
You're missing the point. You _are_ load balancing with this
algorithm. Just because the same sites are always accessed through the
same servers doesn't mean we aren't going to see 'balanced' usage on
each cache, which is our goal. Getting the same data on to and off of
every cache is not balancing; when we're talking about web caches, it's
wasted redundancy and wasted bandwidth.
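To make this concrete, here's a minimal sketch of the principle (my own
illustration, not Thomas' actual scheduler code; the function names are
mine and the multiplier is just Knuth's multiplicative-hash constant):

/* Destination-hash cache selection: the cache is a pure function of
 * the destination IP, so one site always hits the same cache while
 * different sites spread across all the caches. */
#include <stdio.h>
#include <stdint.h>

#define NCACHES 4

static unsigned pick_cache(uint32_t dest_ip)
{
        uint32_t h = dest_ip * 2654435761U;     /* multiplicative hash */

        return (h >> 16) % NCACHES;
}

int main(void)
{
        uint32_t site = 0xc0a80a01;     /* made-up dest 192.168.10.1 */

        /* Every request for this site selects the same cache... */
        printf("192.168.10.1 -> cache %u\n", pick_cache(site));
        printf("192.168.10.1 -> cache %u\n", pick_cache(site));
        /* ...while other sites hash independently of it. */
        printf("192.168.10.2 -> cache %u\n", pick_cache(site + 1));
        return 0;
}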
> you can add weighting to the proxy server selection, i.e. something
> like supporting routes with different weights. But if you change the
> weights you break the rule. For example, in my proposed setup you
> can assign weights by pointing more than one fwmark
> at one proxy, i.e.:
>
> fwmark 1 -> squid1
> fwmark 2 -> squid2
> fwmark 3 -> squid3
> fwmark 4 -> squid4
> fwmark 5 -> squid1
> fwmark 6 -> squid2
> fwmark 7 -> squid1 *
> fwmark 8 -> squid2 *
>
> But I agree, this is ugly.
Yep. Pretty ugly. And it gets uglier when you begin to think of web
caches that all have different capacities (throughput and storage). For
example, if someone has a cache online that can provide 180 reqs/sec and
another that supports 110, and adds two more that support 140 and
260...things get really complicated. Get out the calculator and
start writing a whole mess of rules. This isn't a contrived case; I know
of several people running cache clusters with similarly diverse system
capacities, and modern L4 switches for balancing caches can deal with
it.
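The point is that weights fold into a hash scheduler cleanly, with no
mess of rules: give each cache a share of a bucket table proportional to
its capacity. A sketch (my own illustration; the weights below are the
made-up capacities from above, and build_table/pick_cache are my names):

/* Weight-proportional bucket table: each cache owns a share of the
 * 256 buckets matching its capacity, so the 260 reqs/sec box serves
 * roughly 2.4 times the destinations of the 110 reqs/sec box. */
#include <stdio.h>
#include <stdint.h>

#define NBUCKETS 256

static const unsigned weight[] = { 180, 110, 140, 260 };   /* reqs/sec */
#define NCACHES (sizeof(weight) / sizeof(weight[0]))

static unsigned char bucket[NBUCKETS];

static void build_table(void)
{
        unsigned total = 0, want, i, b = 0;

        for (i = 0; i < NCACHES; i++)
                total += weight[i];
        for (i = 0; i < NCACHES; i++) {
                /* cache i gets about weight[i]/total of the buckets */
                want = (weight[i] * NBUCKETS + total / 2) / total;
                while (want > 0 && b < NBUCKETS) {
                        bucket[b++] = (unsigned char)i;
                        want--;
                }
        }
        while (b < NBUCKETS)            /* rounding leftovers, if any */
                bucket[b++] = NCACHES - 1;
}

static unsigned pick_cache(uint32_t dest_ip)
{
        uint32_t h = dest_ip * 2654435761U;

        return bucket[(h >> 16) % NBUCKETS];
}

int main(void)
{
        unsigned count[NCACHES] = { 0 };
        unsigned i;

        build_table();
        for (i = 0; i < NBUCKETS; i++)
                count[bucket[i]]++;
        for (i = 0; i < NCACHES; i++)
                printf("squid%u: %u of %d buckets\n", i + 1,
                       count[i], NBUCKETS);
        printf("192.168.10.1 -> squid%u\n", pick_cache(0xc0a80a01U) + 1);
        return 0;
}

And bumping a weight up or down then moves, roughly speaking, only the
buckets near the boundaries between neighboring caches instead of
reshuffling the whole mapping.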
> The problem in all these setups is that the above rule
> breaks the load balancing. When one proxy server is replaced we
> are going to fetch every request from the remote servers. We can't
> have both load balancing and static mapping at the same time. If
> you start to change the weights in LVS the content will be spread
> across all proxy servers. This breaks your rule (the only advantage
> of using the new scheduler).
It doesn't break load balancing to be able to shift weights in order to
take one cache out of service or put a new one in. It will lead to some
repeated data and a few extra misses while the departing cache's share
of the content refills elsewhere, but that has to be done. And changing
weights or caches will only happen when one or more caches fail or need
to be upgraded, or a new cache needs to be added. That only happens once
or twice a year, if that often. We can accept having a little repeated
data those two times. We do _not_ have a choice in the matter. When a
cache fails we're going to lose some data...unavoidable. It would be far
stupider to accept reduced hit ratio, reduced throughput, and wasted
disk space by using a 'balanced' cluster in the sense LVS traditionally
balances (round robin).
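To put a rough number on 'a little repeated data' (my own
back-of-the-envelope, not a measurement): with a bucket table, pulling
one of four equal caches only forces that cache's quarter of the buckets
to move, while a naive hash-mod-N over the survivors would remap about
three quarters of all destinations. A quick check:

/* Fraction of destinations that change cache when one of four
 * equal-weight caches is removed, under two selection schemes. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define NKEYS 100000

static uint32_t hash(uint32_t ip)
{
        return (ip * 2654435761U) >> 16;
}

int main(void)
{
        unsigned long mod_moved = 0, tbl_moved = 0;
        long i;

        srand(1);
        for (i = 0; i < NKEYS; i++) {
                uint32_t ip = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
                uint32_t h = hash(ip);

                /* Scheme 1: cache = hash mod N, with N going 4 -> 3;
                 * a destination moves whenever the two disagree. */
                if (h % 4 != h % 3)
                        mod_moved++;

                /* Scheme 2: 256 buckets dealt round-robin to 4 caches;
                 * only the dead cache's buckets get new owners. */
                if ((h % 256) % 4 == 3)
                        tbl_moved++;
        }
        printf("hash mod N  : %.0f%% of destinations remapped\n",
               100.0 * mod_moved / NKEYS);
        printf("bucket table: %.0f%% of destinations remapped\n",
               100.0 * tbl_moved / NKEYS);
        return 0;
}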
> I don't expect equal load balancing for such setups but who
> knows, we can be surprised in production - we have so many destinations
> compared to the number of proxy servers :)
The large number of L4 switches and WCCP routers that accomplish load
balancing of web caches using a hash table for cache selection (with
weights), and that work _very_ well, is proof enough for me that it does
balance equally, or pretty darn close. There might occasionally be hot
spots, but those could probably be accommodated in the scheduler if
needed. (I should point out that none of the L4 or WCCP routers has any
features to deal with such things, so I don't think it is a common
concern.)
> The other disadvantage when using LVS is that we waste memory
> on connection tracking when it is not needed: LVS never uses a
> different real server for the same destination (web server) in these
> setups with transparent proxy servers.
True. Can a netfilter module do the same thing (hash-based balancing)
without that connection tracking?
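I'd think so, at least in principle: make the mark a pure function of
the destination address, and every packet of a connection computes the
same mark, so there's no per-connection state to keep. A userspace
sketch of just the mark computation (my illustration only; I haven't
checked the real 2.4 netfilter target API, so the actual module hooking
PREROUTING is left as the exercise):

/* fwmark as a pure function of the destination IP: no connection
 * tracking needed, and "ip rule add fwmark 1 table 1" plus one route
 * per table does the rest, as in Julian's fwmark -> squid mapping. */
#include <stdio.h>
#include <stdint.h>

#define NCACHES 4

static unsigned dest_to_fwmark(uint32_t dest_ip)
{
        uint32_t h = dest_ip * 2654435761U;

        return 1 + (h >> 16) % NCACHES;  /* fwmark 1..4 -> squid1..4 */
}

int main(void)
{
        /* Two packets to the same destination always get the same mark. */
        printf("mark %u\n", dest_to_fwmark(0xc0a80a01U));
        printf("mark %u\n", dest_to_fwmark(0xc0a80a01U));
        return 0;
}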
> > > Yes, it is simple even without using LVS :) You can even
> > > create a netfilter module for 2.4 to mark the packets in different
> > > ways (hashes, etc). Then it is plain routing.
> >
> > A netfilter module might be a good way to handle the issue. What are
> > the advantages to this, as opposed to using the LVS as a base? Is there
> > a compelling reason LVS isn't the right way to achieve web cache
> > balancing, while netfilter is?
>
> For the balancing, see above. As for the module, it would
> only mark the packets. Maybe only the NAT mode can be problematic,
> but are the proxy servers really behind a NAT box? Considering the
> forwarding methods, LVS can give you only the LVS/NAT method as a bonus
> compared to the other variants with plain routing, because the demasq
> is tricky to run without LVS. But you can use tunneling and direct
> routing even without LVS.
NAT is a pretty important feature. Not all web caches run Squid or
Linux. Though we can address the ARP problem via other methods, it's
probably nice to be able to operate the same way as the proprietary
vendors in the same arena.
> > I claim ignorance in both regards. I know that LVS already does most of
> > what is needed for web cache balancing (and Thomas' code adds most of
> > what was missing)...What would be needed to write such a netfilter
>
> Maybe I'm wrong, but IMO this new scheduler breaks the load
> balancing and gives you static mapping. It seems that the advantage is
> for the users with traffic limits. Of course, this scheduler can be
> useful when the proxy servers are not heavily loaded with requests,
> because this kind of static mapping is dangerous for busy clusters.
> But we can see the results only in production.
I think you're underestimating the diversity of addresses being visited
on a busy web cache cluster, Julian. We're talking about 1000
requests/second and up. That's a lot of different websites, and
balancing will be very effective with a hashed selection.
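For anyone who wants to convince themselves, a quick simulation (my
sketch; random numbers stand in for distinct destination IPs): 100,000
distinct destinations into four caches should come out within about a
percent of an even split.

/* How evenly a hash spreads many distinct destinations over a few
 * caches; random numbers stand in for real destination IPs. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define NCACHES 4
#define NDESTS  100000

static unsigned pick_cache(uint32_t ip)
{
        return ((ip * 2654435761U) >> 16) % NCACHES;
}

int main(void)
{
        unsigned long count[NCACHES] = { 0 };
        long i;

        srand(1);
        for (i = 0; i < NDESTS; i++) {
                uint32_t ip = ((uint32_t)rand() << 16) ^ (uint32_t)rand();

                count[pick_cache(ip)]++;
        }
        for (i = 0; i < NCACHES; i++)
                printf("cache %ld: %lu destinations (%.1f%%)\n",
                       i, count[i], 100.0 * count[i] / NDESTS);
        return 0;
}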
> > module in order to compare the difference, performance- and
> > manageability-wise? Got any good links for documentation on the subject?
>
> Now you guys have to test the new scheduler and show some
> comparisons :) How the real servers are loaded, how much the external
> traffic is reduced - you know how to do it :)
Will do. I've got some less busy time (I've never got 'free' time ;-)
over the next couple of weeks, so I'll see if I can get a cluster
running with Thomas' scheduler.
--
Joe Cooper <joe@xxxxxxxxxxxxx>
Affordable Web Caching Proxy Appliances
http://www.swelltech.com