To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: help (web clustering/bandwidth limiting on a particular group of URLS) (fwd)
From: Lynn Winebarger <webmaster@xxxxxxxxxxxxxx>
Date: Wed, 11 Apr 2001 23:49:22 -0600 (MDT)
Here is a previous reply I sent (on Sunday) from my personal email address
rather than the one subscribed to the list:

On Sun, 8 Apr 2001, Joseph Mack wrote:

> On Sun, 8 Apr 2001, Lynn Winebarger wrote:
> 
> >   The external network is on a T1, but internally they're 100Mbps (we
>  First I'm having a hard time imagining a setup which is 100Mbps, which is
> both performance (ie CPU) limited and bandwidth limited for a T1 line. A
> T1 can be saturated by a single low end machine and you are contemplating
> putting several 100Mbps machines onto this line.
> 
     Well, the cluster is actually intended for spreading around load from
CGI scripts.  We want to get more paying users so we can afford to keep
the free stuff going as well as our own site production, and expand to a
higher bandwidth connection - part of the plan for that is offering CGI
(we don't currently).  I'm trying to build an infrastructure that can
support that goal (there's also the issue of high availability provided by
the clustering).   CGI scripts are required to provide uid/gid
level security in the current (1.3) stable apache release.  While we could
enforce other solutions for ourselves (like FastCGI), enforcing it over
hundreds of users is untenable.
    For current performance problem details, see [1] below.
> 
> >   I have an idea for approaching this, but am not sure if it's right
>                                    ^^^^
> I don't know what "this" is (see comment above). 

      Using a cluster with an ability to throttle particular parts of our
website.  Actually, I'd prefer to be able to put a throttle on each
virtual host we serve individually as well, which the below process-based
attempt wouldn't manage.  For that, I'd have to write an Apache module.
That's feasible, but not as fast as having a pre-existing solution...
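For what it's worth, the per-URL (or per-vhost) throttle such a module would
implement is essentially a token bucket. A minimal sketch in Python, with all
names, URL prefixes, and limits hypothetical:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow() returns True while the
    configured rate (requests or bytes per second) is not exceeded."""
    def __init__(self, rate, burst):
        self.rate = float(rate)    # tokens refilled per second
        self.burst = float(burst)  # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per throttled URL prefix (made-up limits).
buckets = {"/cgi-bin/": TokenBucket(rate=10, burst=20)}

def throttled(path):
    """True if the request should be slowed/deferred."""
    for prefix, bucket in buckets.items():
        if path.startswith(prefix):
            return not bucket.allow()
    return False
```

An Apache module version would do the same bookkeeping per request in the
server, keyed by URL or vhost, and sleep or defer when the bucket is empty.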

> If "this" is throttling bandwidth, I remember discussing this with Horms.
> I'll go look it up tomorrow. What gets throttled, by user IP, by URL
> fetched?
> 
   by URL fetched.  Who's Horms?

> >    Keep in mind, we're a non-profit organization, so we can't buy a
> > machine to handle a full-blown squid server (which was my other thought),
> 
> but that would only increase throughput, which is the opposite of what you
> want?

   Not really.  Ultimately, we'd like more throughput totally, but with an
ability to allocate the bandwidth effectively (or at least keep our users
to reasonable, pre-agreed-upon levels).  I figure if the throttling
happens at the TCP level on the gateway router, then the TCP layer on the
real servers will automatically handle the throttling on behalf of
individual httpd processes, rather than a high-overhead solution like
mod_throttle (particularly one coordinated between several machines).
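For illustration, gateway-side shaping of that kind can be done with Linux
traffic control (tc). A sketch only: the interface name, subnet, and rates
are all made up, and note that tc classifies by IP/port, not URL, so
URL-level limits still need the application layer. (CBQ was the usual
classful qdisc on 2.2/2.4 kernels; the HTB syntax below is for later
kernels.)

```
# Root classful qdisc on the outward-facing interface (hypothetical eth0).
tc qdisc add dev eth0 root handle 1: htb default 10

# Default class: the full T1 (~1.5Mbit).
tc class add dev eth0 parent 1: classid 1:10 htb rate 1500kbit

# Throttled class for the CGI real servers: capped at 256kbit (made up).
tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit ceil 256kbit

# Steer traffic from the CGI real servers (10.0.0.0/28, hypothetical)
# into the throttled class.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip src 10.0.0.0/28 flowid 1:20
```

The real servers' TCP stacks then back off automatically as the gateway
delays their packets, which is the effect described above.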
   The reason I mentioned squid is because it has the ability to do URL
level processing.  If we actually get more paying customers, buying
a machine for squid caching would be feasible.  But more paying customers
is contingent on getting the infrastructure ready to handle the extra
compute-load.
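The URL-level limiting squid offers is done with ACLs plus delay pools. A
hedged squid.conf sketch (figures made up; requires squid built with delay
pool support):

```
# Match the group of URLs to limit.
acl cgi_urls urlpath_regex ^/cgi-bin/

# One class-1 delay pool: a single aggregate bucket.
delay_pools 1
delay_class 1 1
# Refill 32000 bytes/sec, bucket holds at most 64000 bytes (made up).
delay_parameters 1 32000/64000

# Apply the pool only to the matched URLs.
delay_access 1 allow cgi_urls
delay_access 1 deny all
```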
   I really appreciate the help.
Thanks,
Lynn

[1]     I think the current performance problems are coming from the
interaction of mod_throttle settings and apache's process limit.  It's
set up so connections are never refused, just get progressively slower.
These mod_throttle settings are only for the particular URLs, but when
apache hits 200 processes (the limit I've set) the whole site starts
slowing down.  The apache limit is to keep swappage out, because the
kernel we're using has some VM problems that cause crashes. [We're
upgrading.]  The slow-down-rather-than-refuse setting reflects the boss's
preference for how the site should behave.
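For reference, the process cap described is Apache 1.3's MaxClients. A
sketch of the relevant httpd.conf lines (mod_throttle directives vary by
version, so only the core settings are shown):

```
# httpd.conf (Apache 1.3)
# Hard cap on httpd children, chosen to keep the box out of swap.
MaxClients 200
# Once MaxClients is reached, new connections wait in the listen queue
# rather than being refused -- which is why hitting the cap slows the
# whole site down instead of failing only the throttled URLs.
ListenBacklog 511
```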
