Re: A little OT (sort of)

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: A little OT (sort of)
From: jsc3@xxxxxxxxxxxxx (John Cronin)
Date: Sat, 15 Sep 2001 14:49:40 -0400 (EDT)
> I had mentioned in an earlier message a television commercial from IBM about 
> linux. Well, this article:
> http://consultingtimes.com/Serverheist.html
> contains a link to an mpeg of the commercial. If you would rather go directly 
> to the video, it's at:
> http://www.consultingtimes.com/media/heist60.mpeg
> 
> While this may seem to be off topic for this list, I'm not so sure. I refer 
> to the comparisons between a monolithic "mainframe" (or minicomputer) type of 
> system vs. a load balanced system of microcomputers (LVS). The article at the 
> link above gives some starting point to work from, and about the various 
> angles and pros and cons of each type of system.
> 
> Obviously, the comparison in that article includes the pricing of Microsoft 
> licenses, which don't apply to any comparisons of various Linux 
> implementations. Anyway, I mention it because as a consultant, I have to keep 
> abreast of the pros and cons of various solutions for my clients, and these 
> are two different approaches to handling heavy loads. I'm already wondering 
> about the abilities of the software running on monolithic systems, i.e., 
> splitting the number of connections to Apache to different real-servers vs. 
> running Apache on a minicomputer.

My take on this is that the mainframe approach only makes any sense at
all if you are already serving several million page views per day (in
the case of the web server).  For example, in the article, the mainframe
made a LOT more sense for serving 50,000 mail clients than for 5,000
mail clients.  Also, the initial cost for a mainframe is expensive, as
is the recurring cost of the support contract.  And if you do run out
of capacity on the mainframe, the incremental cost of more capacity
(i.e., another frame) is tremendous.  One of the costs that went up the
most in the paper you refer to above is the cost of racking all the
servers.  These days, you can achieve very high density in a single
rack, using 1U servers, or 2U if you need more PCI slots.  It is not
hard at all to get 12-44 servers in a single rack (depending on the
servers and the racks), though cooling these can become a problem.

Also, keep in mind that, in the end, for really huge jobs, clustering
is the ONLY way to go.  IBM does that with their ASCI White supercomputer,
and all the animation shops go with render farms (i.e., Beowulf clusters)
rather than large individual compute servers.  Google uses 8,000 (at last
count) servers for their web search engine site.

In summary, yes, there are some situations where using a large mainframe
does make sense, but these situations ALWAYS involve a large budget.  With
clusters, you can start out small and get larger, and in the end, if you
get huge, you will need to use a cluster anyway.
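For what it's worth, the connection-splitting the original poster mentions
is exactly what LVS does with ipvsadm.  A minimal sketch (the addresses
here are made up; this assumes a NAT setup with Apache on each real
server) would look something like:

```shell
# Hypothetical addresses: 192.168.1.1 is the virtual IP (VIP);
# 192.168.1.10-12 are the real servers running Apache.

# Create a virtual TCP service on port 80, round-robin scheduling.
ipvsadm -A -t 192.168.1.1:80 -s rr

# Add the real servers; -m selects NAT (masquerading) forwarding.
ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.10:80 -m
ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.11:80 -m
ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.12:80 -m

# List the table (numeric) to verify the setup.
ipvsadm -L -n
```

Incoming HTTP connections to the VIP then get spread across the three
real servers, which is the cluster-side answer to running Apache on one
big box.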

-- 
John Cronin
mailto: `echo NjsOc3@xxxxxxxxxxx | sed 's/[NOSPAM]//g'`

