In practical experience, under what kind of real server load (CPU / hard disk
/ network) does an end user/client 'feel' the slowdown (for any service, HTTP
especially)? That is, is it the same for a server as for a desktop machine,
where we feel the 'slow down' once CPU usage is up around 80%+ (and similarly
for the hard disk at maybe 60%, and the network at 80%)? I'm just throwing in
values off the top of my head.
The reason for the question is that I'm trying to simulate a network with a
load balancer (LB) and real servers (RSvrs), and I need an intuitive cut-off
point for when a real server should be considered 'overloaded' and thus left
alone for a while.
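
To make it concrete, here is the sort of cut-off rule I have in mind for the
simulation. The threshold values and the is_overloaded name are just my own
placeholders (the same guesses as above), not measured figures:

    # Rough, assumed overload thresholds as fractions of capacity.
    CPU_THRESHOLD = 0.80   # guess: users feel CPU pressure around 80%
    DISK_THRESHOLD = 0.60  # guess: disk feels slow earlier, maybe 60%
    NET_THRESHOLD = 0.80   # guess: network saturation around 80%

    def is_overloaded(cpu, disk, net):
        """Return True if any single resource crosses its assumed threshold.

        cpu, disk, net are utilisation fractions in [0.0, 1.0].
        """
        return (cpu > CPU_THRESHOLD
                or disk > DISK_THRESHOLD
                or net > NET_THRESHOLD)

    # Example: a real server at 85% CPU would be skipped by the LB for a while.
    print(is_overloaded(0.85, 0.30, 0.40))  # True

What I'm really after is whether thresholds like these match what people see
in practice, or whether the 'felt' cut-off is somewhere else entirely.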
Please reply directly to me, as this may or may not be of interest to all.
Ben
--
B. http://b.makelinux.org/ "Of course there is a reason why!"
__________________________________________________________________________
If you live long enough, you'll see that every victory turns into a defeat.
-- Simone de Beauvoir