Re: What are people doing for data sharing between real servers

To: " users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: What are people doing for data sharing between real servers
From: Jan Klopper <janklopper@xxxxxxxxx>
Date: Fri, 15 Jul 2005 16:29:20 +0200

It might be advisable to set the options on your filesystem so that it does not record the last access time of each file. Microsoft has used the equivalent "undocumented" registry tweak to skew studies in favor of IIS/Windows file sharing, by reducing the load on the Windows box compared to the Linux box.

Where and how can this feature be enabled? Don't ask me; Google is your friend.
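For anyone who wants to try it on Linux, a minimal sketch of the `noatime` mount option (mount point and device names here are examples only, not from the original setup):

```shell
# Remount an existing filesystem without recording file access times
mount -o remount,noatime /var/www

# Or make it permanent with an /etc/fstab entry like:
# /dev/sda3  /var/www  ext3  defaults,noatime  0 2
```

This saves one metadata write per file read, which adds up quickly under NFS or web-serving loads.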


kwijibo@xxxxxxxxxx wrote:

Jan Klopper wrote:

On the other hand, you might consider changing to ReiserFS, which is capable of locating any file on your FS within one or two clock cycles, independent of the size of the parent directory.

I'm currently reading cache files of ~4-9 KB from a directory containing well over 12,000 of them without any speed problems, on a single P3 1 GHz with a SCSI-2 HDD in RAID 0.

Besides, hashing also requires you to change most of your scripts, which will (if done for web content) increase the load dramatically.
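For reference, the directory-hashing scheme being discussed usually looks something like this sketch: derive a short hash from the file name and use its leading characters as subdirectory levels, so no single directory grows huge. The key and layout here are made up for illustration:

```shell
# Hypothetical example: place a cache file under a two-level hashed path
key="session_abc123.cache"
h=$(printf '%s' "$key" | md5sum | cut -c1-4)   # first 4 hex chars of md5
dir="cache/${h:0:2}/${h:2:2}"                  # e.g. cache/ab/cd
mkdir -p "$dir"
echo "$dir/$key"                               # final path for the file
```

The downside Jan points out is real: every script that builds a path to a cache file has to be taught this scheme.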


Or you could just use ext3 with htree enabled.  My confidence in
using ReiserFS with NFS is not that high.
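Enabling htree (hashed directory indexes) on an existing ext3 volume is a two-step operation; a sketch, with an illustrative device name and the filesystem unmounted:

```shell
# Turn on hashed b-tree directory indexes for new directories
tune2fs -O dir_index /dev/sda3

# Rebuild indexes for directories that already exist
e2fsck -fD /dev/sda3
```

Directories created after the feature is set get indexed automatically; the `e2fsck -D` pass is what retrofits the large directories you already have.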

In my experience it is a combination of large directories and
NFS that melts the backend storage.  During busy times all
of my realservers would have horribly slow access to files on the
storage box but if I was on the storage box itself it would still
be pretty snappy.  This box was a quad-processor machine with a
12-disk SCSI RAID 5, so it wasn't lacking for power.  I could see
processes were getting stuck when accessing large directories.

You could also possibly improve your performance by putting your
FS journal into NVRAM.  I think there may be a way to cache
NFS in NVRAM as well, but I didn't investigate that far before
my problems got fixed.
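For the journal-in-NVRAM idea, ext3 supports an external journal device; a sketch of the steps, with illustrative device names (an NVRAM card that presents itself as a block device) and the filesystem unmounted:

```shell
# Format the fast device as a dedicated ext3 journal
mke2fs -O journal_dev /dev/nvram0

# Drop the filesystem's internal journal, then attach the external one
tune2fs -O ^has_journal /dev/sda3
tune2fs -j -J device=/dev/nvram0 /dev/sda3
```

This moves the synchronous journal writes off the data spindles, which is exactly the traffic that hurts most under heavy NFS metadata load.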

_______________________________________________ mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to
