Squid might be a good solution for caching static pages on real
servers. For PHP caching, you can use Turck MMCache. It works well
most of the time, but can be flaky with poorly-written PHP code.
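If you want to try MMCache, the php.ini setup is small; something like
this should do it (the extension path and sizes below are just
examples, so check them against the MMCache README for your install):

    ; load Turck MMCache as a Zend extension (path varies by install)
    zend_extension="/usr/lib/php4/mmcache.so"
    mmcache.enable="1"
    ; shared memory for the compiled-script cache, in MB
    mmcache.shm_size="16"
    ; spill directory for the disk cache
    mmcache.cache_dir="/tmp/mmcache"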
We actually do what Joe suggested, manually rsync'ing content to a
cache directory on the real servers. Apache Alias directives map the
cached content back to its proper URL locations. Our shared storage is
NFS, and we, too, have lots of user-maintained files, so we've targeted
directories that don't change often (theme and layout directories for
CMS applications, for example). Rsync is efficient, and we run it every
hour or so.
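To make that concrete (the paths and host name here are made up, not
our actual layout), each real server pulls the slow-changing
directories from the staging machine via cron:

    # hourly pull of the slow-changing theme directory
    0 * * * * rsync -a --delete staging:/export/www/themes/ /var/cache/www/themes/

and httpd.conf maps the local copy back to the original URL:

    Alias /themes/ "/var/cache/www/themes/"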
There's one performance problem that's not solved by this, though.
Apache performs lots of stat() calls when serving pages: with
per-directory overrides enabled, it checks every directory in a
request's path for a .htaccess file before the content is served, and
those calls go to the NFS servers. Under a high traffic load, the stat
calls bog down our NFS servers despite the content being cached on the
real servers.
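Where you can live without per-directory overrides, one workaround is
to tell Apache not to look for .htaccess files at all, which eliminates
those stat() calls for that tree (the directory path here is
hypothetical):

    <Directory "/var/cache/www">
        # no .htaccess lookups here, so no per-directory stat()
        AllowOverride None
    </Directory>

That doesn't help for the user-maintained areas that actually need
overrides, though.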
Has anyone tried using a distributed filesystem such as Coda to sync
content to the real servers?
Thanks,
-jrr
On Mon, 2004-04-26 at 12:13, Joseph Mack wrote:
> If it's a readonly site, then have only static pages on the realservers
> and rsync them from a staging machine (which may have dynamic html).