From jsc3 Thu Oct 19 11:40:27 2000
Subject: Re: Files System for Clusters
To: mack.joseph@xxxxxxx (Joseph Mack)
Date: Thu, 19 Oct 2000 11:40:27 -0400 (EDT)
In-Reply-To: <39EF0E18.78BF844F@xxxxxxx> from "Joseph Mack" at Oct 19, 2000
11:07:04 AM
X-Mailer: ELM [version 2.5 PL1]
Content-Length: 1621
> > > But NFS does not help with redundancy does it? What if the NFS server goes
> > > down?
> >
> > Have a second NFS server on standby,
>
> someone else on this list is the expert on this...
>
> the file handles that NFS generates are derived from the location of the
> file on the disk.
> File handles for the same file on different disks/machines will be
> different. When you fail over NFS backend servers, the clients will get
> stale file handles.
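To see why the handles differ, note that an NFS file handle is built (in
part) from the device and inode numbers of the file on the server's disk.
Here is a small illustration, not the actual NFS handle format: two
byte-identical copies of a file still get different inode numbers, so a
replacement server exporting a copy cannot honor the old handles.

```shell
#!/bin/sh
# Sketch: copy a file and compare the inode numbers of the two copies.
# ls -i prints "inode filename"; the inodes differ even though the
# contents are identical -- the raw material for an NFS handle differs.
DIR=$(mktemp -d)
echo "same contents" > "$DIR/orig"
cp "$DIR/orig" "$DIR/copy"

INODE1=$(ls -i "$DIR/orig" | awk '{print $1}')
INODE2=$(ls -i "$DIR/copy" | awk '{print $1}')

cmp -s "$DIR/orig" "$DIR/copy" && echo "contents: identical"
[ "$INODE1" != "$INODE2" ] && echo "inodes: different"
rm -rf "$DIR"
```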
A umount/remount would fix that, but it would not be transparent. I can
tell you for certain that dual-attached storage does NOT have the same
problem, since the same disks are being used in that case. I just
demonstrated it at the Linux Showcase here: both Samba and NFS failed
over quite transparently using a dual-initiator SCSI RAID box.
Unfortunately, those kinds of systems are quite expensive - I believe the
vendor who loaned us the unit said they like to get $40,000-$60,000 for
that particular unit, which was quite a bit more than the rest of our
hardware (an Intel 460T switch, an Intel 550T switch, four Intel 1U
servers, three Intel 2U dual-processor servers with hot-swap SCSI, and
two 14U Compaq racks). You should be able to get some kind of
dual-attached storage for less than that, though; a Sun D1000, for
example. In that case, with a JBOD configuration, you would be forced to
use software RAID, which makes things much trickier. You definitely want
to use a journaling filesystem, though - for our demo we just used ext2
and mounted without an fsck, but in a production HA service that would be
completely unacceptable.
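For what it's worth, the takeover sequence on the standby node is short.
This is only a sketch of the steps, not our actual failover script; the
device name, mount point, and floating IP below are hypothetical, and
with a journaling filesystem the fsck step becomes a fast log replay.

```shell
#!/bin/sh
# Sketch of a standby node taking over an NFS export from a shared
# dual-attached SCSI array. DRY_RUN=1 (the default) only prints the
# commands so the sequence can be reviewed safely.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

failover() {
    DEVICE=/dev/sdb1          # shared dual-initiator SCSI LUN (assumption)
    MOUNTPOINT=/export/data   # must match the path the failed server exported
    FLOAT_IP=192.168.1.50     # service IP the clients mount from

    run fsck -p "$DEVICE"                   # journal replay / quick check
    run mount "$DEVICE" "$MOUNTPOINT"       # same disks => same file handles
    run exportfs -o rw "*:$MOUNTPOINT"      # re-export to the clients
    run ifconfig eth0:0 "$FLOAT_IP"         # take over the service address
    run /etc/init.d/nfs start               # start nfsd on the standby
}

failover
```

Because the standby mounts the very same disks, the device and inode
numbers - and therefore the NFS file handles - are unchanged, which is
why the clients never see ESTALE.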
--
John Cronin