On Tue, 2007-12-18 at 22:27 +0100, Jan-Frode Myklebust wrote:
> On 12/18/07, Christopher Barry <christopher.barry@xxxxxxxxxx> wrote:
> > Greetings everyone,
> > I have googled around, and the closest thing I can find is described at the
> > below link, although not completely answered. I'm hoping that someone can
> > point me to a howto or other doc that will get me over this hump.
> > http://lists.graemef.net/pipermail/lvs-users/2003-June/008879.html
> That poster is using GPFS, which in its latest release natively
> supports clustered NFS.
> I don't think there are other stable/non-beta ways of doing cnfs on
> linux. I've tried failover-NFS with linux-ha.org, but even that was
I'm comfortable with the locking aspect, as GFS will take care of any
locking for me. Besides, it will only be home directories I'll be
exporting from the nodes. I'm also not all that interested in stateful
failover - the user should be able to reconnect (to a different
realserver) after the automount expires. The performance may not be
stellar, but I'm OK with that for this application.
My primary question, and indeed the wall I'm hitting, is how to
effectively get the mount request through the director to a realserver
and back. I have these 6 nodes currently balancing ssh, xdmcp, and
vnc-to-gdm without issue. I had thought the clustername parameter to
statd would fix it - but alas, it didn't.
I did just have a look at the FAQ, and it mentions that single-port NFS
uses udp, not tcp. Is this an issue?
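For what it's worth, this is roughly the ipvsadm setup I've been
attempting - a sketch only, with a placeholder VIP and realserver
addresses (192.168.1.100 / 10.0.0.x) and direct routing assumed, not my
actual config:

```shell
#!/bin/sh
# Sketch: balance single-port NFS (udp 2049) and the portmapper (udp 111)
# through the director. Addresses below are placeholders for illustration.

VIP=192.168.1.100

# Persistent UDP service for NFS, so a given client sticks to one
# realserver for the life of its mount.
ipvsadm -A -u $VIP:2049 -s rr -p 3600
ipvsadm -a -u $VIP:2049 -r 10.0.0.1:2049 -g
ipvsadm -a -u $VIP:2049 -r 10.0.0.2:2049 -g

# Portmapper with the same persistence, so the rpc lookup lands on the
# same node that will serve the mount.
ipvsadm -A -u $VIP:111 -s rr -p 3600
ipvsadm -a -u $VIP:111 -r 10.0.0.1:111 -g
ipvsadm -a -u $VIP:111 -r 10.0.0.2:111 -g
```

My understanding is that to guarantee ports 111 and 2049 from one
client always hit the same realserver, a fwmark-based service (mark
both ports in iptables mangle, then `ipvsadm -A -f <mark> -p ...`)
would be the safer route - is that the incantation I'm missing?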
I'm sure this can work - I just need the magical incantation...
Anyone else feeling magical?