RE: Announcing Kimberlite

To: "'Feldman, Jim'" <Jim.Feldman@xxxxxxxxxx>, "lvs-users@xxxxxxxxxxxxxxxxxxxxxx" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: RE: Announcing Kimberlite
From: Nicolas Huillard <nhuillard@xxxxxx>
Date: Tue, 27 Jun 2000 10:26:23 +0200
Since I'm looking for products (not only principles, even though they are
very interesting and help evaluate products), can you point us to other
good (in your opinion) "REAL shared disk" file systems? (Production
quality, preferably on Linux.)

Nicolas Huillard

-----Original Message-----
From:    Feldman, Jim [SMTP:Jim.Feldman@xxxxxxxxxx]
Date:    Tuesday, June 27, 2000 00:07
To:      lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: RE: Announcing Kimberlite

I'm pretty much a lurker here, but I do need to throw my 2 cents in.

1. Tim, your online docs in your CVS repository are riddled with "Internal
use only", pre-open-source/GPL banners.  At this point, I can't say if I've
done anything wrong by reading them.

2. For anyone interested in REAL shared-disk I/O, multi-host, high
availability clusters, required reading should be "VMS Internals and Data
Structures" by Ruth Goldenberg and Larry Kenah.  If you can find Roy Davis'
"VAXcluster Principles", it's a nice add-on.  It just pains me to watch the
Linux community re-inventing concepts that were hashed out nearly 14 years
ago.  Yes, I know that VMS clustering suffered from serious code bloat over
the years, and it depended heavily on the spinlock concept, IPLs (interrupt
priority levels), ASTs (asynchronous system traps), and record-level
locking just above the filesystem, but the quorum concepts are still pretty
valid, and you can fake a bit of the rest, even on Intel's architecture
(Dave Cutler did it for the NT kernel).  Coordinating simultaneous
reads/writes between multiple hosts to a shared filesystem is non-trivial,
as several projects are finding out.
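
To make the quorum arithmetic concrete, here is a minimal sketch in C of
the majority-vote rule those clusters relied on (all names are
hypothetical, not VMS or Kimberlite code): each member contributes votes,
and the cluster keeps running only while the reachable members hold a
strict majority of the expected total, so the two sides of a partition can
never both proceed.

    #include <stdbool.h>
    #include <stdio.h>

    /* Minimal quorum check; names are hypothetical.
     * expected_votes: total votes configured across the whole cluster.
     * current_votes:  votes of the members this node can currently
     *                 reach, including itself. */
    static bool have_quorum(int current_votes, int expected_votes)
    {
        int quorum = expected_votes / 2 + 1;  /* strict majority */
        return current_votes >= quorum;
    }

    int main(void)
    {
        /* A 5-vote cluster partitioned 3/2: only one side keeps quorum. */
        printf("3 of 5: %s\n", have_quorum(3, 5) ? "quorum" : "suspended");
        printf("2 of 5: %s\n", have_quorum(2, 5) ? "quorum" : "suspended");
        return 0;
    }

The minority side suspends activity instead of continuing, which is what
keeps a partitioned shared-disk cluster from scribbling on the same
filesystem from both sides.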

jim

-----Original Message-----
From: Tim Burke [mailto:burke@xxxxxxxxxxx]
Sent: Sunday, June 25, 2000 10:03 PM
To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Announcing Kimberlite


Just wanted to give you all a heads up that Monday morning we here at
Mission Critical Linux will be GPL-ing our cluster software called
Kimberlite.  This is a complete high availability failover
infrastructure.   Check it out at oss.missioncriticallinux.com.

Some of the key attributes of the implementation:
- Provides high data integrity guarantees, ensuring that a service is
only running on a single cluster member at any time.
- Uses commodity hardware.
- Implemented primarily as user space daemons
- Distribution independent
- Extensive end user documentation and design specification
- Shared storage architecture (i.e. SCSI or FibreChannel)
- Quorum based membership algorithm
- I/O fencing by way of remote control power modules (known affectionately
  in this forum as STONITH; see the sketch after this list)
- Targeted at supporting applications such as databases, NFS services,
etc.
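
For readers new to the STONITH idea ("Shoot The Other Node In The Head"):
before a surviving member takes over a service, it power-cycles the failed
member through the remote power module, so the old owner cannot keep
writing to the shared disk.  A minimal sketch of that ordering in C (the
stub functions and names are hypothetical, not the Kimberlite API):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stubs standing in for the remote power module and service scripts. */
    static bool power_cycle(const char *node)
    {
        printf("power-cycling %s via remote power module\n", node);
        return true;  /* pretend the switch confirmed power-off */
    }
    static void mount_shared_storage(void) { printf("mounting shared disk\n"); }
    static void start_service(const char *svc) { printf("starting %s\n", svc); }

    /* Failover with I/O fencing: the order matters.  The shared disk is
     * touched only after the failed member is confirmed off, which is
     * what ensures a service runs on a single member at a time. */
    static void take_over(const char *failed_node, const char *service)
    {
        if (!power_cycle(failed_node)) {
            /* Fence unconfirmed: do NOT touch the shared disk, or both
             * members could write to it at once. */
            fprintf(stderr, "fencing %s failed; refusing takeover\n",
                    failed_node);
            return;
        }
        mount_shared_storage();
        start_service(service);
    }

    int main(void)
    {
        take_over("member2", "nfs");
        return 0;
    }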

We are really psyched to have gotten approval to open up the project to
the open source community.  The code provided here is fully
operational.  Please check it out, and we look forward to your
participation.

Kimberlite and LVS are very complementary.  Specifically, LVS is great
as the front-end HTTP server tier, but doesn't address the needs of a
high availability storage tier necessary for dynamic content.  That's
where Kimberlite comes in.




