Re: Announcing Kimberlite

To: Tim Burke <burke@xxxxxxxxxxx>
Subject: Re: Announcing Kimberlite
Cc: "lvs-users@xxxxxxxxxxxxxxxxxxxxxx" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: <atif@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 26 Jun 2000 08:13:47 +0200
Thank you, Tim and MCLX, for sharing your solution with us.
I attended a business track in Zurich last month where someone from MCLX spoke
about their services and mentioned the HA solution.

I have built a few systems with heartbeat, and the most difficult part was the
shared storage; I really needed to see the sources then! :)

Since shared storage is key to an HA system (avoiding a single point of failure
and preserving data integrity), I think it would be useful for us to share our
experiences with the hardware that supports these systems.

The following is a bad experience I had with one RAID controller.
The last one I tried was the HP NetRaid 8-si. These controllers are certified
for Microsoft Cluster, and HP has made it very difficult for us to support any
other OS on them.

The one dysfunctional part of those RAID adapters is that you cannot change the
host ID (SCSI ID) of the card.
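
On a shared SCSI bus the two host adapters have to sit at different IDs (in a
two-node cluster you would normally leave one at 7 and move the other to 6), so
a card with a fixed host ID clashes with its twin as soon as you cable both to
the same bus. To double-check which ID an adapter is really using, I sometimes
grep the driver's files under /proc/scsi. The wording in those files is
driver-specific, so the little script below is only a rough sketch and the
pattern will almost certainly need tweaking for the NetRaid driver.

#!/usr/bin/env python
# Rough sketch: walk /proc/scsi/<driver>/<host> and try to pull out the
# adapter's own (initiator) SCSI ID, so you can see whether the two
# initiators on a shared bus really ended up on different IDs.
# The file contents are driver-specific, so the regex is only a guess.
import os
import re

PROC_SCSI = "/proc/scsi"
ID_PATTERN = re.compile(r"(?:host|initiator|adapter).*?scsi\s*id\s*[:=]?\s*(\d+)",
                        re.IGNORECASE)

def scan_hosts():
    if not os.path.isdir(PROC_SCSI):
        print("no %s here (no SCSI support?)" % PROC_SCSI)
        return
    for driver in sorted(os.listdir(PROC_SCSI)):
        driver_dir = os.path.join(PROC_SCSI, driver)
        if not os.path.isdir(driver_dir):
            continue  # /proc/scsi/scsi and friends are plain files, skip them
        for host in sorted(os.listdir(driver_dir)):
            path = os.path.join(driver_dir, host)
            try:
                text = open(path).read()
            except IOError:
                continue
            match = ID_PATTERN.search(text)
            if match:
                scsi_id = match.group(1)
            else:
                scsi_id = "unknown"
            print("driver=%s host=%s initiator SCSI ID: %s" % (driver, host, scsi_id))

if __name__ == "__main__":
    scan_hosts()

If both nodes come back reporting ID 7, that is exactly the clash: on an adapter
that lets you change the initiator ID you would move one node to 6 before
putting both on the bus, but with the NetRaid firmware locked down there was no
way to do that.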

They mention on their website that cluster firmware is required for it, called
nr8si_clus.exe or something, but it's impossible to find anywhere on their
website. I even did an ls -lR on their FTP site and can't find it.

I also tried Megatrends (the original manufacturer), with no help.
When I called for support, they asked what a RAID controller is, and later they
didn't want to help at all unless we were using Microsoft Cluster.

Does anyone have this firmware for me?


Best Regards.




Quoting Tim Burke <burke@xxxxxxxxxxx>:

> Just wanted to give you all a heads up that Monday morning we here at
> Mission Critical Linux will be GPL-ing our cluster software called
> Kimberlite.  This is a complete high availability failover
> infrastructure.   Check it out at oss.missioncriticallinux.com.
> 
> Some of the key attributes of the implementation:
> - Provides high data integrity guarantees, ensuring that a service is
> only running on a single cluster member at any time.
> - Uses commodity hardware.
> - Implemented primarily as user space daemons
> - Distribution independent
> - Extensive end user documentation and design specification
> - Shared storage architecture (i.e. SCSI or FibreChannel)
> - Quorum based membership algorithm
> - I/O fencing by way of remote control power modules (known
> affectionately
>   in this forum as STONITH)
> - Targeted at supporting applications such as databases, NFS services,
> etc.
> 
> We are really psyched to have gotten approval to open up the project to
> the open source community.  The code provided here is fully
> operational.  Please check it out, and we look forward to your
> participation. 
> 
> Kimberlite and LVS are very complementary.  Specifically, LVS is great
> as the front-end HTTP server tier, but doesn't address the needs of a
> high availability storage tier necessary for dynamic content.  That's
> where Kimberlite comes in.
> 
> 
> 

