Hi Wensong,
Thanks for the great help. Just a couple of queries if you have time:
>Well, fault tolerance and high availability are different concepts. I
>think fault tolerance is a stronger requirement than high availability. If a
>service is down for seconds or minutes and existing connections may be
>lost, but the service then comes back up to accept new connections, we can
>say the service is highly available. Fault tolerance probably means that
>once a connection is accepted, it should be carried through, despite
>partial hardware or software failure.
Ok, I always get those terms confused. What I'm really looking to do is
set up a redundant, highly available database in my server farm. I want a
database to store my transactions, and I don't want to lose any data when
the database server fails (at which point I want to switch over
automatically to a secondary database).
>If you just want to make a database system of 2 nodes highly available,
>you can have a look at http://www.linux-ha.org/ for the heartbeat code. The
>primary database server and the backup can heartbeat each other; if the
>primary fails, the backup will take over the db service.
Thanks, but I've looked at this site and I'm still a little confused about
the options for the database. I can use the heartbeat and fake methods
to build a redundant load balancer for the farm, but for redundancy that
currently assumes the real servers don't hold any persistent state (or the
client would need to go back to the same PC). My rough idea of the
two-node database setup is sketched below.
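
Just so you can see what I have in mind (and correct me if I've got it
wrong), this is roughly the heartbeat configuration I'd expect on the two
database nodes. The hostnames, the virtual IP and the "mysql" init script
are only placeholders, so treat this as a sketch of my understanding rather
than a working setup:

  # /etc/ha.d/ha.cf -- same file on both database nodes
  keepalive 2          # send a heartbeat every 2 seconds
  deadtime 10          # declare the peer dead after 10 seconds of silence
  serial /dev/ttyS0    # heartbeat over a null-modem cable
  udpport 694
  bcast eth1           # and over a dedicated ethernet link
  node dbmaster
  node dbbackup

  # /etc/ha.d/haresources -- identical on both nodes
  # dbmaster normally owns the floating service IP and the database init
  # script; on failure, dbbackup takes them over
  dbmaster 192.168.1.100 mysql

The part this doesn't seem to solve is keeping the backup's copy of the
data current, which is really what my questions below are about.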
Are there any white papers or documents on my options? As far as I can see,
I either need to run a database on each real server and set up a (complex?)
distributed database synchronisation and lock manager - how much resource
would that tie up just keeping all the databases in sync, and wouldn't it
negate any benefit of the load balancing (although it would at least be
redundant in a failure)?
Or is there something I'm not seeing: what is this shared-access RAID-type
server? Is that where all the servers in the farm share access to one
database machine? If so, is that going to be a bottleneck for the farm (my
application is pretty much a big data warehousing system)?
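
To make the first option concrete, if I used something like MySQL (just an
example, I haven't picked a database yet), I assume the simplest
synchronisation between a dedicated server and its backup would be its
built-in master/slave replication, roughly along these lines (the hostname
and replication account are made up):

  # my.cnf on the primary (dbmaster)
  [mysqld]
  server-id = 1
  log-bin              # log updates so the backup can replay them

  # my.cnf on the backup (dbbackup)
  [mysqld]
  server-id = 2
  master-host     = dbmaster   # made-up hostname of the primary
  master-user     = repl       # replication account created on the primary
  master-password = secret

But as far as I can tell that replication is asynchronous, so a crash of
the primary could still lose the last few transactions, which is exactly
what I'm trying to avoid.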
If all these databases need to be kept in sync (or at least two of them, if
I have a dedicated database server and a backup), then what is everyone
using for the local interconnect? 100Base-T, or something faster, since
this would be a lot of traffic?
Thanks for any additional advice,
Peter