this is exactly how i was doing it.
the problem was that i expected the data on the slaves to be
real-time, and the replication lagged too far behind. not to mention
it broke a lot (not sure why) - any time a statement failed on a
slave, it halted replication on that slave.
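
for reference, the usual stopgap (my own sketch, not from this
thread; the mysql-connector-python package, hostname, and credentials
are all placeholders i picked) is to skip the offending binlog event
and restart the slave's SQL thread - with the caveat that skipping
events can silently drift the slave's data:

  # skip exactly one failed statement on a halted slave, then resume.
  # stopgap only: the skipped write is simply lost on this slave.
  import mysql.connector

  slave = mysql.connector.connect(host="slave1", user="repl_admin",
                                  password="...")
  cur = slave.cursor()
  cur.execute("SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1")
  cur.execute("START SLAVE")
  slave.close()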
when that happened, i had to either take the master down or put it in
read-only mode so i could copy the database over and re-initialize
replication - roughly the sequence sketched below.
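
(again my own sketch with made-up hostnames and credentials, using
mysql-connector-python: lock the master, grab the binlog coordinates,
copy the data over, then point the slave at those coordinates.)

  # capture consistent binlog coordinates while the master is locked
  # read-only, then aim a freshly-copied slave at them.
  import mysql.connector

  master = mysql.connector.connect(host="db-master", user="repl_admin",
                                   password="...")
  mcur = master.cursor()
  mcur.execute("FLUSH TABLES WITH READ LOCK")  # master is now read-only
  mcur.execute("SHOW MASTER STATUS")
  binlog_file, binlog_pos = mcur.fetchone()[:2]

  # ... copy the database files or dump/load to the slave here ...

  mcur.execute("UNLOCK TABLES")  # writes resume on the master

  slave = mysql.connector.connect(host="slave1", user="repl_admin",
                                  password="...")
  scur = slave.cursor()
  scur.execute(
      "CHANGE MASTER TO MASTER_HOST='db-master', "
      "MASTER_USER='repl', MASTER_PASSWORD='...', "
      "MASTER_LOG_FILE=%s, MASTER_LOG_POS=%s",
      (binlog_file, int(binlog_pos)),
  )
  scur.execute("START SLAVE")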
of course, this was on sub-par equipment, and maybe it runs better
nowadays. i'm thinking the NDB Cluster solution might be a better fit
since it's designed more for this - replication still (afaik) sucks
because re-syncing a slave requires the master to be taken down or
put in read-only mode, which is essentially downtime.
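
for anyone setting this up fresh, the split troy describes below
(writes to the master, reads to the LVS VIP over the slaves) is just
two connections at the app layer. a minimal sketch, same caveats as
above about invented hostnames:

  # app-level read/write split: writes go to the master, reads go to
  # the VIP fronting the slave pool. reads may lag by the replication
  # delay mentioned below.
  import mysql.connector

  write_conn = mysql.connector.connect(host="db-master", user="app",
                                       password="...")
  read_conn = mysql.connector.connect(host="db-read-vip", user="app",
                                      password="...")

  def add_user(name):
      cur = write_conn.cursor()
      cur.execute("INSERT INTO users (name) VALUES (%s)", (name,))
      write_conn.commit()

  def find_user(name):
      cur = read_conn.cursor()
      cur.execute("SELECT id, name FROM users WHERE name = %s", (name,))
      return cur.fetchone()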
On 10/25/05, Troy Hakala <troy@xxxxxxxxxxxxxx> wrote:
> We're using master/slave replication. The LVS balances the reads from the
> slaves and the master isn't in the load-balancing pool at all. The app knows
> that writes go to the master and reads go to the VIP for the slaves. Aside from
> replication latency, this works very reliably.
>