To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: Using LVS for MySQL traffic
From: Todd Lyons <tlyons@xxxxxxxxxx>
Date: Wed, 26 Oct 2005 10:55:14 -0700
We are continually investigating load-balancing advancements in the
MySQL arena, and I would like to add our comments to this thread; I'll
try to keep as much of it on-topic as possible.

Mark wanted us to know:

>> > We're using master/slave replication. The LVS balances the reads
>> > from the slaves and the master isn't in the load-balancing pool at
>> > all. The app knows that writes go to the master and reads go the
>> > VIP for the slaves. Aside from replication latency, this works very
>> > reliably.

This works for us too.  We have an in-house DBI wrapper that knows to
read from the slaves and write to the master.  After a write, reads are
directed to the master for a certain number of seconds, until we are
reasonably sure that the data has been replicated out to the slaves.
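
For reference, here's a minimal ipvsadm sketch of that kind of read-only
VIP (addresses, scheduler, and weights are illustrative, not our actual
config):

# VIP the app uses for reads, weighted least-connection scheduling
ipvsadm -A -t 10.0.0.100:3306 -s wlc
# Real servers: the replication slaves, direct routing
ipvsadm -a -t 10.0.0.100:3306 -r 10.0.0.51:3306 -g -w 1
ipvsadm -a -t 10.0.0.100:3306 -r 10.0.0.52:3306 -g -w 1
# Writes bypass LVS entirely and go straight to the master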

>> This is exactly how I was doing it.
>> 
>> The problem was that I expected the data on the slaves to be
>> realtime, and the replication moved too slowly.  Not to mention it
>> broke a lot (not sure why): anytime there was a query that failed,
>> it halted replication.

We've not seen this.  Our replication has been very stable, and our
workload is roughly 80% reads, 20% writes.  But we do as few reads as
possible on the master, keeping the reads on the slaves and the writes
on the master.

>> In which case, I had to either take the master down or put
>> it in read-only mode, so I could copy the database over and
>> re-initialize replication.

Make a tarball of the databases that you replicate and keep that tarball
someplace available to all slaves.  Record the replication log file name
and position.  Then, when you have to restart a slave, blow away the db,
extract the tarball, set the replication info to the recorded
coordinates, then start mysql and it will catch itself up from the bin
files on the master.  We try to do this once per month, but I've seen a
slave process 100 days' worth of bin files and catch up in only about
30 minutes.  We have gigabit networking though, so your results may vary.
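
Roughly, the re-init looks like this (paths, hostnames, and the log
coordinates are examples, not our actual values):

# On the slave: stop mysql, replace the replicated db with the snapshot
/etc/init.d/mysql stop
rm -rf /var/lib/mysql/ourdb
tar -C /var/lib/mysql -xzf /shared/ourdb-snapshot.tar.gz

# Bring mysql back up and point replication at the coordinates recorded
# when the tarball was made; the slave then replays the master's bin logs
/etc/init.d/mysql start
mysql -e "STOP SLAVE;
          CHANGE MASTER TO MASTER_HOST='fs51',
                           MASTER_LOG_FILE='fs51-bin.395',
                           MASTER_LOG_POS=18123717;
          START SLAVE;"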

We also have a script that connects to each slave, checks its
replication status, and spits out email warnings if the bin log names
don't match.  It doesn't check the position, because that frequently
changes mid-script, sometimes more than once.  Here's an example:

[todd@tlyons ~]$ checkmysqlslave.sh 
Status Check of MySQL slave replication.
Log file names should match exactly, and
log positions should be relatively close.

Machine      : sql51
Log File     : fs51-bin.395
Log Position : 18123717

Machine      : sql52
Log File     : fs51-bin.395
Log Position : 18123863

Machine      : images3
Log File     : fs51-bin.395
Log Position : 18123863

Machine      : images4
Log File     : fs51-bin.395
Log Position : 18124769

Machine      : images5
Log File     : fs51-bin.395
Log Position : 18125333

Machine      : images6
Log File     : fs51-bin.395
Log Position : 18125333
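
The guts of such a check are just SHOW SLAVE STATUS; a stripped-down
sketch (our real script does more, and credentials are assumed to be
in ~/.my.cnf):

#!/bin/sh
# Print which of the master's bin logs each slave is reading, and where
for host in sql51 sql52 images3 images4 images5 images6; do
    echo "Machine      : $host"
    mysql -h $host -e 'SHOW SLAVE STATUS\G' |
      awk '$1 == "Master_Log_File:"     { print "Log File     : " $2 }
           $1 == "Read_Master_Log_Pos:" { print "Log Position : " $2 }'
    echo
done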

>> Of course, this was on sub-par equipment, and maybe nowadays
>> it runs better.  I'm thinking that the NDBcluster solution
>> might be better since it's designed more for this;
>> replication still (afaik) sucks because when you need to
>> re-sync, it requires the server to be down or put in
>> read-only mode, which is essentially downtime.
>> 
>The main problem I see with NDB is that it requires the entire database
>(or whatever part each node is responsible for) to be in memory.  It can

We have had very stable results doing replication with MySQL stable
releases.  We typically use the MySQL-built RPMs instead of the vendor
RPMs.  We had serious stability problems with NDB, but we were also
trying a very early version; it is liable to be much more solid now.

>not work with files the way the normal InnoDB, MyISAM, etc. engines
>work.  And I think this is still the case in MySQL 5 (correct me if
>I'm wrong.)  I don't know what happens if your storage node suddenly
>runs out of memory.

It won't be until MySQL 5.1 that you'll be able to do file-based
clustering.  Then load balancing becomes simple, for both reads and
writes.
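
(On the out-of-memory question above: as I understand it, the in-memory
footprint of an NDB node is capped in the cluster's config.ini, roughly
like the following, with sizes illustrative only.  When DataMemory fills
up, inserts start failing with "table is full" errors rather than the
node falling over.)

[NDBD DEFAULT]
DataMemory=512M
IndexMemory=64M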

>But I guess we're drifting off topic...

Sorry for not bringing it back on-topic as much as I had hoped.

I am trying to keep it on topic. :-)
-- 
Regards...              Todd
OS X: We've been fighting the "It's a mac" syndrome with upper management
for  years  now.  Lately  we've  taken  to  just  referring  to  new  mac 
installations  as  "Unix"  installations  when  presenting proposals  and 
updates.  For some reason, they have no problem with that.          -- /.
Linux kernel 2.6.12-12mdksmp   5 users,  load average: 2.46, 2.47, 2.00
