On 06/18/2012, YesGood wrote:
> To build a load-balanced database cluster; at the moment the reasons
> why are:
> - I have web servers in a load-balanced environment, and
> - I am looking for other systems which can offer load balancing.
> - I chose the database, because it would be interesting with many
>   clients and would so offer the best service.
The definition of "best service" may vary - what's yours?
> In short, of all the responses received:
> - building a load-balanced database system is possible.
> - the types are:
>   - master-master (both accept writes) is the most difficult and
>     complicated. It can be worse than a single server.
It did take Oracle a few years to write RAC and 11g, so it shouldn't be that easy.
>   - master-standby (write and read-only): only one server, the master,
>     can receive write requests; the other server is a standby and only
>     receives read requests. But this needs logic in the app. What is that?
The application, which acts as a "client" to the database, needs to know
when it's fine to ask the "probably" outdated standby system. For
example, it may be fine to retrieve the information for the product page
of an online shop from a standby system, which usually isn't off by more
than a few seconds.
However, when prices are being calculated at the paygate, your application
clearly wants accurate pricing and so asks the write master.
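The client-side logic described above can be sketched roughly as follows. This is a minimal illustration, not from the original post: the connection objects and the staleness threshold are assumptions, and a real application would route per-transaction rather than per-statement.

```python
# Sketch of application-side read/write splitting. The connection
# objects and MAX_STALENESS value are hypothetical, for illustration.
import time

MAX_STALENESS = 5.0  # seconds of replica lag we tolerate for casual reads


class RoutingClient:
    def __init__(self, master, standby):
        self.master = master      # connection that accepts writes
        self.standby = standby    # read-only replica, possibly lagging
        self.last_write = 0.0     # when this client last wrote

    def execute(self, sql, accurate=False):
        """Route a statement: writes and accuracy-critical reads go to
        the master; tolerant reads (e.g. a product page) to the standby."""
        is_write = sql.lstrip().split()[0].upper() not in ("SELECT", "SHOW")
        if is_write:
            self.last_write = time.time()
            return self.master.run(sql)
        # A read shortly after our own write may not be replicated yet,
        # so keep it on the master (read-your-writes).
        recently_wrote = time.time() - self.last_write < MAX_STALENESS
        if accurate or recently_wrote:
            return self.master.run(sql)
        return self.standby.run(sql)
```

The point is that this decision ("is a slightly stale answer acceptable here?") can only be made by the application, which is exactly why the logic has to live there.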
This is clearly out of the scope of what LVS can do. LVS works on OSI
layers 2 to 5; your database queries happen on higher layers, outside
the capabilities of LVS.
In terms of databases, MySQL Proxy tries to implement this kind of
high-level routing. MySQL Proxy is an application which looks like "the
database server" to the client. It analyzes incoming SQL queries and,
according to one's own rules, may send some kinds of queries to one node
while other queries are sent to a different node. However, the risk of
getting it "wrong" (for the application) is still fairly high at that
level.
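To make that risk concrete, here is a toy statement classifier of the kind such a proxy must rely on. This is an illustrative sketch, not MySQL Proxy's actual (Lua-based) API, and the misclassified examples are assumptions chosen to show the failure mode:

```python
# Naive proxy-side statement classification, and why it is risky.
# Illustrative only; not MySQL Proxy's real API.
def looks_read_only(sql: str) -> bool:
    """Treat anything starting with SELECT as safe for a read replica."""
    return sql.lstrip().upper().startswith("SELECT")

# This simple rule misclassifies statements like:
#   "SELECT ... FOR UPDATE"  - takes row locks, belongs on the master
#   "SELECT audit_log(...)"  - may call a function that writes
# The proxy cannot know the application's intent from the SQL text alone.
```

Because the proxy sees only the SQL text and not the application's intent, such mistakes are hard to rule out, which is the "fairly high risk" mentioned above.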
> - but building a load-balanced system for the database isn't the best
>   choice, because it could serve outdated data. Right?
Either poor performance or outdated data. Or a hefty price tag.
Maybe more than one of them.
If the master has to ensure that all data is replayed on the standby
system, any write request may only be marked as "done" once it has been
written on both servers. So even in the best case (parallel IO, very
rare for replication), the slower of the two servers defines the maximum
performance.
Usually, with synchronous IO, your write request has to wait for the
master to complete; the master in turn asks the standby to complete the
write request, and only when both have agreed on being "done" does the
master report the write request as "done" to the client application. So
basically, this setup adds up just about every possible latency.
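The serialized write path above can be put into a back-of-the-envelope model. The numbers used in the example are illustrative assumptions, not measurements:

```python
# Model of the synchronous write path: client -> master -> standby ->
# master -> client, with no parallel IO. All figures are assumptions.
def sync_write_latency(net_rtt_ms, master_io_ms, standby_io_ms):
    """Latency the client sees for one synchronous write: the round
    trip to the master plus its disk write, then the master's round
    trip to the standby plus the standby's disk write, serialized."""
    master_leg = net_rtt_ms + master_io_ms
    standby_leg = net_rtt_ms + standby_io_ms
    return master_leg + standby_leg
```

With, say, a 1 ms round trip and 5 ms per disk write, the client waits 12 ms instead of the 6 ms a single server would need; every added hop and every slow disk lengthens the total.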
When the master reports something as "done" while it hasn't yet made it
to the standby system, you may describe this as an asynchronous
operation. While the time needed to process the write request on the
standby system is usually very low and barely noticeable, other
situations may also occur. For example, the standby server has to
rebuild a broken disk in its local RAID, so a lot of IO performance goes
into rebuilding the disk instead of into the DBMS. Instead of a few
milliseconds, a write request may then need a few seconds to proceed
from master to standby.
If the application doesn't know when it's fine to ask the standby system
and when it has to ask the master, this may also result in data loss:
1: the application writes to the master.
2: the application reads from the standby before the changes from step 1
have been replicated.
3: the application uses the stale data to compute changes, which are
again written to the master. Result: the changes from step 1 are lost.
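The three steps can be reproduced with two dicts standing in for the master and a lagging standby. This is a toy sketch of the race, not a real replication setup:

```python
# Toy reproduction of the lost update: the standby lags behind the
# master, and a read-modify-write through the standby discards a write.
master = {"counter": 0}
standby = {"counter": 0}   # replication has not caught up yet

# 1: the application writes to the master
master["counter"] = master["counter"] + 1   # counter is now 1

# 2: the application reads from the standby before replication happens
stale = standby["counter"]                  # still 0

# 3: the application computes a change from the stale value and writes back
master["counter"] = stale + 1               # back to 1; not 2
```

After two increments the counter should be 2, but it is 1: the write from step 1 was silently lost, exactly as described above.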
Hefty price tag:
Clusters of in-memory databases don't need to rely that much on disk IO,
as the data is stored in RAM. Once low-latency network adapters such as
Infiniband or Myrinet are installed for the inter-cluster communication,
the network latency drops quite a lot (compared to Ethernet), so one may
run synchronous data replication at still-high performance.
However, your 100G database will also require 100G of RAM - per node.
And in order to reduce network latency, you may need to install a
dedicated secondary network for replicating your database data, ideally
using those low-latency adapters, which are not as cheap as standard
Ethernet.
> - Now, are there references or a bibliography that discuss this?
Hmmm. I consider this basic knowledge in computer science, ranging from
simple physics to the CAP theorem to the fallacies of distributed
computing.
For example: in terms of storage speed and latency, the caches in a CPU
are usually the fastest, then comes "usual" RAM, then local disk
storage, and finally the Ethernet network.
If an application needs to access networked remote disk storage, you can
simply add up the two worst latencies and see that this will not be as
fast as just about any local disk storage.
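That addition can be written out with rough textbook orders of magnitude. The figures below are common illustrative values, assumptions rather than measurements:

```python
# Rough latency hierarchy from the paragraph above, in microseconds.
# Typical textbook orders of magnitude; assumptions, not measurements.
LATENCY_US = {
    "cpu_cache": 0.001,      # ~1 ns
    "ram": 0.1,              # ~100 ns
    "local_disk": 10000.0,   # ~10 ms spinning-disk seek
    "ethernet_rtt": 500.0,   # ~0.5 ms LAN round trip
}

# A networked remote disk pays the network round trip *and* the remote
# disk access, so it cannot beat the local disk alone.
remote_disk_us = LATENCY_US["ethernet_rtt"] + LATENCY_US["local_disk"]
```

Whatever exact numbers you plug in, the sum of two slow layers is always slower than either layer alone, which is the whole argument.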
1&1 Internet AG Expert Systems Architect (IT Operations)
Brauerstrasse 50 v://49.721.91374.0
D-76135 Karlsruhe f://49.721.91374.225
Amtsgericht Montabaur HRB 6484
Vorstände: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich,
Robert Hoffmann, Andreas Hofmann, Markus Huhn, Hans-Henning Kettler,
Dr. Oliver Mauss, Jan Oetjen
Aufsichtsratsvorsitzender: Michael Scheeren
Please read the documentation before posting - it's available at:
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users