Quoting Peter Mueller <pmueller@xxxxxxxxxxxx>:
> Hello,
>
> > I've been using an LVS-NAT cluster for about 2-1/2 years,
> > since the VA-Linux 6.2
>
> Ditto, except LVS-DR from the start. New power supplies for FullOn 2230s
> are $500 now! (Thank God for eBay.)
I've retired all our 2230s. They've all had very strange failures and/or
buggy hardware issues pop up over the last year, so I've been replacing our
web/app servers with Penguin Relion 130s. GREAT cluster servers!
>
> > days, but I've never set up an LVS-DR cluster. Either I'm
> > missing something in
>
> On my LVS-DR setup I don't use loopback devices at all. AFAIK loopbacks are
> only for the real servers, and even there I use the transparent
> proxy-redirect method to bypass all that. E.g:
Yes, I know loopbacks are only for the real servers, but what if the real
servers are also the load balancers, as in this example?
http://www.ultramonkey.org/2.0.1/topologies/sl-ha-lb-eg.html
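For the archives, here's roughly what I understand that proxy-redirect
trick to look like on a realserver: instead of putting the VIP on lo and
fighting the ARP problem, you redirect packets addressed to the VIP into a
local socket. The VIP and port below are placeholders, and I haven't tried
this on a combined director/realserver box:

    # accept packets addressed to the VIP without configuring the VIP
    # on any local interface (example VIP and port)
    iptables -t nat -A PREROUTING -p tcp -d 10.0.0.100 --dport 3306 \
        -j REDIRECT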
I've been avoiding this explanation, but my purpose in doing this
configuration is to set up a poor man's HA database: two boxes running
database instances configured in MASTER/MASTER mode. Everyone seems to be
using heartbeat and drbd, but IMHO (and I'm gonna catch major flak on this,
so I apologize ahead of time), that is a terribly Neanderthal solution. I'd
like to use Keepalived to do database-listener-level checks as well as use
the wonderful VRRP failover between machines. If the primary DB instance
fails but server A keeps running fine, I have the DB instance on server B
configured as a "sorry_server" using LVS-DR, and vice versa. This way I get
near-instant DB failover, without the 30-second-to-5-minute failover window
for database integrity checks that Heartbeat/DRBD requires.
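To make that concrete, here's a sketch of the keepalived.conf I have in
mind for server A (all addresses, ports, and priorities are made up, and I
haven't run this exact config):

    vrrp_instance DB_VIP {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150
        advert_int 1
        virtual_ipaddress {
            10.0.0.100          # example VIP shared by both boxes
        }
    }

    virtual_server 10.0.0.100 3306 {
        delay_loop 5
        lb_algo rr
        lb_kind DR
        protocol TCP

        # if the local DB instance fails its check, traffic falls
        # through to the instance on server B
        sorry_server 10.0.0.2 3306

        real_server 10.0.0.1 3306 {
            # a plain TCP connect check; a MISC_CHECK script that
            # actually queries the database would give the true
            # listener-level check
            TCP_CHECK {
                connect_port 3306
                connect_timeout 3
            }
        }
    }

Server B would mirror this with the roles reversed.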
But, I digress ... :-)
-Ken