Re: question about using MX's for load balancing

To: Kyle Sparger <ksparger@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: question about using MX's for load balancing
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Jeremy Hansen <jeremy@xxxxxxxxxxxxx>
Date: Thu, 30 Mar 2000 17:59:44 -0500 (EST)
Well, in my case the "business" is "email marketing" (I'm just an admin,
don't get your sniper rifles out, please).  So it's not so much whether a
person will get their spam on time, but rather whether we can push out
enough email fast enough to finish our campaigns within a 24-hour period,
which is why we have to optimize wherever possible.

In our case we're actually using the load balancing on the bounce returns;
those hosts do a lot of bounce processing and reporting based on bounces,
replies, etc.  The company I'm working for bought another company, and that
company's admins say that round-robin DNS with MX's of the same priority
makes no difference compared to using a load balancer.  That may be true
under perfect conditions, but as you say, DNS allows much less control over
potentially bad situations.
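The difference is easy to demonstrate with a toy simulation (purely hypothetical numbers and host names, not a measurement of any real MTA's behavior): clients independently picking among equal-priority MX's leave some hosts busier than others, while a round-robin director deals connections out evenly.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is repeatable
HOSTS = ["mx1", "mx2", "mx3", "mx4"]
CLIENTS = 10_000

# Equal-priority MX's: each remote MTA picks a host on its own.
dns_counts = Counter(random.choice(HOSTS) for _ in range(CLIENTS))

# LVS director with a round-robin scheduler: connections are dealt out in turn.
lvs_counts = Counter(HOSTS[i % len(HOSTS)] for i in range(CLIENTS))

dns_spread = max(dns_counts.values()) - min(dns_counts.values())
lvs_spread = max(lvs_counts.values()) - min(lvs_counts.values())

print("DNS spread between busiest and idlest host:", dns_spread)
print("LVS spread between busiest and idlest host:", lvs_spread)  # 0: perfectly even
```

In real life the DNS picture is lumpier still, since resolvers cache answers and big relay sites reuse one lookup for many messages.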

-jeremy

> > I'm assuming once an MTA determines which host it decided to go to
> 
> I'd say this part right here is the whole reason to use LVS over DNS
> load balancing, in general. When you use DNS to balance the load, you cede
> control to the client/MTA/luck/whatever as to how well it gets balanced.  
> 
> What if a thousand clients decide, at the same time, to send mail to
> the first MX in the list?  That server gets hammered -- it's hard to
> predict when (if?) it'll happen, and how bad it will be.
> 
> With LVS, you retain that control.  Everyone outside goes to the same
> place, and then the director redistributes the load based on criteria you
> specify and control.  
> 
> Of course, SMTP is not a service that is "live" like HTTP.  People don't
> usually get too impatient if email gets delayed an hour or so, so *I*
> personally wouldn't bother with LVS for receiving email.  Your priorities
> (and budget) may be different though :)
> 
> Kyle Sparger
> 
> On Thu, 30 Mar 2000, Jeremy Hansen wrote:
> 
> > 
> > Basically I want some pros and cons on using MX's for load balancing mail
> > hosts.  I know this doesn't seem LVS related, but it is, because I'm
> > currently using LVS for this operation and it's working perfectly.
> > Someone I met with yesterday is using another method, MX's with the same
> > priority, to do basically the same thing.  When he mentioned it, it
> > didn't sit well with me, though I couldn't put my finger on why; mainly
> > I think it has to do with failover in case one of these MX hosts is
> > down.
> > 
> > In our particular situation our goal is high volume mail.  The goal is to
> > get as much mail out as fast as possible.
> > 
> > In an MX load-balancing situation, it seems to me that although it may
> > work, it's not as efficient toward that goal when a machine has failed.
> > 
> > This is my understanding of how it would work using MX's: if the host
> > that DNS determines to be the target has failed, the mail gets deferred
> > to a secondary MX host and is held until the original target MX host is
> > back up, at which point the secondary pushes that mail over to the
> > original target.  So mail is deferred and delayed.  Will this be the
> > case even if the MX priorities are the same?  I'm assuming once an MTA
> > determines which host it decided to go to, this address doesn't change
> > even with the same priority levels.
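The MX fallback behavior described above can be sketched roughly as follows (host names and the `deliver` callback are hypothetical; real MTAs implement this per RFC 974, sorting by preference and randomizing ties):

```python
import random

def order_mx(records):
    """Sort MX records by preference; shuffling first means equal-preference
    hosts end up in random order, which is how same-priority MX's spread
    load by chance."""
    random.shuffle(records)                      # randomize ties
    return sorted(records, key=lambda r: r[0])   # stable sort keeps shuffled tie order

def try_delivery(records, deliver):
    """Walk the preference-ordered list, falling through to the next host on
    failure.  If every host fails, the MTA queues the message and retries
    later -- that's the deferral/delay described above."""
    for pref, host in order_mx(list(records)):
        if deliver(host):
            return host
    return None  # all hosts down: message deferred, not yet bounced

# Hypothetical zone: two equal-priority primaries plus a lower-priority backup.
mxs = [(10, "mx1.example.com"), (10, "mx2.example.com"), (20, "backup.example.com")]

# Simulate mx1 being down: delivery falls through to the other primary.
result = try_delivery(mxs, lambda h: h != "mx1.example.com")
print("delivered via", result)  # delivered via mx2.example.com
```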
> > 
> > In an LVS situation (assuming you have a good setup and monitoring to take
> > downed hosts out of the LVS pool) this wouldn't happen.  You hit the
> > virtual address and it goes to a valid "up" machine and there is no delay.
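That "good setup and monitoring" caveat is the crux.  A minimal sketch of the health-check idea, with hypothetical server addresses and only the Python standard library (a real LVS deployment would have a monitor such as ldirectord drive `ipvsadm` to add and remove real servers):

```python
import socket

POOL = {"192.0.2.11", "192.0.2.12", "192.0.2.13"}  # hypothetical real servers

def smtp_alive(host, port=25, timeout=2.0):
    """Health check: can we open a TCP connection to the SMTP port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_servers(pool, check=smtp_alive):
    """Keep only the servers that pass the check; these stay in the
    director's table, so clients are never sent to a dead host."""
    return {host for host in pool if check(host)}

# Simulate one host being down without touching the network:
up = healthy_servers(POOL, check=lambda h: h != "192.0.2.12")
print(sorted(up))  # ['192.0.2.11', '192.0.2.13']
```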
> > 
> > So LVS still seems better, but I wanted to ask others in case my
> > assumptions are wrong.
> > 
> > Thanks
> > -jeremy
> > 


http://www.xxedgexx.com | jeremy@xxxxxxxxxxxx
---------------------------------------------


