Hi Don,
> > Cisco knows the triangulation more which can be looked at as the LVS_DR
> > method.
>
> It is similar, according to:
> http://www.cisco.com/warp/public/cc/pd/si/11000/prodlit/cscnt_wi.htm
Interesting, I'll have to finish reading it when I get some spare time ...
> "Cisco CSS 11000 series switches use NAT peering to direct requests to the
> best site with the requested content based on URL or file type, geographic
> proximity, and server/network loads, avoiding the limitations of DNS-based
> site selection and the overhead HTTP redirect.
Ok, I didn't know about the CSS 11000. But generally speaking, the
triangulation method, also called asymmetric load balancing or direct
routing, has been implemented without NAT on the common Cisco
LocalDirector. Since I don't work with such devices anymore, my
knowledge of Cisco load balancers is quite dusty.
> If a Cisco CSS 11000 series switch receives a request for content and the
> cache or server hosting that content is down or busy, the switch uses
> information communicated among its peers (that is, other Cisco switches) at
> other points of presence (POPs) to locate the best site and server. When the
> best destination site is determined, the request is directed to that site and
> the source IP address is translated to the "new" IP address where the content
> resides.
Ok.
> Cisco NAT peering preserves the original source IP address from the client so
> that the new site can send the response directly back to the client. NAT
> peering acts as "triangulation protocol," allowing the response to be
> delivered directly to the user over the shortest Internet path."
Sort of a transparent proxying mode.
> So the triangulation is a function of "NAT peering", which is a Cisco
> proprietary function. Also, the switch is making decisions based on metrics,
> such as server/network loads and geographic locations. The NAT peering is
> transferring the request to another site, which is behaviour that is more
> like a combination of LVS_DR and LVS_TUN, but it's only activating that
> behaviour when the metrics indicate the need for it.
Thanks for the explanation.
> Heh, I don't have experience with all the products from each vendor. I don't
> doubt that any of them can be broken under the right circumstances or loads.
;) no problem, as you see, I missed some recent Cisco developments too.
> > I intend to disagree on that one considering the fact that you can use
> > one load balancer cluster to load balance for example 24 zones with 24
> > NICs when using LVS_DR. This you cannot do with LVS_NAT because of interrupt
> > latencies caused by interfering network drivers trying to jump into
> > the network BH for NAT. For high performance and cheap & reliable solutions
> > for datacenters, you don't have the choice.
>
> I don't understand what you mean. What does this mean, "24 zone with 24 NICs"?
Ok, suppose you want to build a hosting datacenter for a huge ISP. This
ISP has the resources and connections to acquire potential hosting
partners or customers which cannot afford a datacenter of their own but
want to be present on the Internet with bleeding edge technology, high
availability and security. Now, for one aspect of HA you decide to
create a DMZ for every new customer who wants to host his services in
this datacenter. You have two choices:
o For every DMZ zone behind the packet filter you buy one, maybe
  two, commercial load balancers per customer. So every customer
  gets the impression that he owns an lb of his own. The costs for
  the ISP providing this HA solution are too high compared to what
  they can charge the customer without losing important market
  share (yes, I know there are marketing terms for it, but I'm not
  a manager).
o You take a single linux box, or two for HA of the lb itself, and
  simulate every zone ("the DMZ of the corresponding customer") with
  a NIC. So for example, if you put 24 NICs into one lb, the ISP can
  sell the same load balancer 24 times but only pays for one piece
  of hardware. The end customer still gets the impression of having
  his own load balancer. If you write intelligent software to manage
  such a beast, you're definitely ready for the lb market.
Did I make my point clear? And this is where you run into
problems with NAT faster than with DR.
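To make this concrete, here is a minimal sketch of what two of those
24 zones could look like on the director. All addresses and interface
names are made up for illustration; the idea is simply one NIC and one
VIP per customer zone, with LVS_DR forwarding (-g) into that
customer's DMZ:

  # customer 1: zone on eth0, VIP 192.168.1.1, real servers in DMZ 1
  ifconfig eth0 192.168.1.254 netmask 255.255.255.0 up
  ifconfig eth0:1 192.168.1.1 netmask 255.255.255.255 up
  ipvsadm -A -t 192.168.1.1:80 -s wlc
  ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.10 -g -w 1
  ipvsadm -a -t 192.168.1.1:80 -r 192.168.1.11 -g -w 1

  # customer 2: zone on eth1, VIP 192.168.2.1, real servers in DMZ 2
  ifconfig eth1 192.168.2.254 netmask 255.255.255.0 up
  ifconfig eth1:1 192.168.2.1 netmask 255.255.255.255 up
  ipvsadm -A -t 192.168.2.1:80 -s wlc
  ipvsadm -a -t 192.168.2.1:80 -r 192.168.2.10 -g -w 1

  # ... and so on up to eth23. Each real server additionally needs
  # the VIP on a non-arping interface, which is the usual LVS_DR
  # arp problem.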
> I don't know how long the requests were sustained. The cluster was managed by
> a close friend of mine and the .com that owned it has gone under and the
> cluster was dismantled and auctioned off. Yes, they were dynamic (mostly). I
> don't think the db queries were long, the db servers were Sequent boxes (8 or
> 10 of them as I recall) with 8 p3/500s and Oracle.
To decide between NAT and DR you still have to do field testing and
tuning; every load balanced application has its tricky part. If db
queries take too long, for example, the connection entries may expire
from the NAT table in the kernel before the reply comes back. I only
mention it because I basically agree that NAT and DR can be put into
the same pot for most cases, but one has to keep the possible problems
in mind.
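As an example of the kind of tuning I mean: on a 2.2 kernel, where the
LVS_NAT connection entries live in the masquerading table, you can
raise the timeouts so that long-running db-backed connections don't
expire mid-request. The values below are made-up seconds for tcp,
tcpfin and udp:

  # make the established-tcp timeout outlast your longest db query,
  # at the price of a bigger connection table
  ipchains -M -S 7200 120 300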
> > Good point for the howto. Indeed, with nowadays Internet technology and
> > computer technology we should not give the reader the impression that as
> > soon as he wants to load balance a moderately surfed site he needs to
> > choose LVS_DR.
>
> Exactly. DR and TUN are valuable, but you have to have a VERY busy site, or
> other special circumstances to really need them.
It's a tradeoff between how much time the implementation manager gives
you to fiddle around with the arp problem and how saturated the lb
will really be if you don't choose DR or TUN. Managers tend to favour
low cost _and_ time friendly solutions, which don't mix well with the
DR approach when it's done for the first time. I would say: instead of
pissing everyone off trying to set up DR to impress the boss, you'd
rather go and impress him with a NAT setup done in 2 hours (including
testing).
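For the record, such a 2-hour NAT setup boils down to little more than
this (again a sketch with made-up addresses: VIP 10.0.0.1 towards the
clients, real servers on a private net behind the director):

  # the director must forward packets between its two legs
  echo 1 > /proc/sys/net/ipv4/ip_forward

  # virtual service on the VIP, weighted least-connection scheduling
  ipvsadm -A -t 10.0.0.1:80 -s wlc

  # real servers behind the director, masqueraded (-m = LVS_NAT)
  ipvsadm -a -t 10.0.0.1:80 -r 192.168.10.2:80 -m -w 1
  ipvsadm -a -t 10.0.0.1:80 -r 192.168.10.3:80 -m -w 1

The only thing to remember on the real servers is to point their
default gateway at the director (192.168.10.1 in this sketch).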
Best regards,
Roberto Nibali, ratz
--
mailto: `echo NrOatSz@xxxxxxxxx | sed 's/[NOSPAM]//g'`