RE: [LVS - NAT] alternatives

To: 'Don Hinshaw' <dwh@xxxxxxxxxxxxxxxxx>
Subject: RE: [LVS - NAT] alternatives
Cc: "'lvs-users@xxxxxxxxxxxxxxxxxxxxxx'" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Peter Mueller <pmueller@xxxxxxxxxxxx>
Date: Sun, 9 Sep 2001 23:58:51 -0700
hi don,

very eloquent points!  my only real contention with your NAT perspective is
that sometimes servers need to be on the same subnet with public IPs, so
LVS-DR is viable from at least one architectural perspective.
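
for the record, the director side of DR is only a couple of ipvsadm rules;
a rough sketch with made-up 192.0.2.x addresses:

    # director: add the VIP service, reach real servers by direct routing (-g)
    ipvsadm -A -t 192.0.2.100:80 -s wlc
    ipvsadm -a -t 192.0.2.100:80 -r 192.0.2.11:80 -g
    ipvsadm -a -t 192.0.2.100:80 -r 192.0.2.12:80 -g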

Hmm well you've made me less shy about LVS-NAT, but I still favor DR.
Call me stubborn.. :)

-----Original Message-----
From: Don Hinshaw
To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Sent: 9/9/01 8:37 AM
Subject: RE: [LVS - NAT] alternatives

Peter Mueller <pmueller@xxxxxxxxxxxx> said:

> I definitely wouldn't use nat, even
> on 2.4.  Sorry  I just don't see the advantage on a site expecting 20
> million hits a day.

More important than not seeing an advantage is that you also don't see any
disadvantage.

NAT is not -bad-. The additional latencies are mostly irrelevant. The only
real issue is the sheer number of packets being handled by the network
interfaces (as pointed out by Joe already).
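
To make the packet-count point concrete: under NAT the real servers sit on
a private subnet behind the director and route their replies back through
it, so both directions cross the director's NICs. A minimal ipvsadm sketch
(addresses made up):

    # director: virtual HTTP service, two masqueraded (-m) real servers
    ipvsadm -A -t 192.0.2.100:80 -s wlc
    ipvsadm -a -t 192.0.2.100:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.0.2.100:80 -r 10.0.0.12:80 -m
    # each real server uses the director's inside address as its default
    # gateway, so replies get de-masqueraded on the way back out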

All of the major commercial load balancers (Radware WSD, F5 BigIP, Alteon
180 series and Cisco LocalDirector) are NAT boxes. They work just fine.
They also all have optional gigabit interfaces. I worked at a company that
was handling 30 million pages/day (~4 megabytes/sec average outbound
traffic) on 10 Sun 220Rs behind Alteons. NAT was definitely -not- an issue.

As far as I'm concerned, DR and TUN are nice toys, but not very useful.
They don't offer significant advantages over NAT (in most applications) and
they require too much special handling to manage. I can see TUN -seeming
to- have an advantage in being able to balance across different
datacenters, but that's largely offset by the single point of failure.
I.e., if the datacenter where the directors reside goes offline, all the
clusters distributed to other datacenters won't be getting any traffic
anyway, unless you change the DNS to point to a director in another
datacenter. In which case, the F5 3DNS, which does metrics, would be a
better solution than tunneling.
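
For completeness, the tunneling variant looks roughly like this (made-up
addresses; the real server may be in a remote datacenter):

    # director: forward to a remote real server via IPIP encapsulation (-i)
    ipvsadm -A -t 192.0.2.100:80 -s wlc
    ipvsadm -a -t 192.0.2.100:80 -r 198.51.100.20:80 -i
    # remote real server: terminate the tunnel and hold the VIP itself
    modprobe ipip
    ifconfig tunl0 192.0.2.100 netmask 255.255.255.255 up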

DR also seems to offer an advantage, but the packets are coming in through
a router and switch, and the return packets, though they bypass the
director, are almost certainly going back out through the same switch and
router. The cost of having the director rewrite the outbound packet
headers, as NAT does, isn't enough to justify the added complexity of a DR
setup.
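
The complexity I mean is mostly the ARP problem: with DR every real server
must hold the VIP to accept the unrewritten packets, yet must not answer
ARP for it on the shared segment. Roughly, on each real server (the
"hidden" sysctl below only exists with Julian Anastasov's kernel patch;
later kernels provide arp_ignore/arp_announce for the same job):

    # real server: VIP on loopback with a host netmask so it isn't routed
    ifconfig lo:0 192.0.2.100 netmask 255.255.255.255 up
    # suppress ARP replies for the VIP (needs the "hidden" patch)
    echo 1 > /proc/sys/net/ipv4/conf/all/hidden
    echo 1 > /proc/sys/net/ipv4/conf/lo/hidden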

The only real reason to even use DR is if the number of packets in/out
exceeds what a gigabit connection can handle. But at what point does that
happen? I've personally seen a cluster of ~140 real servers behind an F5
BigIP with gigabit interfaces that was handling > 20 megabytes/sec
sustained outbound traffic, most of which was small packets. I don't know
how many pages/day this was, but I can probably safely assume that it was
about 5 times the system that I used to manage, so I can guess that it was
somewhere around 150 million pages/day. This was with NAT, and with a lot
of special packet-handling scripts due to the complex multi-tier
configuration of the cluster, and yet the site was very fast. NAT was not a
problem.
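
That guess is just linear scaling from my old site's numbers; a quick
back-of-envelope check:

    # old site: 30 million pages/day at ~4 MB/s average outbound
    echo $((30000000 / 86400))   # ~347 pages/sec
    echo $((4000000 / 347))      # ~11.5 KB per page on average
    # 5x that gives ~150 million pages/day at ~20 MB/s, consistent with
    # what the F5 cluster was pushing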

I think that the HOWTO does a good job of pointing out the advantages of DR
and TUN, but it tends to give people the impression that they are superior
and that NAT is inferior, which it really isn't. DR and TUN are handy -if
you need them-. Do you need them? Probably not.

-=dwh=-



_______________________________________________
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://www.in-addr.de/mailman/listinfo/lvs-users

