Talk by Radware (www.radware.com)
at NCSA www.ncsysadmin.org 10 Oct 2005
Unfortunately I was the guy getting the pizzas for the
meeting, so I missed most of the talk (which I wanted
to see).
Radware makes a commercial loadbalancer. It is used by
eBay and AccuWeather.
Radware has a NAT loadbalancing director that appears to
function similarly to an LVS-NAT director. The servers can
have private IPs.
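As a rough sketch of what a NAT director does (my own
illustration, not Radware's code and not the LVS code): the
director owns the public VIP, picks a realserver for each new
connection, rewrites the destination to the realserver's
private RIP on the way in, and rewrites the source back to the
VIP on the way out. All addresses below are invented.

import itertools

VIP = ("203.0.113.10", 80)                            # public virtual service (invented)
REALSERVERS = [("10.0.0.11", 80), ("10.0.0.12", 80)]  # private realservers (invented)
rr = itertools.cycle(REALSERVERS)
conn_table = {}                                       # (client_ip, client_port) -> realserver

def inbound(client, dst):
    """Client packet arriving at the VIP: pick a realserver for the
    connection (round robin here) and rewrite the destination address."""
    assert dst == VIP
    realserver = conn_table.setdefault(client, next(rr))
    return (client, realserver)     # forwarded packet now addressed to the private RIP

def outbound(realserver, client):
    """Reply from the realserver: rewrite the source back to the VIP
    so the client only ever sees the public address."""
    assert conn_table.get(client) == realserver
    return (VIP, client)

# e.g. inbound(("198.51.100.7", 40000), VIP)
#      -> (("198.51.100.7", 40000), ("10.0.0.11", 80))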
Radware's loadbalancing director is only a small part of
their offering. Radware has boxes that filter based on
packet content (looking for viruses) and that sit in the
flow of packets (possibly before the director, possibly
after - I didn't find this out). They have boxes which just
handle SYN floods. These use SYN cookies and do a
statistical analysis of the packets, letting some through
to see which machines reply to the SYN-ACKs (the sketch
below shows the general SYN cookie idea).
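For reference, here is a minimal sketch of the standard SYN
cookie technique (not Radware's implementation): the state for
a half-open connection is encoded into the sequence number of
the SYN-ACK, so nothing has to be stored until the client
proves itself by echoing the cookie back. The secret and the
field layout are illustrative.

import hashlib, socket, struct, time

SECRET = b"change-me"                      # per-box secret (illustrative)

def _hash24(saddr, sport, daddr, dport, t):
    msg = struct.pack("!4sH4sHI", socket.inet_aton(saddr), sport,
                      socket.inet_aton(daddr), dport, t) + SECRET
    return int.from_bytes(hashlib.sha256(msg).digest()[:3], "big")   # 24 bits

def make_cookie(saddr, sport, daddr, dport):
    """Initial sequence number for our SYN-ACK: 8 bits of coarse time
    plus a 24 bit keyed hash of the connection 4-tuple."""
    t = int(time.time()) >> 6              # 64 second time slots
    return ((t & 0xFF) << 24) | _hash24(saddr, sport, daddr, dport, t)

def check_cookie(cookie, saddr, sport, daddr, dport, max_age=2):
    """True if the cookie the client echoed back was generated by us
    recently; only then is a real connection set up."""
    t_now = int(time.time()) >> 6
    for age in range(max_age + 1):
        t = t_now - age
        if ((t & 0xFF) == (cookie >> 24) and
                (cookie & 0xFFFFFF) == _hash24(saddr, sport, daddr, dport, t)):
            return True
    return False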
Radware has a GUI to control the loadbalancer, which can do
things like shutting down some of the backend servers for
new connections at some time in the future (e.g. 10pm), so
that by 8am the next morning these machines have few or no
connections and can be taken offline for servicing (see the
sketch after this paragraph). Much of their hardware is
ASIC based.
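A sketch of that scheduled drain, in LVS-like terms (the
schedule format, hostnames and times are made up): after the
cut-off time a realserver simply stops being a candidate for
new connections, while established connections are left to
finish on their own.

import datetime

# servers to quiesce and when to stop giving them NEW connections (invented)
DRAIN_AT = {"rs2": datetime.datetime(2005, 10, 10, 22, 0)}

def eligible_for_new_connections(servers, now=None):
    """Realservers the director may still hand new connections to.
    A drained server keeps its existing connections, so by morning it
    is idle and can be taken offline for servicing."""
    now = now or datetime.datetime.now()
    return [s for s in servers if not (s in DRAIN_AT and now >= DRAIN_AT[s])]

# e.g. eligible_for_new_connections(["rs1", "rs2", "rs3"],
#          now=datetime.datetime(2005, 10, 10, 23, 0))  ->  ["rs1", "rs3"]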
Health checking seems to be done from the director, and
checks are made through to 3rd-Tier components of the
backend servers (<emphasis>e.g.</emphasis> database machines
behind the webservers that the client doesn't directly
connect to).
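A hedged sketch of what a check "through to the 3rd tier"
might look like (the URL and timeout are invented, not
anything Radware documents): instead of just connecting to
port 80, the director fetches a page that the webserver can
only produce by querying its database.

import urllib.request

def realserver_healthy(rip, check_path="/healthcheck/db", timeout=3):
    """True only if the webserver answered AND the page it returned
    required a successful database query behind it."""
    try:
        with urllib.request.urlopen("http://%s%s" % (rip, check_path),
                                    timeout=timeout) as reply:
            return reply.status == 200
    except OSError:
        return False     # web tier down, or it couldn't reach the 3rd tier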
Each local NAT'ed load balancing setup is itself a member of
a distributed DNS-based load balancer. So www.foo.net might
have a loadbalanced set of servers at different sites, e.g.
London, New York, San Francisco and Tokyo. Each local setup
has an authoritative nameserver for www.foo.net.
The way it works is:
o A client in Scotland asks for the IP of www.foo.net.
o The client's nameserver doesn't know the IP and asks a
root server for the machine authoritative for foo.net.
o The root server has a list of 4 authoritative nameservers
for foo.net and selects the next nameserver by round robin.
If the next one in its list is in New York, it tells the
client's nameserver to go query the nameserver in New York.
o The New York nameserver for foo.net measures the packet
latency to the client's nameserver and then returns the VIP
for www.foo.net that is associated with the New York
installation. The latency is propagated to
the other foo.net nameservers (in Tokyo, London and San
Francisco).
o Sometime later, after the client's nameserver has flushed
the IP entry for www.foo.net from its cache, another (or the
same) client using the same nameserver asks for the IP of
www.foo.net again, and this time the root server will
possibly send the request to another of the sites (say
London). The London nameserver already knows the latency
from New York to the client's nameserver (without knowing
where the client is), measures its own latency to that
nameserver, sees that its latency is lower than New York's,
and returns the IP of its copy of www.foo.net to the
client. The London nameserver also updates the latency
tables at the other sites (New York, San Francisco and
Tokyo).
o If the next request from the client's nameserver is sent
to Tokyo, then the Tokyo nameserver measures its own
latency, updates the latency tables in all the other
nameservers and, seeing that the latency from the London
site is the lowest, returns the IP of the London copy of
www.foo.net.
In this way the four nameservers accumulate the latencies to
all the nameservers in the world. This works provided that
the latencies don't change a lot with time of day (or
throughput). The amount of memory required to do this must
be small - there can't be more than a million nameservers,
can there? A million 8-bit latencies is only about 1MByte,
not much to store in memory.
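Here is a sketch of the idea (my reconstruction, not Radware's
code): each site's nameserver keeps a table of measured
latencies per querying nameserver, replicates its own
measurements to the other sites, and answers with the VIP of
whichever site currently has the lowest known latency to that
resolver. The site names and VIPs are invented.

SITES = {"newyork": "192.0.2.1", "london": "192.0.2.2",
         "sanfrancisco": "192.0.2.3", "tokyo": "192.0.2.4"}   # site -> VIP (invented)

# latency[site][resolver_ip] = measured RTT in ms, replicated between the sites
latency = {site: {} for site in SITES}

def record_measurement(site, resolver_ip, rtt_ms):
    """Store a site's own measurement; in the real product this update
    is then propagated to the nameservers at the other sites."""
    latency[site][resolver_ip] = rtt_ms

def answer_query(local_site, resolver_ip, local_rtt_ms):
    """Answer a query from resolver_ip with the VIP of the best known site."""
    record_measurement(local_site, resolver_ip, local_rtt_ms)
    best = min(SITES, key=lambda s: latency[s].get(resolver_ip, float("inf")))
    return SITES[best]

# e.g. answer_query("newyork", "203.0.113.53", 120) -> "192.0.2.1" (only NY known)
#      answer_query("london",  "203.0.113.53",  40) -> "192.0.2.2" (London is closer)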
SSL accelerators
When I commented to the speaker that the main reason to use
SSL accelerators is to have only one copy of the
certificate, rather than one on each realserver, they said
"it's also for certificate management". Presumably some
sites have large numbers of certificates. (They didn't
disagree with my statement.)
The SSL accelerators in the Radware design don't sit between
the director and the realservers, or in front of the
director (between the client and the director); they sit at
the same level as the other realservers. An https request is
balanced by the director to an accelerator, which decrypts
the packets and sends the decrypted packets back to the
director for loadbalancing as http traffic. Since the
director is a NAT balancer, the return http traffic from the
http servers goes through the director, back to the SSL
accelerator (which re-encrypts it), back through the
director again and then back to the client.
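Just to make the double pass explicit, here is a trace of the
hops as I understand them (the function names are illustrative,
and decrypt/encrypt/serve are stubs, not a TLS implementation):

def decrypt(x): return x                  # stubs so the trace runs
def encrypt(x): return x
def serve(x):   return "reply to " + x

def handle_https(request):
    hops = ["client -> director (https to the VIP)",
            "director -> SSL accelerator (balanced like any realserver)"]
    plaintext = decrypt(request)          # accelerator terminates SSL
    hops += ["SSL accelerator -> director (now plain http)",
             "director -> http realserver"]
    reply = serve(plaintext)
    hops += ["http realserver -> director (NAT return path)",
             "director -> SSL accelerator"]
    reply = encrypt(reply)                # re-encrypted for the client
    hops += ["SSL accelerator -> director -> client (https)"]
    return reply, hops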
Being able to have the SSL accelerator as a realserver in
LVS would require the realserver to be a client of the
director, something that we can do for LVS-NAT, but not for
LVS-DR. If you need a realserver to be in the path in both
the inward and outward directions (like an SSL accelerator),
then you will have to use LVS-NAT.
Joe
--
Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map
generator at http://www.wm7d.net/azproj.shtml
Homepage http://www.austintek.com/ It's GNU/Linux!