To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] Possibly beating a dead horse - IPVS-NAT - 1 Director, 2 Realservers
From: Graeme Fowler <graeme@xxxxxxxxxxx>
Date: Mon, 02 Jun 2008 10:50:32 +0100
Hi

On Sat, 2008-05-31 at 09:14 -0500, Memblin wrote:
>  If I remove the realserver from the ipvs config that is currently getting
> all the traffic (say num 1), the traffic does go to the other realserver
> (say num 2). Then when I put number 1 back into the ipvs configuration
> the traffic stays with number 2.  If I do nslookups from a windows
> box it looks like Windows actually asks for like 8 things per single
> request and those get spread out between both servers normally.

We need to look a bit more deeply here at what's happening.

Firstly, you're using NAT - this makes monitoring things a bit easier
because the director sees all packets in both directions and so has an
idea of connection state (in -DR it can have an idea of state, but it
can get a little confused if packets go astray, which is why -DR tends
to report "inactconn" rather than "actconn").
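You can see the director's view of that state with ipvsadm. A rough
sketch - nothing here is specific to your setup:

  # List the virtual service table without resolving names; the
  # ActiveConn and InActConn columns are the state counters I mean
  ipvsadm -L -n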

Secondly, you're configuring a connectionless service. DNS over UDP is
"fire and forget" - the application is responsible for retries, rather
than the OS protocol stack - and (this is the important bit) UDP is
stateless.

If you look at the connections from a single client on an unloaded
system, you're likely to see exactly what you're seeing - every packet
goes to the same host, because LVS treats them as a "session".
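You can watch those "sessions" in the director's connection table -
something like this (again, not specific to your config) should show
the UDP entries and how long each has left before it expires:

  # Dump the connection table; UDP entries appear here with an expiry
  # timer, which is what makes them behave like sessions
  ipvsadm -L -n -c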

On the balance of probabilities, a DNS server will receive many
thousands of queries from many hundreds/thousands (or more) of hosts.
Over time from a cold start to fully loaded, they will be shared out
amongst the realservers according to the scheduler being used and the
weight assigned (a word of advice - set the weights to something higher
than 1. It makes future tuning *far* easier if you start with everything
having a weight of 100, say).
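Something along these lines, for instance - the VIP and realserver
addresses are placeholders, so substitute your own, and keep whichever
scheduler you're already using:

  # UDP/53 virtual service with the weighted least-connection scheduler
  ipvsadm -A -u 192.168.1.10:53 -s wlc

  # Two NAT (masquerading) realservers, both starting at weight 100
  ipvsadm -a -u 192.168.1.10:53 -r 10.0.0.1:53 -m -w 100
  ipvsadm -a -u 192.168.1.10:53 -r 10.0.0.2:53 -m -w 100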

You could also look at "ipvsadm --set" and tune down the UDP timeout to
something like one or two seconds.
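If I remember the syntax correctly it takes three values (tcp, tcpfin,
udp) and a zero leaves that timeout untouched, so something like:

  # Leave the TCP and TCP-FIN timeouts alone, set the UDP timeout to 1s
  ipvsadm --set 0 0 1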

Unfortunately, using a small number of clients (where small is some
number greater than 0) doesn't give real-world results.

Graeme


