Re: DNS Server Cluster

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: DNS Server Cluster
From: "Simon Pearce" <sp@xxxxxxxx>
Date: Tue, 28 Nov 2006 11:35:46 +0100
Hi,


First of all, thanks for all of your postings. I will try to answer
some of your questions.

 

I'm most interested to know what the CPU load is, how much memory the
system has left, what the network load is (in packets/s), and of course,
how many connections are in the lvs table at the time.

The CPU load is between 1 and 2. Here is the output of free on the first
director:

lvs01 ~ # free -m
             total       used       free     shared    buffers     cached
Mem:          1011        243        768          0        163         20
-/+ buffers/cache:         58        952
Swap:         1953          0       1953

How would I go about showing you all the connections in the LVS table? I
normally check them with ipvsadm -L --stats, but that output is quite
long due to the large number of IPs.
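
If just the size of the connection table is enough, something like this
is what I had in mind (assuming the -c connection listing is available
in our ipvsadm build):

# list the connection table, one line per connection, and count the entries
ipvsadm -L -n -c | wc -l

# the same table is also exported under /proc
wc -l /proc/net/ip_vs_conn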


What have you done to confirm the source of the problem is the LVS
cluster and not the realservers?

I think it is the LVS cluster, because I can still query the realservers
even during timeout periods on certain IP addresses.


When this 'seems' to happen, can you reproduce it yourself? E.g. you
query the VIPs and get timeouts? And what if you query the realservers
from the directors?

Querying the realservers from the director seems to work. It does slow
down sometimes; then it takes about 4000 msec to answer a DNS query.
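
For reference, this is roughly how I time a query from the director
against a realserver (192.168.1.10 is just a placeholder for one of our
realservers); dig prints the query time in msec:

# ask the realserver directly, one attempt with a 5 second timeout
dig @192.168.1.10 example.com A +tries=1 +time=5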

*However* - Joe mentioned it already - I built it to balance on fwmarks,
not on TCP or UDP. Incoming packets were marked in the netfilter
'mangle' table according to protocol and port, and the LVS was then
built up from the corresponding fwmarks.

Do you think using fwmarks would be a better approach to the problem?
How would I go about setting up fwmarks? If I understand you right, all I
need to do is make sure all traffic for the DNS IPs hits the firewall;
the firewall marks the packets according to their destination and passes
them on to the LVS, which then makes its forwarding decision by looking
at the packet's mark. So I don't need to set up any VIPs on the director?
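
A minimal sketch of how I understand it, with 10.0.0.0/24 standing in
for our block of DNS service addresses and 192.168.1.10/11 for two of
the realservers:

# mark all DNS traffic for the service addresses in the mangle table
iptables -t mangle -A PREROUTING -d 10.0.0.0/24 -p udp --dport 53 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -d 10.0.0.0/24 -p tcp --dport 53 -j MARK --set-mark 1

# one virtual service built from the fwmark instead of 250 separate entries
ipvsadm -A -f 1 -s wlc
ipvsadm -a -f 1 -r 192.168.1.10 -m
ipvsadm -a -f 1 -r 192.168.1.11 -m

As far as I understand it the addresses still have to be routed to the
director so the packets arrive there in the first place; it is only the
per-IP ipvsadm entries that go away.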

For my education, why do you need a DNS server with 250 IPs?
Is this because it's easier to set up 250 IPs than to have everyone
change the DNS entry in their domain registration?

Because quite a few of our customers require their own DNS servers with
their own IP address. A lot of them don't really need it, as you quite
rightly suggest, but it looks good to them anyway :)

Can you do a test with conntrack off?

Yes, I was thinking of trying a test without conntrack this week.
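
If I do, my plan would be something like this, assuming the kernel on
the directors has the raw table and the NOTRACK target (otherwise I
would unload the conntrack modules instead):

# exempt the DNS traffic from connection tracking
iptables -t raw -A PREROUTING -p udp --dport 53 -j NOTRACK
iptables -t raw -A PREROUTING -p tcp --dport 53 -j NOTRACK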

Can you setup ipvsadm with a single fwmark instead of all the IPs? That
would shift the responsibility for handling all the IPs to iptables,
rather than ipvsadm.

I think I would need two fwmarks, one for all the primary connections
and one for the secondary connections. I will look into this.
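
Roughly what I have in mind, with 10.0.0.0/25 and 10.0.0.128/25 standing
in for the primary and secondary address ranges (the matching TCP rules
and realserver entries would look like the fwmark sketch above):

# primary DNS addresses get mark 1, secondary ones mark 2
iptables -t mangle -A PREROUTING -d 10.0.0.0/25 -p udp --dport 53 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -d 10.0.0.128/25 -p udp --dport 53 -j MARK --set-mark 2

# one virtual service per mark instead of one per IP
ipvsadm -A -f 1 -s wlc
ipvsadm -A -f 2 -s wlc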

Do you have a large iptables rule set that might be slowing things down?
iptables scales with O(n^2); still, 250 IPs doesn't seem like a lot.

No, this is the output of iptables -L:

lvs01 ~ # iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  192.168.1.0/24       anywhere
ACCEPT     all  --  anywhere             192.168.1.0/24

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

All I really use is IP masquerading so that my realservers can access
the net to receive updates; everything else is left open.
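
It is basically just the standard masquerading rule (eth0 as the
external interface is an assumption on my part, ours may differ):

# let the realservers on the internal network reach the internet for updates
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE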
