Re: RedHat ES3 LVS-Nat - Arp issues

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: RedHat ES3 LVS-Nat - Arp issues
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Tue, 28 Sep 2004 00:41:29 +0200
Hello,

> I have read through most of the archives and have not been able
> to find an answer to my problem. I must be going blind from lack
> of sleep, so please be gentle if I've overlooked something.

... so we are always ;).

> This is an installation in our intranet.

Do you use Hubs or Switches?

> There are two win2k IIS real servers behind the LVS.

> ip: 172.24.24.21
> nm: 255.255.255.0
> gw: 172.24.24.1

gw must be the LVS then.
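A quick way to verify that (just a sketch, assuming tcpdump is available on the director) is to watch the inside interface while a client hits the VIP; both directions of the HTTP traffic should pass through the director:

  # on the active director, 172.24.24.21 being one of the real servers
  tcpdump -n -i eth1 host 172.24.24.21 and port 80

If the replies take another path, LVS-NAT cannot rewrite them and the connections will hang.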

> I am getting copious amounts of ARPing and caching at eth0 on both LVS
> routers. I'm expecting 4000 users to go through this LVS; will that much
> ARP traffic on the eth0 side kill connections? I have already increased
> the ARP cache size to 4096, but I'm still getting overflows.

Which settings did you perform exactly?

> Sep 23 13:11:45 wbnel01a kernel: NET: 30 messages suppressed.
> Sep 23 13:11:45 wbnel01a kernel: Neighbour table overflow.
> Sep 23 13:11:49 wbnel01a kernel: NET: 1 messages suppressed.
> Sep 23 13:11:49 wbnel01a kernel: Neighbour table overflow.
> Sep 23 13:11:52 wbnel01a kernel: NET: 19 messages suppressed.
> Sep 23 13:11:52 wbnel01a kernel: Neighbour table overflow.
> Sep 23 13:11:58 wbnel01a kernel: NET: 14 messages suppressed.
> Sep 23 13:11:58 wbnel01a kernel: Neighbour table overflow.

What are your gc_thresh* settings? How big is your neighbour table?
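For reference, the limits live under /proc/sys/net/ipv4/neigh/default/. A minimal sketch of how to read and raise them (the numbers below are only illustrative, pick them according to the number of hosts you really see on that segment):

  # current thresholds and current table size
  cat /proc/sys/net/ipv4/neigh/default/gc_thresh1
  cat /proc/sys/net/ipv4/neigh/default/gc_thresh2
  cat /proc/sys/net/ipv4/neigh/default/gc_thresh3
  ip neigh show | wc -l

  # raise them, e.g. for a few thousand directly connected hosts
  echo 1024 > /proc/sys/net/ipv4/neigh/default/gc_thresh1
  echo 4096 > /proc/sys/net/ipv4/neigh/default/gc_thresh2
  echo 8192 > /proc/sys/net/ipv4/neigh/default/gc_thresh3

gc_thresh3 is the hard limit; the overflow messages appear once the table hits it.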

> primary = 10.0.2.32
> primary_private = 172.24.24.2
> service = lvs
> backup_active = 1
> backup = 10.0.2.33
> backup_private = 172.24.24.3
> heartbeat = 1
> heartbeat_port = 539
> keepalive = 4
> deadtime = 8
> network = nat
> nat_router = 172.24.24.1 eth1:1
> nat_nmask = 255.255.255.0
> debug_level = NONE
> virtual gnetest {
>      active = 1
>      address = 10.0.1.99 eth0:1
>      vip_nmask = 255.255.248.0

why not 255.255.255.255?

>      port = 80
>      persistent = 3600

do you need such a high persistency?
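3600 seconds pins every client address to one real server for an hour and keeps a template entry in the connection table for at least that long. If you were setting the service up by hand, a lower timeout would look roughly like this (VIP and scheduler taken as examples, adjust to your setup; the second real server is added the same way):

  # NAT forwarding (-m), 5 minutes of persistence instead of an hour
  ipvsadm -A -t 10.0.1.99:80 -s wlc -p 300
  ipvsadm -a -t 10.0.1.99:80 -r 172.24.24.21:80 -m

In the config file above this corresponds to the persistent = value.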

> -----
> eth0      Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86
>           inet addr:10.0.2.32  Bcast:10.0.7.255  Mask:255.255.248.0
> eth0:1    Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86
>           inet addr:10.0.1.99  Bcast:10.0.7.255  Mask:255.255.248.0

The VIP should have 255.255.255.255 as a mask.
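In the config above that is the vip_nmask line; done by hand (just a sketch, the eth0:1 label taken from your ifconfig output) it would be:

  # bring the VIP up as a single host address on an eth0 alias
  ifconfig eth0:1 10.0.1.99 netmask 255.255.255.255 broadcast 10.0.1.99 up
  # or, with iproute2
  ip addr add 10.0.1.99/32 dev eth0 label eth0:1

That way the VIP is a plain host address instead of a second interface address for the whole 10.0.0.0/21.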

> eth0:88   Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86
>           inet addr:10.0.1.88  Bcast:10.0.7.255  Mask:255.255.248.0
> eth0:89   Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86
>           inet addr:10.0.1.89  Bcast:10.0.7.255  Mask:255.255.248.0

What are eth0:88 and eth0:89 for?

> eth1      Link encap:Ethernet  HWaddr 00:0D:60:9C:08:87
>           inet addr:172.24.24.2  Bcast:172.24.24.255  Mask:255.255.255.0
> eth1:1    Link encap:Ethernet  HWaddr 00:0D:60:9C:08:87
>           inet addr:172.24.24.1  Bcast:172.24.24.255  Mask:255.255.255.0

I presume this is the active one.

> eth0      Link encap:Ethernet  HWaddr 00:0D:60:9C:EB:1A
>           inet addr:10.0.2.33  Bcast:10.0.7.255  Mask:255.255.248.0
> eth1      Link encap:Ethernet  HWaddr 00:0D:60:9C:EB:1B
>           inet addr:172.24.24.3  Bcast:172.24.24.255  Mask:255.255.255.0
>
> Have I missed anything obvious in my configs, or is this normal?

What does your ipvsadm table look like when it's busy?
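Something like this, captured on the active director while traffic is flowing, would tell us a lot (standard ipvsadm listing options):

  ipvsadm -L -n            # services, real servers, weights, ActiveConn/InActConn
  ipvsadm -L -n --stats    # per-service and per-real-server packet/byte counters
  ipvsadm -L -n --rate     # current connection and packet rates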

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc