Re: RedHat ES3 LVS-Nat - Arp issues

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>, Roberto Nibali <ratz@xxxxxxxxxxxx>
Subject: Re: RedHat ES3 LVS-Nat - Arp issues
From: "Michael Sztachanski" <michael.sztachanski@xxxxxxxxxxxxxxxxxxxxxx>
Date: Tue, 28 Sep 2004 19:20:00 +1000
Thanks for your reply, Roberto.

>Hello,

>> I have read through most of the archives and have not been able
>> to find an answer to my problem; I must be going blind from lack
>> of sleep, so please be gentle if I've overlooked something.
>
>... so we are always ;).

>> This is an installation in our intranet.
>
>Do you use Hubs or Switches?

Cisco switches; I'm not sure of their config as they are handled by our network
services department.

>> There are two win2k IIS real servers behind the LVS.
>> 
>> ip: 172.24.24.21
>> nm: 255.255.255.0
>> gw: 172.24.24.1
>
>gw must be the LVS then.

Yes, the gw is the internal VIP.

>> Am getting copious amounts of ARPing and caching at eth0 on both LVS
>> routers. I'm expecting 4000 users to go through this LVS; will that much
>> ARP traffic on the eth0 side kill connections? I have already increased
>> the ARP cache size to 4096, but I'm still getting overflows.
>
>Which settings did you perform exactly?

I adjusted /proc/sys/net/ipv4/neigh/default/gc_thresh3 from 1024 to 4096.
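
For reference, the change amounts to something along these lines at runtime
(gc_thresh1 and gc_thresh2 are presumably still at their defaults of 128 and 512):

  echo 4096 > /proc/sys/net/ipv4/neigh/default/gc_thresh3

or, to survive a reboot, the sysctl equivalent in /etc/sysctl.conf:

  net.ipv4.neigh.default.gc_thresh3 = 4096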

>> Sep 23 13:11:45 wbnel01a kernel: NET: 30 messages suppressed.
>> Sep 23 13:11:45 wbnel01a kernel: Neighbour table overflow.
>> Sep 23 13:11:49 wbnel01a kernel: NET: 1 messages suppressed.
>> Sep 23 13:11:49 wbnel01a kernel: Neighbour table overflow.
>> Sep 23 13:11:52 wbnel01a kernel: NET: 19 messages suppressed.
>> Sep 23 13:11:52 wbnel01a kernel: Neighbour table overflow.
>> Sep 23 13:11:58 wbnel01a kernel: NET: 14 messages suppressed.
>> Sep 23 13:11:58 wbnel01a kernel: Neighbour table overflow.
>
>What are your gc_thresh* settings? How big is your neighbour table?

There are over 1900 entries in the neighbour table at the moment.
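
For the record, the count comes from something like:

  ip neigh show | wc -l

and the current thresholds can be read back with:

  cat /proc/sys/net/ipv4/neigh/default/gc_thresh*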

>> primary = 10.0.2.32
>> primary_private = 172.24.24.2
>> service = lvs
>> backup_active = 1
>> backup = 10.0.2.33
>> backup_private = 172.24.24.3
>> heartbeat = 1
>> heartbeat_port = 539
>> keepalive = 4
>> deadtime = 8
>> network = nat
>> nat_router = 172.24.24.1 eth1:1
>> nat_nmask = 255.255.255.0
>> debug_level = NONE
>> virtual gnetest {
>>      active = 1
>>      address = 10.0.1.99 eth0:1
>>      vip_nmask = 255.255.248.0
>
>why not 255.255.255.255?

Are you asking about the vip_nmask or the nat_nmask? The netmasks shown are
our internal masks; the values are as per the RH documentation, which makes no
mention of the value you suggest.

>>      port = 80
>>      persistent = 3600
>
>do you need such a high persistency?

The users talk to a web app on the IIS servers, which in turn talks to a
database that requires a minimum of one hour of persistence.
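
For what it's worth, the hand-rolled ipvsadm equivalent of that virtual
service would look roughly like this (piranha drives ipvsadm underneath,
so this is only illustrative):

  ipvsadm -A -t 10.0.1.99:80 -s rr -p 3600
  ipvsadm -a -t 10.0.1.99:80 -r 172.24.24.21:80 -m -w 1
  ipvsadm -a -t 10.0.1.99:80 -r 172.24.24.22:80 -m -w 1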

>> -----
>> eth0      Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86  
>>           inet addr:10.0.2.32  Bcast:10.0.7.255  Mask:255.255.248.0
>> eth0:1    Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86  
>>           inet addr:10.0.1.99  Bcast:10.0.7.255  Mask:255.255.248.0
>
>The VIP should have 255.255.255.255 as a mask.

The RH doco had 255.255.255.0. Sorry for my ignorance; what is the reason for
this?

>> eth0:88   Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86  
>>           inet addr:10.0.1.88  Bcast:10.0.7.255  Mask:255.255.248.0
>> eth0:89   Link encap:Ethernet  HWaddr 00:0D:60:9C:08:86  
>>           inet addr:10.0.1.89  Bcast:10.0.7.255  Mask:255.255.248.0
>
>What are eth0:88 and eth0:89 for?

These are external addresses 10.0.1.88 and .89 that I've NATed in the iptables
rules to 172.24.24.2 and .3, so the developers can RDP to each box
individually.
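
The rules themselves aren't shown here, so this is just a sketch of what they
amount to, assuming the standard RDP port 3389 and the addresses as quoted:

  iptables -t nat -A PREROUTING -d 10.0.1.88 -p tcp --dport 3389 \
           -j DNAT --to-destination 172.24.24.2:3389
  iptables -t nat -A PREROUTING -d 10.0.1.89 -p tcp --dport 3389 \
           -j DNAT --to-destination 172.24.24.3:3389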

>> eth1      Link encap:Ethernet  HWaddr 00:0D:60:9C:08:87  
>>           inet addr:172.24.24.2  Bcast:172.24.24.255  Mask:255.255.255.0
>> eth1:1    Link encap:Ethernet  HWaddr 00:0D:60:9C:08:87  
>>           inet addr:172.24.24.1  Bcast:172.24.24.255  Mask:255.255.255.0
>
>I presume this is the active one.

eth1:1 is the active internal VIP.

>> eth0      Link encap:Ethernet  HWaddr 00:0D:60:9C:EB:1A  
>>           inet addr:10.0.2.33  Bcast:10.0.7.255  Mask:255.255.248.0
>> eth1      Link encap:Ethernet  HWaddr 00:0D:60:9C:EB:1B  
>>           inet addr:172.24.24.3  Bcast:172.24.24.255  Mask:255.255.255.0
>> Have I missed anything obvious in my configs, or is this normal?
>
>What does your ipvsadm table look like when it's busy?

Taken at 16:25 from the RH Web GUI.
IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.1.99:80 rr persistent 360000 FFFFFFFF
-> 172.24.24.21:80 Masq 1 540 8
-> 172.24.24.22:80 Masq 1 639 1
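
(On the command line the same table would come from something like
"ipvsadm -L -n", with -n to skip name resolution.)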





----

Michael Sztachanski

Enterprise Web Services
CADRE TECHNOLOGY
(A Division of Flight Centre Ltd)
Level 1, 157 Ann Street, BRISBANE QLD 4000

Phone: +61 (0)7 3011 7151
Fax: +61 (0)7 3001 7788

Email: michael.sztachanski@xxxxxxxxxxxxxxxxxxxxxx
