> > director = 2.4.19-rc1 + [base redhat .config] + ipvs patch + julian's
> > hidden patch + heartbeat + ldirectord. 2 x 700MHz P3 & 512MB of RAM,
> > 7200rpm SCSI disk, etc. (valinux 2230). << I am using 2.4.19-rc1
> > because I have eepro NICs >>.
>
> One of the most meaningful parameters: the PCI bus speed?
>
I believe one Ethernet Pro (the one I'm using) is onboard and the other is
in a standard 32-bit PCI slot.
cat /proc/pci
  Bus 0, device 11, function 0:
    Ethernet controller: Intel Corp. 82557/8/9 [Ethernet Pro 100] (rev 8).
      IRQ 18.
      Master Capable. Latency=64. Min Gnt=8.Max Lat=56.
      Non-prefetchable 32 bit memory at 0xf4200000 [0xf4200fff].
      I/O at 0x2800 [0x283f].
      Non-prefetchable 32 bit memory at 0xf4000000 [0xf40fffff].
  Bus 0, device 14, function 0:
    Ethernet controller: Intel Corp. 82557/8/9 [Ethernet Pro 100] (#2) (rev 8).
      IRQ 21.
      Master Capable. Latency=64. Min Gnt=8.Max Lat=56.
      Non-prefetchable 32 bit memory at 0xf4203000 [0xf4203fff].
      I/O at 0x2840 [0x287f].
      Non-prefetchable 32 bit memory at 0xf4100000 [0xf41fffff].
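(Side note on the bus-speed question: /proc/pci doesn't report the bus
speed, but if pciutils is installed, "lspci -vv" prints a per-device Status
line that includes a 66MHz capability flag, so something like

  lspci -vv | grep -E 'Ethernet|66MHz'

should at least show whether the NICs or the bridge claim 66MHz capability.
I haven't verified this on the 2230, so treat it as a guess.)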
> > real server = 2.2.x, apache, 2 proc, 512MB, requests set to reject
> > according to the README file (specifically, srcnet is changed to
> > 10.10.0.1 and then make). I issue the command "route add -net
> > 10.10.0.0 netmask 255.255.0.0 reject" to drop/reject the packets
> > from the LVS-director during the test.
>
> You can even use ipchains just to drop the flood. The
> route reject command wastes CPU in the RS.
>
OK, I was just following the first example I saw. I assume the cycles are
wasted because a reject route sends an ICMP error back for every packet
instead of silently dropping it ("no SYN for you!", or something like
that), right?
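For reference, if I switch the real servers to dropping instead of
rejecting, I assume the ipchains equivalent (on the 2.2.x real servers,
and assuming the testlvs source network is still 10.10.0.0/16) would be
something like

  ipchains -A input -s 10.10.0.0/16 -j DENY

i.e. DENY drops the flood silently, whereas REJECT (or the reject route)
generates a reply for every packet.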
> Maybe you have PCI 33MHz, ipchains rules, etc. The other
> problem could be the routing cache performance. You can reduce the
> number of clients to 10 just to see what happens.
I think I have 33MHz PCI. The director is running a 2.4.x kernel with
essentially no iptables rules.
Reducing the number of clients seemed to help: all the machines became
*MUCH* more responsive.
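If the routing cache turns out to be the problem, I guess the quick things
to look at on the 2.4 director would be the cache statistics and the gc
limits, something along the lines of

  head -2 /proc/net/stat/rt_cache        # entry/hit/miss counters (in hex)
  cat /proc/sys/net/ipv4/route/gc_thresh /proc/sys/net/ipv4/route/max_size

though I'm not sure yet how to interpret those numbers under this kind of
flood.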
<director> (vmstat)
 r  b  w   swpd   free   buff  cache  si  so  bi  bo     in    cs  us  sy  id
 0  0  0      0 199904 105580  50512   0   0   0   0    103     7   0   0 100
 0  0  0      0 199900 105584  50512   0   0   0   5    105     9   0   0 100
 0  0  2      0 199900 105584  50512   0   0   0   0  22946  3478   0  49  51
 0  0  1      0 199900 105584  50512   0   0   0   5  41835  6295   0  91   9
 0  0  1      0 199900 105584  50512   0   0   0   5  41988  6349   0  91   9
 0  0  1      0 199900 105584  50512   0   0   0   0  41875  6350   0  92   8
 0  0  1      0 199900 105584  50512   0   0   0   7  44765  4417   0  83  17
 0  0  1      0 199900 105584  50512   0   0   0   0  42143  6668   0  86  14
 0  0  1      0 199900 105584  50512   0   0   0   5  42010  6660   0  87  13
 0  0  2      0 199900 105584  50512   0   0   0   5  42102  6651   0  87  13
 0  0  1      0 199680 105584  50512   0   0   0   0  42000  6359   0  87  13
<real server>
[root@stage-fe2 testlvs-0.1]# ./show_traffic.sh
2 packets/sec
2 packets/sec
3 packets/sec
2 packets/sec
9 packets/sec
4945 packets/sec
18951 packets/sec
21056 packets/sec
21109 packets/sec
21057 packets/sec
21051 packets/sec
21045 packets/sec
21042 packets/sec
21037 packets/sec
Is the PCI bus the cause of the ~87% CPU utilization at ~21000 packets/sec?
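(Rough numbers, in case they help frame that question: 21000 SYNs/sec at
roughly 60 bytes each is only on the order of 1.2 MB/s, against a
theoretical ~133 MB/s for 32-bit/33MHz PCI, so I'd guess raw bus bandwidth
alone isn't the limit; the ~42000 interrupts/sec in the vmstat output above
look more like per-packet interrupt/processing overhead. Just my
back-of-envelope, though.)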
Thanks Julian & company.
Peter