> Sorry, but I simply don't understand this. iptables is a user space
> command which cannot be started or stopped. It's a command line tool and
> has little to do with your problem. Is the connection tracking still
> running in the kernel? What does your lsmod show?
Sure it does; it gets unloaded via the 'service' command:
[root@loadb1 ha.d]# service iptables stop
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
lsmod doesn't show it running.
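For the record, a broader check that no netfilter or conntrack modules remain loaded (a sketch; the module-name patterns are my guess at common 2.6-era names, since they vary by kernel build):

```shell
# Look for any leftover netfilter / connection-tracking modules.
# The pattern list is an assumption covering typical 2.6-era names.
if lsmod | grep -E 'ip_tables|iptable_|ipt_|ip_conntrack'; then
    echo "netfilter modules still loaded"
else
    echo "no netfilter modules found"
fi
```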
> What kind of page do you fetch with this? Static or dynamic?
Simple static page.
> What's its size?
Under 5k for the testing. The page is much bigger for the real content now,
but still static; see below.
> BTW, with a 2.6 kernel, test clients spawning 1000 threads sometimes
> stall due to the local_port_range and gc cleanups. What are your
> local port range settings on your client? Also please show the ulimit -a
> output right before you start your test runs.
[root@zeus ~]# ab -n 300000 -c 1000 http://67.72.106.71/
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation,
http://www.apache.org/
Benchmarking 67.72.106.71 (be patient)
[root@zeus ~]# ab -n 100000 -c 1000 http://67.72.106.71/
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation,
http://www.apache.org/
Benchmarking 67.72.106.71 (be patient)
[root@zeus ~]# ab -n 10000 -c 1000 http://67.72.106.71/
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation,
http://www.apache.org/
Benchmarking 67.72.106.71 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Finished 10000 requests
Server Software: lighttpd
Server Hostname: 67.72.106.71
Server Port: 80
Document Path: /
Document Length: 7327 bytes
Concurrency Level: 1000
Time taken for tests: 10.679202 seconds
Complete requests: 10000
Failed requests: 5694
(Connect: 0, Length: 5694, Exceptions: 0)
Write errors: 0
Total transferred: 122363820 bytes
HTML transferred: 119753282 bytes
Requests per second: 936.40 [#/sec] (mean)
Time per request: 1067.920 [ms] (mean)
Time per request: 1.068 [ms] (mean, across all concurrent requests)
Transfer rate: 11189.51 [Kbytes/sec] received
Connection Times (ms)
              min  mean [+/-sd] median    max
Connect:        8   168   666.8     20   9020
Processing:    21   544  1336.0    102  10609
Waiting:        8   279  1072.8     21   9032
Total:         32   713  1476.9    127  10647
Percentage of the requests served within a certain time (ms)
50% 127
66% 341
75% 379
80% 760
90% 3069
95% 3308
98% 4716
99% 9149
100% 10647 (longest request)
[root@zeus ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 1024
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16383
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
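One thing stands out in that output: 'open files' is 1024, while ab runs with -c 1000, so each test sits within a few dozen descriptors of the limit. A quick pre-flight check (the 64-descriptor margin for ab's own files is an assumed number, not a measured one):

```shell
# Warn when the fd limit leaves little headroom over ab's concurrency.
conc=1000                 # the ab -c value used in the tests above
margin=64                 # assumed headroom for ab's own descriptors
nofile=$(ulimit -n)
if [ "$nofile" -lt $((conc + margin)) ]; then
    echo "open-files limit ($nofile) is tight for -c $conc; consider: ulimit -n 4096"
fi
```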
[root@zeus ~]# sysctl -a | grep local_
net.ipv4.ip_local_port_range = 32768 61000
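That default range gives only 61000 - 32768 = 28232 ephemeral ports, and at the ~936 req/s measured above, sockets parked in TIME_WAIT can pin more ports than the client has. A back-of-the-envelope check (the 60 s TIME_WAIT figure is the usual 2.6 default, assumed here; without keep-alive every request burns a port):

```shell
# Ephemeral ports available under the default range shown above.
ports=$((61000 - 32768))          # 28232
# Ports pinned in TIME_WAIT at the measured request rate.
rate=936                          # req/s from the ab summary above
tw=60                             # assumed TIME_WAIT duration in seconds
pinned=$((rate * tw))             # 56160
echo "available=$ports pinned=$pinned"
if [ "$pinned" -gt "$ports" ]; then
    echo "client can exhaust its ephemeral ports mid-run"
fi
```

If that is what's happening, widening the range on the client, e.g. sysctl -w net.ipv4.ip_local_port_range="1024 65535", would be the first thing to try.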
> ??? In both traces you have the LB enabled? Or did you mean netfilter?
Iptables was disabled in the second case.
> I see now :). What are ab's conclusions when you run those tests? How
> many dropped connections, how many packets ... and so on.
With iptables enabled, the IP address stops responding on the test client
server (zeus).
> Could you send along the ethtool $intf and ethtool -k $intf output?
[root@loadb1 ha.d]# ethtool eth0
Settings for eth0:
Supported ports: [ MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
Link detected: yes
[root@loadb1 ha.d]# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: off
> Please show cat /proc/interrupts and /proc/slabinfo
[root@loadb1 ha.d]# cat /proc/interrupts
CPU0 CPU1
0: 26109146 26134774 IO-APIC-edge timer
4: 822532 821228 IO-APIC-edge serial
8: 0 1 IO-APIC-edge rtc
9: 0 0 IO-APIC-level acpi
10: 0 2 IO-APIC-level ehci_hcd, ohci_hcd, ohci_hcd
11: 0 0 IO-APIC-level libata
14: 234713 234473 IO-APIC-edge ide0
177: 23675 32438 IO-APIC-level 3ware Storage Controller
185: 0 1001131 IO-APIC-level eth0
193: 309782 257 IO-APIC-level eth1
NMI: 0 0
LOC: 52246993 52246992
ERR: 0
MIS: 0
[root@loadb1 ha.d]# cat /proc/slabinfo
slabinfo - version: 2.0
# name <active_objs> <num_objs> <objsize> <objperslab>
<pagesperslab> : tunables <batchcount> <limit> <sharedfactor> : slabdata
<active_slabs> <num_slabs> <sharedavail>
ip_vs_conn 2 20 192 20 1 : tunables 120 60 8 :
slabdata 1 1 0
fib6_nodes 7 119 32 119 1 : tunables 120 60 8 :
slabdata 1 1 0
ip6_dst_cache 7 15 256 15 1 : tunables 120 60 8 :
slabdata 1 1 0
ndisc_cache 1 20 192 20 1 : tunables 120 60 8 :
slabdata 1 1 0
rawv6_sock 4 11 704 11 2 : tunables 54 27 8 :
slabdata 1 1 0
udpv6_sock 1 11 704 11 2 : tunables 54 27 8 :
slabdata 1 1 0
tcpv6_sock 2 3 1216 3 1 : tunables 24 12 8 :
slabdata 1 1 0
ip_fib_alias 16 226 16 226 1 : tunables 120 60 8 :
slabdata 1 1 0
ip_fib_hash 16 119 32 119 1 : tunables 120 60 8 :
slabdata 1 1 0
dm_tio 0 0 16 226 1 : tunables 120 60 8 :
slabdata 0 0 0
dm_io 0 0 20 185 1 : tunables 120 60 8 :
slabdata 0 0 0
ext3_inode_cache 80311 80325 552 7 1 : tunables 54 27 8 :
slabdata 11475 11475 0
ext3_xattr 0 0 48 81 1 : tunables 120 60 8 :
slabdata 0 0 0
journal_handle 19 135 28 135 1 : tunables 120 60 8 :
slabdata 1 1 0
journal_head 38 243 48 81 1 : tunables 120 60 8 :
slabdata 3 3 0
revoke_table 8 290 12 290 1 : tunables 120 60 8 :
slabdata 1 1 0
revoke_record 0 0 16 226 1 : tunables 120 60 8 :
slabdata 0 0 0
scsi_cmd_cache 7 20 384 10 1 : tunables 54 27 8 :
slabdata 2 2 0
sgpool-128 32 33 2560 3 2 : tunables 24 12 8 :
slabdata 11 11 0
sgpool-64 32 33 1280 3 1 : tunables 24 12 8 :
slabdata 11 11 0
sgpool-32 32 36 640 6 1 : tunables 54 27 8 :
slabdata 6 6 0
sgpool-16 33 36 320 12 1 : tunables 54 27 8 :
slabdata 3 3 0
sgpool-8 50 60 192 20 1 : tunables 120 60 8 :
slabdata 3 3 0
unix_sock 52 70 512 7 1 : tunables 54 27 8 :
slabdata 10 10 0
ip_mrt_cache 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
tcp_tw_bucket 69 93 128 31 1 : tunables 120 60 8 :
slabdata 3 3 0
tcp_bind_bucket 68 226 16 226 1 : tunables 120 60 8 :
slabdata 1 1 0
tcp_open_request 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
inet_peer_cache 5 61 64 61 1 : tunables 120 60 8 :
slabdata 1 1 0
secpath_cache 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
xfrm_dst_cache 0 0 256 15 1 : tunables 120 60 8 :
slabdata 0 0 0
ip_dst_cache 34 60 256 15 1 : tunables 120 60 8 :
slabdata 4 4 0
arp_cache 10 40 192 20 1 : tunables 120 60 8 :
slabdata 2 2 0
raw_sock 5 7 576 7 1 : tunables 54 27 8 :
slabdata 1 1 0
udp_sock 11 21 576 7 1 : tunables 54 27 8 :
slabdata 3 3 0
tcp_sock 9 14 1152 7 2 : tunables 24 12 8 :
slabdata 2 2 0
flow_cache 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
mqueue_inode_cache 1 7 576 7 1 : tunables 54 27 8
: slabdata 1 1 0
relayfs_inode_cache 0 0 348 11 1 : tunables 54 27 8
: slabdata 0 0 0
isofs_inode_cache 0 0 372 10 1 : tunables 54 27 8 :
slabdata 0 0 0
hugetlbfs_inode_cache 1 11 344 11 1 : tunables 54 27
8 : slabdata 1 1 0
ext2_inode_cache 0 0 488 8 1 : tunables 54 27 8 :
slabdata 0 0 0
ext2_xattr 0 0 48 81 1 : tunables 120 60 8 :
slabdata 0 0 0
dquot 0 0 144 27 1 : tunables 120 60 8 :
slabdata 0 0 0
eventpoll_pwq 0 0 36 107 1 : tunables 120 60 8 :
slabdata 0 0 0
eventpoll_epi 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
kioctx 0 0 192 20 1 : tunables 120 60 8 :
slabdata 0 0 0
kiocb 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
dnotify_cache 1 185 20 185 1 : tunables 120 60 8 :
slabdata 1 1 0
fasync_cache 0 0 16 226 1 : tunables 120 60 8 :
slabdata 0 0 0
shmem_inode_cache 294 297 444 9 1 : tunables 54 27 8 :
slabdata 33 33 0
posix_timers_cache 0 0 112 35 1 : tunables 120 60 8
: slabdata 0 0 0
uid_cache 6 61 64 61 1 : tunables 120 60 8 :
slabdata 1 1 0
cfq_pool 85 119 32 119 1 : tunables 120 60 8 :
slabdata 1 1 0
crq_pool 25 192 40 96 1 : tunables 120 60 8 :
slabdata 2 2 0
deadline_drq 0 0 52 75 1 : tunables 120 60 8 :
slabdata 0 0 0
as_arq 0 0 64 61 1 : tunables 120 60 8 :
slabdata 0 0 0
blkdev_ioc 24 185 20 185 1 : tunables 120 60 8 :
slabdata 1 1 0
blkdev_queue 20 24 488 8 1 : tunables 54 27 8 :
slabdata 3 3 0
blkdev_requests 19 50 160 25 1 : tunables 120 60 8 :
slabdata 2 2 0
biovec-(256) 256 256 3072 2 2 : tunables 24 12 8 :
slabdata 128 128 0
biovec-128 256 260 1536 5 2 : tunables 24 12 8 :
slabdata 52 52 0
biovec-64 256 260 768 5 1 : tunables 54 27 8 :
slabdata 52 52 0
biovec-16 256 260 192 20 1 : tunables 120 60 8 :
slabdata 13 13 0
biovec-4 256 305 64 61 1 : tunables 120 60 8 :
slabdata 5 5 0
biovec-1 301 452 16 226 1 : tunables 120 60 8 :
slabdata 2 2 0
bio 266 279 128 31 1 : tunables 120 60 8 :
slabdata 9 9 0
file_lock_cache 2 41 96 41 1 : tunables 120 60 8 :
slabdata 1 1 0
sock_inode_cache 126 126 448 9 1 : tunables 54 27 8 :
slabdata 14 14 0
skbuff_head_cache 514 640 192 20 1 : tunables 120 60 8 :
slabdata 32 32 1
sock 6 20 384 10 1 : tunables 54 27 8 :
slabdata 2 2 0
proc_inode_cache 346 374 360 11 1 : tunables 54 27 8 :
slabdata 34 34 0
sigqueue 67 135 148 27 1 : tunables 120 60 8 :
slabdata 5 5 0
radix_tree_node 8985 9002 276 14 1 : tunables 54 27 8 :
slabdata 643 643 0
bdev_cache 28 28 512 7 1 : tunables 54 27 8 :
slabdata 4 4 0
mnt_cache 29 62 128 31 1 : tunables 120 60 8 :
slabdata 2 2 0
audit_watch_cache 0 0 48 81 1 : tunables 120 60 8 :
slabdata 0 0 0
inode_cache 1442 1463 344 11 1 : tunables 54 27 8 :
slabdata 133 133 0
dentry_cache 87279 87282 152 26 1 : tunables 120 60 8 :
slabdata 3357 3357 0
filp 501 800 192 20 1 : tunables 120 60 8 :
slabdata 40 40 0
names_cache 20 20 4096 1 1 : tunables 24 12 8 :
slabdata 20 20 0
avc_node 235 750 52 75 1 : tunables 120 60 8 :
slabdata 10 10 0
key_jar 12 31 128 31 1 : tunables 120 60 8 :
slabdata 1 1 0
idr_layer_cache 83 87 136 29 1 : tunables 120 60 8 :
slabdata 3 3 0
buffer_head 21335 21375 52 75 1 : tunables 120 60 8 :
slabdata 285 285 0
mm_struct 50 88 704 11 2 : tunables 54 27 8 :
slabdata 8 8 0
vm_area_struct 1324 2565 88 45 1 : tunables 120 60 8 :
slabdata 57 57 0
fs_cache 50 183 64 61 1 : tunables 120 60 8 :
slabdata 3 3 0
files_cache 51 72 448 9 1 : tunables 54 27 8 :
slabdata 8 8 0
signal_cache 81 180 192 20 1 : tunables 120 60 8 :
slabdata 9 9 0
sighand_cache 72 72 1344 3 1 : tunables 24 12 8 :
slabdata 24 24 0
task_struct 77 85 1408 5 2 : tunables 24 12 8 :
slabdata 17 17 0
anon_vma 512 1356 16 226 1 : tunables 120 60 8 :
slabdata 6 6 0
pgd 50 238 32 119 1 : tunables 120 60 8 :
slabdata 2 2 0
pmd 108 108 4096 1 1 : tunables 24 12 8 :
slabdata 108 108 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 :
slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 :
slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 :
slabdata 0 0 0
size-65536 2 2 65536 1 16 : tunables 8 4 0 :
slabdata 2 2 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 :
slabdata 0 0 0
size-32768 5 5 32768 1 8 : tunables 8 4 0 :
slabdata 5 5 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 :
slabdata 0 0 0
size-16384 1 1 16384 1 4 : tunables 8 4 0 :
slabdata 1 1 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 :
slabdata 0 0 0
size-8192 9 9 8192 1 2 : tunables 8 4 0 :
slabdata 9 9 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 :
slabdata 0 0 0
size-4096 92 93 4096 1 1 : tunables 24 12 8 :
slabdata 92 93 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 :
slabdata 0 0 0
size-2048 493 498 2048 2 1 : tunables 24 12 8 :
slabdata 249 249 0
size-1620(DMA) 0 0 1664 4 2 : tunables 24 12 8 :
slabdata 0 0 0
size-1620 23 24 1664 4 2 : tunables 24 12 8 :
slabdata 6 6 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 :
slabdata 0 0 0
size-1024 171 184 1024 4 1 : tunables 54 27 8 :
slabdata 46 46 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 :
slabdata 0 0 0
size-512 301 1144 512 8 1 : tunables 54 27 8 :
slabdata 143 143 6
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 :
slabdata 0 0 0
size-256 252 870 256 15 1 : tunables 120 60 8 :
slabdata 58 58 0
size-128(DMA) 0 0 128 31 1 : tunables 120 60 8 :
slabdata 0 0 0
size-128 1234 2232 128 31 1 : tunables 120 60 8 :
slabdata 72 72 0
size-64(DMA) 0 0 64 61 1 : tunables 120 60 8 :
slabdata 0 0 0
size-64 86926 87047 64 61 1 : tunables 120 60 8 :
slabdata 1427 1427 0
size-32(DMA) 0 0 32 119 1 : tunables 120 60 8 :
slabdata 0 0 0
size-32 2753 4879 32 119 1 : tunables 120 60 8 :
slabdata 41 41 0
kmem_cache 150 150 256 15 1 : tunables 120 60 8 :
slabdata 10 10 0
> Care to show your lighttpd configuration?
Very basic... The site we are prepping for is mostly static too, with FastCGI
for PHP. I'll show the settings that matter for performance:
server.max-fds = 2048
server.max-keep-alive-requests = 32
server.max-keep-alive-idle = 5
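One caveat with that first setting: server.max-fds of 2048 only helps if the process is actually allowed that many descriptors. Some lighttpd versions raise the rlimit themselves when started as root; if yours doesn't, a sketch of the workaround (the init-script placement is an assumption about this setup):

```shell
# In lighttpd's init script (or via /etc/security/limits.conf),
# raise the descriptor limit before the daemon starts:
ulimit -n 4096 2>/dev/null || echo "could not raise limit (need root?)"
# The limit must be >= server.max-fds (2048 here).
```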
> > If it's something with the connection tracking overflow you'll see it in
> > your kernel logs.
>
> No message on the LB when this happens.
> Could you share the socket states on the RS during both runs? Also the
> ipvsadm -L -n -c output in the middle of the run?
With iptables enabled.
[root@loadb1 ha.d]# ipvsadm -L -n -c | wc
27724 166341 2162413
[root@loadb1 ha.d]# ipvsadm -L -n -c | grep "ESTABLISHED" | wc
27719 166314 2162082
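Since connection-tracking overflow was raised earlier, it may also be worth comparing the conntrack table's occupancy with its ceiling mid-run while iptables is loaded. With ~27,000 IPVS entries in flight, a default ceiling in the low tens of thousands would overflow quickly; that default scales with RAM, so check rather than assume. A sketch (the ip_conntrack paths are the 2.6-era names and won't exist on other kernels):

```shell
# Current vs. maximum conntrack entries (2.6-era ip_conntrack names).
max=$(sysctl -n net.ipv4.ip_conntrack_max 2>/dev/null || echo "n/a")
if [ -r /proc/net/ip_conntrack ]; then
    cur=$(wc -l < /proc/net/ip_conntrack)
else
    cur="n/a"
fi
echo "conntrack entries: $cur / max: $max"
```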
I'm not sure the firewall is the issue; it could be the client machine, since
I just ran ab with iptables disabled and it still gave me the error. Iptables
is enabled on the client test machine.
-L
--
Larry Ludwig
Empowering Media
1-866-792-0489 x600
Have you visited our customer service blog?
http://www.supportem.com/blog/