Re: LVS performance bug

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: LVS performance bug
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Fri, 16 Mar 2007 09:14:11 +0100
Hi Mike,

Expect long delays in my answers ...

> Ran it again and here's what I see. We currently have 8 Gigs of memory
> installed. It doesn't appear from the "free" command that we ran

I'm interested in your hardware configuration (e.g. EM64T or true 64-bit), so could you please send along the following output:

dmesg -s 100000
dmidecode
lspci -v
cat /proc/meminfo
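
If it's easier, something along these lines should bundle it all up into one archive for upload (paths and file names are just a suggestion):

# collect the requested output into one archive (file names are only a suggestion)
mkdir -p /tmp/lvs-debug
dmesg -s 100000   > /tmp/lvs-debug/dmesg.txt
dmidecode         > /tmp/lvs-debug/dmidecode.txt
lspci -v          > /tmp/lvs-debug/lspci.txt
cat /proc/meminfo > /tmp/lvs-debug/meminfo.txt
tar czf /tmp/lvs-debug.tar.gz -C /tmp lvs-debug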

> completely out of memory. Here's what "free" and "slabinfo" say before
> the test. This is sitting idle.
>
> [root@jackets-a upgrade]# free -m
>              total       used       free     shared    buffers     cached
> Mem:          8118       1508       6609          0        274        974
> -/+ buffers/cache:        259       7858
> Swap:         2000          0       2000
> [root@jackets-a upgrade]#

Looks like the standard 4GB split. Could you please also send the .config for the running kernel (probably /proc/config.gz)? Would it be possible for you to upload those files somewhere so I can download and properly analyse them?
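
If CONFIG_IKCONFIG_PROC is enabled you can also just grep the memory layout options out of the running kernel's config; a quick sketch (the exact option names depend on the arch, so adjust as needed):

# show how the kernel was built with respect to 64-bit / highmem / PAE
zcat /proc/config.gz | egrep 'CONFIG_(64BIT|X86_64|NOHIGHMEM|HIGHMEM4G|HIGHMEM64G|X86_PAE)='
# or, if the config was installed alongside the kernel image:
egrep 'CONFIG_(64BIT|X86_64|NOHIGHMEM|HIGHMEM4G|HIGHMEM64G|X86_PAE)=' /boot/config-$(uname -r)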

> ip_vs_conn         39091  39105    256   15    1 : tunables  120   60    8 : slabdata   2607   2607      0
> ip_fib_alias          18    113     32  113    1 : tunables  120   60    8 : slabdata      1      1      0
> ip_fib_hash           18    113     32  113    1 : tunables  120   60    8 : slabdata      1      1      0

Hmm, thought 2.6.18 already had fib_trie ...
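
You can check which FIB algorithm the kernel was actually built with, e.g.:

# fib_trie vs. fib_hash is a compile-time choice
zcat /proc/config.gz | egrep 'CONFIG_IP_FIB_(HASH|TRIE)='
# when fib_trie is compiled in, the trie is also exported here:
ls /proc/net/fib_trie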

> arp_cache             11     15    256   15    1 : tunables  120   60    8 : slabdata      1      1      0
> RAW                    5      6    640    6    1 : tunables   54   27    8 : slabdata      1      1      0
> UDP                   35     36    640    6    1 : tunables   54   27    8 : slabdata      6      6      0
> tw_sock_TCP           81     90    128   30    1 : tunables  120   60    8 : slabdata      3      3      0
> request_sock_TCP       8     59     64   59    1 : tunables  120   60    8 : slabdata      1      1      0
> TCP                   55     60   1280    3    1 : tunables   24   12    8 : slabdata     20     20      0

Ok.

> Then we ran the test again and cranked up the traffic. It took a few
> minutes and then it happened again.
>
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.
> IPVS: ip_vs_conn_new: no memory available.

You obviously ran out of kernel memory.

> Here's what "free" and "slabinfo" say after the test.
>
> [root@jackets-a upgrade]# free
>              total       used       free     shared    buffers     cached
> Mem:       8313112    1371072    6942040          0      62200     334064
> -/+ buffers/cache:     974808    7338304
> Swap:      2048184          0    2048184

You've warmed up the cache, now let's cook dinner :).

> ip_vs_conn        2774925 2774925    256   15    1 : tunables  120   60    8 : slabdata 184995 184995      0

If I'm not mistaken this is the 3G/1G split. Looks like you're using EM64T with PAE (only 4GB DMA addressing, however that is not your problem here).
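
Just to put a number on it: 2774925 ip_vs_conn objects at 256 bytes each is roughly 680MB of slab memory:

# rough estimate of the memory pinned by the ip_vs_conn cache, in MB
echo $((2774925 * 256 / 1024 / 1024))    # -> 677

If this really is a 32-bit kernel with the 3G/1G split, all of that has to come out of the ~896MB lowmem zone, so the ip_vs_conn_new allocation failures above are exactly what you'd expect once the entries pile up like this.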

> So the only thing I see shooting up in memory used is that the
> buffers/cache usage seems to grow. But in the slabinfo the ip_vs_conn
> active objects grow fast. I watched them grow during the test from 39K
> objects to over 2 million objects. Maybe something isn't being reset or
> returned to the pool. We are running the OPS patch (one-packet
> scheduling) because we are using LVS for the UDP service DNS. I'm sure
> it treats connections differently than the regular hashed connections.
> If you need anything else let me know. I have a reproducer now that
> makes it happen regularly.

And that would be? testlvs?
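
Whatever the reproducer is, it would help to watch the connection table and the slab cache side by side while it runs; a simple sketch (LowFree only shows up in /proc/meminfo on 32-bit kernels):

# watch the ip_vs_conn slab, the IPVS connection entries and the remaining lowmem
watch -n1 'grep ip_vs_conn /proc/slabinfo; ipvsadm -lnc | wc -l; grep LowFree /proc/meminfo'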

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
