Hello,
> > I was using the eepro100 driver and successfully changed to the Intel
> > e100 driver. This driver offers a specific feature called "CPU Cycle
> > Saver": the adapter does not generate an interrupt for every frame it
> > receives. Instead, it waits until it has received N frames before
> > generating an interrupt.
> > As this LVS setup is mainly handling small packets, I tried
> > different values for N and noticed that it can push back the limits.
> > At least, it can now sustain 4000 inbound/4000 outbound packets/s.
>
> I have done some tests recently with the stock 2.4.20 eepro100 driver and
> Intel dual-port cards. I was able to get 33,000 interrupts/sec with or
> without iptables rules. I was going to try the "NAPI" type driver like you
> apparently have done but I didn't see the need to yet. I was using VALinux
> 2230's (dual 733 MHz) for this test. With large packets (L2 - 1450), load
> was ~0 and network saturation was reached. With smaller packets (L2 - 200),
> load was ~0.1 and occasionally burst to ~0.3...
>
> I didn't try the e100 driver, the main reason was because I have heard so
> many mixed reports; nothing really positive! Maybe you'd like to try NAPI
> for eepro100 - ftp://robur.slu.se/pub/Linux/net-development/NAPI/. (Note :
> NAPI doesn't seem to be updated for 2.4.20). You might also want to try
> Donald Becker's eepro100 (http://www.scyld.com).
Linux is quite bad at small-packet handling :/... Improving NIC performance
and throughput comes down to kernel design. NAPI offers a design that reduces
the interrupts per second... Using NAPI in conjunction with a technique
called socket kernel buffer (skb) recycling helps even more... that could be
a long discussion here... and there are other driver modifications that can
enhance performance too...
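For what it's worth, here is a minimal sketch of what the NAPI receive path
looks like with the NAPI patches applied (the old dev->poll interface; all
the mydev_*() names are hypothetical hardware helpers, not from any real
driver):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/interrupt.h>

/* Hypothetical hardware accessors, declared only to keep the sketch
 * self-contained. */
static void mydev_disable_rx_irq(struct net_device *dev);
static void mydev_enable_rx_irq(struct net_device *dev);
static struct sk_buff *mydev_rx_frame(struct net_device *dev);

/* IRQ handler: mask RX interrupts, then hand the device to the poll list. */
static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct net_device *dev = dev_id;

        if (netif_rx_schedule_prep(dev)) {
                mydev_disable_rx_irq(dev);
                __netif_rx_schedule(dev);
        }
}

/* Poll routine: pull up to *budget frames with RX interrupts masked. */
static int mydev_poll(struct net_device *dev, int *budget)
{
        int quota = *budget < dev->quota ? *budget : dev->quota;
        int work = 0;
        struct sk_buff *skb;

        while (work < quota && (skb = mydev_rx_frame(dev)) != NULL) {
                netif_receive_skb(skb); /* deliver directly, no backlog queue */
                work++;
        }
        *budget -= work;
        dev->quota -= work;

        if (work < quota) {               /* RX ring drained */
                netif_rx_complete(dev);   /* leave the poll list */
                mydev_enable_rx_irq(dev); /* interrupts resume */
                return 0;
        }
        return 1;                         /* still busy: stay in polled mode */
}

The probe routine would set dev->poll = mydev_poll and dev->weight; skb
recycling would then keep a small per-device pool of receive buffers so the
per-frame alloc/free pair disappears as well.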
But anyway, the Linux kernel's routing cache hash table design completely
drags down the fast-path forwarding performance... Generate traffic with
truly random src/dst addresses against a Linux box and you will see how
nasty the forwarding performance gets...
That's because the rt_cache hash table suffers massive hash collisions when
processing large numbers of small packets from many flows... This is why
kernels like the *BSDs use a PATRICIA-like design and get radix tree lookup
speed...
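A simplified illustration of the difference (plain userspace C, not the
actual rt_cache or *BSD code; a real PATRICIA tree also needs key
verification and backtracking for longest-prefix match):

#include <stdint.h>
#include <stddef.h>

struct rt_entry {                      /* one cached (src,dst) flow */
        uint32_t src, dst;
        struct rt_entry *next;         /* collision chain */
};

#define RT_HASH_BITS  10
#define RT_HASH_SIZE  (1 << RT_HASH_BITS)
static struct rt_entry *rt_hash_table[RT_HASH_SIZE];

/* Chained-hash lookup: cost = length of the collision chain. With N
 * random flows the average chain is N / RT_HASH_SIZE entries, so the
 * per-packet cost grows as the cache fills up. */
static struct rt_entry *cache_lookup(uint32_t src, uint32_t dst)
{
        unsigned h = (src ^ dst ^ (dst >> RT_HASH_BITS)) & (RT_HASH_SIZE - 1);
        struct rt_entry *e;

        for (e = rt_hash_table[h]; e; e = e->next)
                if (e->src == src && e->dst == dst)
                        return e;
        return NULL;            /* miss: full route resolution + insert */
}

/* Radix/PATRICIA-style lookup on the destination: the cost is bounded
 * by the key width (at most 32 bit tests for IPv4), independent of how
 * many flows are in flight. */
struct radix_node {
        int bit;                       /* bit index to test, -1 = leaf */
        struct radix_node *child[2];
        void *route;                   /* leaf payload */
};

static void *radix_lookup(struct radix_node *root, uint32_t dst)
{
        struct radix_node *n = root;

        while (n && n->bit >= 0)       /* at most 32 iterations */
                n = n->child[(dst >> n->bit) & 1];
        return n ? n->route : NULL;
}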
Hey Julian, if you read this thread, have you got any info on rt_cache
enhancements?... I tried some IBM RCU code that removes the rt_cache
read_lock on the lookup path, but performance is still very bad... Any
valuable info on rt_cache performance would be appreciated :)
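For reference, the read side before/after such an RCU conversion looks
roughly like this (structures and names simplified, not the actual IBM
patch); it also shows why removing the lock alone may not help, since the
chain walk stays:

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>   /* from the RCU patches, not stock 2.4 */

/* Simplified: the real 2.4 struct rtable carries a dst_entry,
 * refcount, key, etc. */
struct rt_entry {
        struct rt_entry *next;
        u32              src, dst;
};

struct rt_bucket {
        struct rt_entry *chain;
        rwlock_t         lock;
};
extern struct rt_bucket rt_hash_table[];

/* Stock 2.4 read side: the rwlock itself is cheap, but its cache line
 * bounces between CPUs on every single packet. */
static struct rt_entry *lookup_rwlock(unsigned int hash, u32 src, u32 dst)
{
        struct rt_entry *e;

        read_lock(&rt_hash_table[hash].lock);
        for (e = rt_hash_table[hash].chain; e; e = e->next)
                if (e->src == src && e->dst == dst)
                        break;
        /* real code takes a reference on e before unlocking */
        read_unlock(&rt_hash_table[hash].lock);
        return e;
}

/* RCU read side: no shared lock word is touched at all. What does NOT
 * change is the chain walk itself, roughly one cache miss per entry,
 * so lockless readers alone cannot fix long collision chains. */
static struct rt_entry *lookup_rcu(unsigned int hash, u32 src, u32 dst)
{
        struct rt_entry *e;

        rcu_read_lock();
        for (e = rt_hash_table[hash].chain; e; e = e->next)
                if (e->src == src && e->dst == dst)
                        break;
        rcu_read_unlock();
        return e;
}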
The rt_cache design is the kernel's forwarding performance bottleneck :/
Best regards,
Alexandre