To: Alexandre Cassen <alexandre.cassen@xxxxxxxxxx>
Subject: Re: Linux kernel small forwarding perf (was: RE: Small packets handling : LVS-DR vs LVS-NAT)
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <ja@xxxxxx>
Date: Wed, 7 May 2003 00:54:04 +0300 (EEST)
        Hello,

On Tue, 6 May 2003, Alexandre Cassen wrote:

> But anyway, the noisy Linux kernel routing cache hash table design
> completely slows down fast forwarding performance... Generate truly
> random src/dst traffic to a Linux box and you will see the nasty
> forwarding performance for yourself...
>
> The rt_cache hash table introduces massive hash collisions when
> processing large numbers of small packets... This is why kernels
> like the *BSDs use a PATRICIA-like design to get radix tree lookup
> speed...
>
> Hey Julian, if you read this thread, have you got any info on
> rt_cache enhancements?... I tried some IBM RCU code to replace the
> rt_cache read_lock with a spin_lock, but performance is still very
> bad... Any valuable info on rt_cache performance would be
> appreciated :)
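
        The collision problem is easy to demonstrate: a cache keyed
on (saddr, daddr) needs one entry per flow, so truly random src/dst
traffic fills every hash bucket with long chains, while a radix tree
lookup only costs a walk bounded by the prefix length. A toy
illustration in plain user-space C, not the kernel code (the
flow_hash() below is a made-up mixing function, not the real
rt_hash_code()):

/* Toy model: a per-flow cache hashed on (saddr, daddr) grows one
 * entry per flow, so random traffic makes every lookup walk a
 * long collision chain. */
#include <stdio.h>
#include <stdlib.h>

#define HASH_BITS 16
#define HASH_SIZE (1 << HASH_BITS)

static unsigned int flow_hash(unsigned int saddr, unsigned int daddr)
{
    unsigned int h = saddr ^ daddr;  /* made-up mixing, demo only */
    h ^= h >> 16;
    return h & (HASH_SIZE - 1);
}

int main(void)
{
    static unsigned long bucket[HASH_SIZE];
    unsigned long flows = 4UL * HASH_SIZE, max = 0, i;

    srand(1);
    for (i = 0; i < flows; i++) {
        unsigned long *b = &bucket[flow_hash(rand(), rand())];
        if (++(*b) > max)
            max = *b;
    }
    printf("avg chain %.1f, worst chain %lu\n",
           (double)flows / HASH_SIZE, max);
    return 0;
}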

        I see that 2.5.69 comes with new hash functions, and it
seems the routing code uses them for its cache. As for the cache
itself, many of the cached values really are per src/dst pair (the
MTU and other TCP parameters, for example), so it is impossible to
generalize these cache entries just to reduce their number. Maybe
some tricks can help, e.g. clone-on-update (create a new cache
entry that holds the updated values). But there is another problem:
some callers can hold a pointer to such a cache entry forever (TCP,
UDP, LVS), and that cache entry must contain the actual values for
its path.
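
        A rough sketch of the clone-on-update idea (user-space C
with made-up names, not the real struct rtable): flows share one
generic entry until one of them has to record private values, e.g.
a learned path MTU, and only then gets its own clone, so holders of
the old pointer keep seeing values that are still valid for them.

#include <stdlib.h>
#include <string.h>

struct rt_entry {
    unsigned int daddr;
    unsigned int mtu;   /* per-path metric that may be updated */
    int shared;         /* 1 = generic entry used by many flows */
};

/* Return the entry the caller should use after an MTU update. */
static struct rt_entry *rt_update_mtu(struct rt_entry *rt,
                                      unsigned int mtu)
{
    if (rt->shared) {
        struct rt_entry *clone = malloc(sizeof(*clone));

        if (!clone)
            return rt;          /* keep old values on failure */
        memcpy(clone, rt, sizeof(*clone));
        clone->shared = 0;      /* private from now on */
        clone->mtu = mtu;
        return clone;           /* caller replaces its reference */
    }
    rt->mtu = mtu;              /* already private, update in place */
    return rt;
}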

        It is a different problem to create an algorithm that can
reuse cached entries within the current ip rule+route
implementation. The current kernel implementation works for large
routing tables, but what happens if we have just one default
gateway? In theory we could use a single cache entry (matched with
netmasks) and create more entries only when some paths start to
update their metrics[]. Anyway, the routing rules are complex
enough that it is difficult to optimize the cache entries. IMO,
adding netmasks to struct rtable could be a first step; the problem
is how to determine their values, maybe from the matched ip rule
and route? The ip rules are complex enough that I see many
problems. Maybe one should start by implementing different rules
and routes suitable for smarter caching and for other purposes such
as load balancing, failover, etc. :)
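
        For the netmask idea, something along these lines (field
names are illustrative only, not the real struct rtable):

struct rt_cache_entry {
    unsigned int daddr;     /* network part of the destination */
    unsigned int dst_mask;  /* taken from the matched rule/route */
    unsigned int gateway;
};

static int rt_entry_match(const struct rt_cache_entry *rt,
                          unsigned int daddr)
{
    /* one masked entry can serve every host behind a default
     * gateway, instead of one cache entry per destination */
    return (daddr & rt->dst_mask) == rt->daddr;
}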

> The rt_cache design is the kernel's forwarding performance bottleneck :/

        True, but the issues involved are very complex.

> Best regards,
> Alexandre

Regards

--
Julian Anastasov <ja@xxxxxx>
