Re: Connection table hashing

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Connection table hashing
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Thu, 19 Jun 2003 03:12:44 +0200

Hi Julian,

> I'm not against this but it is problematic. If our
> goal is to test different hash funcs then we need the raw data,
> the distribution according to the default hash funcs is of no
> help because we can deduce it based on the CIP:CPORT (for each
> hash func we have). Or you have something else in mind? If you

You're right, but getting it from /proc/net/ip_vs_conn only gives us a snapshot. We would need continuous snapshots to see hash collisions and the distribution in relation to time and template entries.

> have better ideas please go ahead, I just wanted to make
> comparison without any kernel patching. You extract the data
> once and then compare all funcs with same data.

Sure. If I have a better idea I'll let you know, but I realize that we couldn't ship LVS with my suggestion as code ;).
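
For the sake of discussion, here is a minimal user-space sketch of the continuous-snapshot idea: poll /proc/net/ip_vs_conn periodically and log timestamped CIP:CPORT tuples, so the different hash functions can later be compared offline on the same raw data. The column layout assumed here is the usual "Pro FromIP FPrt ToIP TPrt DestIP DPrt State Expires" header; everything else is illustrative.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	char line[256], proto[8];
	unsigned int cip, cport;

	for (;;) {
		FILE *f = fopen("/proc/net/ip_vs_conn", "r");

		if (!f)
			return 1;
		fgets(line, sizeof(line), f);	/* skip the header line */
		while (fgets(line, sizeof(line), f)) {
			/* CIP is column 2 and CPORT column 3, both in hex */
			if (sscanf(line, "%7s %8x %4x", proto, &cip, &cport) == 3)
				printf("%ld %s %08X:%04X\n",
				       (long)time(NULL), proto, cip, cport);
		}
		fclose(f);
		fflush(stdout);
		sleep(1);	/* snapshot interval */
	}
}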

> mix was designed for 2-way but there are so many shifts.
> The modern CPUs multiply faster.

Yes. BTW, have you noticed that inlining only improves runtime for the LVS hash method when using gcc 3.x?
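
Just to illustrate the point for the archive (this is only a sketch of the two shapes of code, not the actual mix() you refer to): a mix-style step is a chain of short, dependent shift/subtract/xor operations, while the multiplicative approach does all the scrambling with a single multiply, which modern CPUs retire in a few cycles.

#include <stdint.h>

/* many dependent steps, in the spirit of a mix()-like function */
static inline uint32_t mix_style(uint32_t a, uint32_t b, uint32_t c)
{
	a -= b; a -= c; a ^= c >> 13;
	b -= c; b -= a; b ^= a << 8;
	c -= a; c -= b; c ^= b >> 13;
	return c;
}

/* one multiplication by 2654435761 does the scrambling instead */
static inline uint32_t mult_style(uint32_t a, uint32_t b, uint32_t c)
{
	return (a ^ b ^ c) * 2654435761u;
}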

> BTW, I suspect our implementation in DH and the old tests with
> 2654435761. May be it is more correct to get the highest bits (as
> in hashlvs-0.2)?

I think only if the hash size is small; for 18 bits it should not matter too much. What surprised me a bit is why we use the proto in the current LVS hash at all, since it can only be 0x0006 (TCP) or 0x0011 (UDP). That doesn't add anything to the distribution per se if you do

(proto ^ addrh ^ (addrh >> bits) ^ ntohs(port)) & mask

and might explain the high-bits approach, as it is the lowest bits that are populated most of the time.
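
To make the comparison concrete, here is a minimal sketch of the two variants (function names and the 18-bit table size are illustrative, not the actual LVS code; addr and port are taken in network byte order as in the kernel):

#include <stdint.h>
#include <arpa/inet.h>

#define TAB_BITS 18
#define TAB_MASK ((1u << TAB_BITS) - 1)

/* low-bits style, as in the expression above: proto differs only in a
 * few low bits, so XORing it in barely moves the bucket */
static inline unsigned hash_low(uint8_t proto, uint32_t addr, uint16_t port)
{
	uint32_t addrh = ntohl(addr);

	return (proto ^ addrh ^ (addrh >> TAB_BITS) ^ ntohs(port)) & TAB_MASK;
}

/* multiplicative style: one multiply by 2654435761, then keep the
 * highest TAB_BITS bits, as suggested above */
static inline unsigned hash_high(uint8_t proto, uint32_t addr, uint16_t port)
{
	uint32_t h = (proto ^ ntohl(addr) ^ ntohs(port)) * 2654435761u;

	return h >> (32 - TAB_BITS);
}

With the multiply, the proto and port differences get spread over all 32 bits before the top bits are taken, which is the whole point of the high-bits approach.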

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc
