Sorry for my Swenglish :)
It wasn't necessarily your Swenglish that got me confused but rather the brevity
of your question :).
[OT: English was my 4th language, so I'd be off track even trying to
criticise your Swenglish.]
We are using Gb Ethernet and the backend servers deliver 2 Gb/s to the clients.
I have two DL360s with 2 GB of memory each as LVS machines.
Nice, I'd like to get some numbers if possible. Are you using iptables too? If
so, in what way? What is your LVS method? LVS-DR, I presume.
I have deployed it at a customer site which offers ftp, http and rsync services.
They calculated that they will need 2^21 entries in the hash if it is supported.
Let me see (assuming 4 s of session coherency, and that 1/8 of the traffic is
valid SYN requests matching the template):
ratz@zar:~ > echo "l(4*2*1024*1024*1024/8)" | bc -l
20.79441541679835928251
ratz@zar:~ >
So yes, this would roughly be 21 bits. But now I ask you to read the nice
explanation by Horms in this thread on why you do not need to increase the
bucket count of the hash table to be able to hold 2^21 entries. You can
perfectly well use 17 bits, which gives you the following linked-list depth
(assuming an even distribution over the buckets):
ratz@zar:~ > echo "2^21/2^17" | bc -l
16.00000000000000000000
ratz@zar:~ >
So a lousy 16 entries per bucket when using 17 bits. This is bloody _nothing_.
Let's take the worst case: you'd have maybe 32 entries, which is still _nothing_.
Your CPU doesn't even fully wake up to walk a list that short :).
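To make the chain-length argument concrete, here is a small shell sketch (plain
POSIX arithmetic, no bc needed) comparing the expected per-bucket depth for a
few table sizes, under the same even-distribution assumption as above:

```shell
#!/bin/sh
# Expected linked-list depth = entries / buckets, assuming the
# 2^21 entries spread evenly over the hash buckets.
entries=$((1 << 21))
for bits in 16 17 18; do
    buckets=$((1 << bits))
    echo "$bits-bit table: $((entries / buckets)) entries per bucket"
done
```

With 17 bits you get the 16-deep chains discussed above, and even the 16-bit
worst case only doubles that to 32.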
The amount of RAM you need to hold 2^21 template entries for a session time of
4 seconds is roughly:
ratz@zar:~ > echo "4*(128*2^21)/1024/1024" | bc -l
1024.00000000000000000000
ratz@zar:~ >
1 GB. So you're on the safe side. However, if you plan on using persistence,
you'd run out of memory pretty soon.
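The memory figure can be cross-checked the same way. This is a sketch under one
reading of the arithmetic above: roughly 2^21 new templates per second, each
held for the 4-second coherency window, at an assumed 128 bytes per template
entry (the 128 is taken from the bc line above, not a measured value):

```shell
#!/bin/sh
# Peak concurrent templates = new templates/s * seconds held;
# memory = concurrent templates * bytes per template.
entry_bytes=128           # assumed size of one template entry
new_per_sec=$((1 << 21))  # ~2^21 fresh templates per second
held_secs=4               # session coherency window
mib=$(( held_secs * new_per_sec * entry_bytes / 1024 / 1024 ))
echo "${mib} MiB"
```

Doubling either the timeout or the entry size doubles the footprint, which is
why longer persistence timeouts eat memory so quickly.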
HTH and best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc