Hi Wensong,
I see that the rate estimator is currently not used anywhere. From what
I understand, it is meant to provide per-second rate information on
packets, bytes and connections.
I wonder why it is done in such a complicated way? I know most of it
comes from net/sched/estimator.c, but why not use the direct way of
computing the average?
Something like the following should work:
avg = ((time - timeframe) * avg + new) / time
I've written a two-minute POC shell script that shows what I mean:
------------------------------------------------------------------
#!/bin/bash
# Extremely simple programme to show calculation of a running average.
declare -i num=0
declare -i cnt=0
declare -i timeframe=2
declare avg=0

while read -p "Enter number: " num rest; do
    # Enter "q" to quit (checking num, not rest, so a lone "q" works).
    if [ "${num}" == "q" ]; then
        exit
    fi
    cnt=$((cnt + timeframe))
    avg=$(echo "((${cnt} - ${timeframe}) * ${avg} + ${num}) / ${cnt}" | bc -l)
    printf "Average [cnt=%d]: %0.4f\n" "${cnt}" "${avg}"
done
------------------------------------------------------------------
Why are the numbers scaled by 2^5?
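My guess is that the scaling is a fixed-point trick: the kernel cannot
use floating point, so storing the average pre-multiplied by 2^5 keeps
five fractional bits across integer divisions. Here is a small sketch
of that idea (the sample values, the 1/4 smoothing weight and the crude
decimal formatting are all my own invention, not the kernel code):
------------------------------------------------------------------
```shell
#!/bin/bash
# Keep a smoothed average in pure integer arithmetic by storing it
# scaled by 2^5 (i.e. avg * 32), so fractions survive the division.
scaled_avg=0
for new in 3 4 4 5 3; do
    # EWMA-style update with weight 1/4: avg += (new - avg) / 4,
    # done entirely on the scaled value.
    scaled_avg=$(( scaled_avg + ( (new << 5) - scaled_avg ) / 4 ))
    # Crude print with the scale removed again (integer.fraction).
    echo "avg ~ $(( scaled_avg >> 5 )).$(( (scaled_avg & 31) * 10000 / 32 ))"
done
```
------------------------------------------------------------------
Without the scaling, the very first update (0 + (3 - 0) / 4) would
already truncate to zero; with it, the fractional part is retained.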
Would it make sense to use the ip_vs_estimator to write a scheduler
which chooses the RS with either the least bps or the least pps?
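To make the scheduler idea concrete, here is a toy user-space sketch of
the selection step: the server names and bps figures are invented, and
in ip_vs the rates would of course come from the estimator rather than
a hard-coded list.
------------------------------------------------------------------
```shell
#!/bin/bash
# Toy "least bps" pick over a made-up list of real servers.
servers="rs1:12000 rs2:8000 rs3:15000"

best=""
best_bps=0
for entry in ${servers}; do
    rs=${entry%%:*}
    rate=${entry##*:}
    # Remember this RS if it is the first one or has fewer bps.
    if [ -z "${best}" ] || [ "${rate}" -lt "${best_bps}" ]; then
        best=${rs}
        best_bps=${rate}
    fi
done
echo "would schedule to ${best} (${best_bps} bps)"
```
------------------------------------------------------------------
The same loop with pps instead of bps would give the other variant.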
I take it that the slow timer stuff is eliminated by now? :)
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc