
Results:

Total 5 documents matching your query.

1. Re: [RFC PATCHv6 4/7] ipvs: use kthreads for stats estimation (score: 1)
Author: Jiri Wiesner <jwiesner@xxxxxxx>
Date: Mon, 21 Nov 2022 17:05:27 +0100
Tested-by: Jiri Wiesner <jwiesner@xxxxxxx> Reviewed-by: Jiri Wiesner <jwiesner@xxxxxxx> Not an rlimit anymore. To avoid the magic number - 4, a symbolic constant could be used. The 4 is related to th…
/html/lvs-devel/2022-11/msg00049.html (13,450 bytes)

2. Re: [RFC PATCHv6 4/7] ipvs: use kthreads for stats estimation (score: 1)
Author: Jiri Wiesner <jwiesner@xxxxxxx>
Date: Fri, 11 Nov 2022 18:21:36 +0100
Yes, a cache factor of 4 happens to be a good compromise on this particular Zen 1 machine. I am not sure if I copied the right messages from the log. Probably not. Absolutely. -- Jiri Wiesner SUSE La…
/html/lvs-devel/2022-11/msg00037.html (13,115 bytes)

3. Re: [RFC PATCHv6 4/7] ipvs: use kthreads for stats estimation (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Thu, 10 Nov 2022 22:16:24 +0200 (EET)
Hello, So, 12 tests and 3 20ms gaps eliminate any cpufreq issues in most of the cases and we do not see small chain_max value. Looks like cache_factor of 4 is good both to ondemand which prefers cach…
/html/lvs-devel/2022-11/msg00036.html (17,436 bytes)

4. Re: [RFC PATCHv6 4/7] ipvs: use kthreads for stats estimation (score: 1)
Author: Jiri Wiesner <jwiesner@xxxxxxx>
Date: Thu, 10 Nov 2022 16:39:15 +0100
I was testing the stability of chain_max: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz 64 CPUs, 2 NUMA nodes Chain_max was often 30-something. Chain_max was never below 30, which was observed earlier whe…
/html/lvs-devel/2022-11/msg00035.html (18,185 bytes)

5. [RFC PATCHv6 4/7] ipvs: use kthreads for stats estimation (score: 1)
Author: Julian Anastasov <ja@xxxxxx>
Date: Mon, 31 Oct 2022 16:56:44 +0200
Estimating all entries in single list in timer context causes large latency with multiple rules. Spread the estimator structures in multiple chains and use kthread(s) for the estimation. Every chain…
/html/lvs-devel/2022-10/msg00055.html (61,002 bytes)


This search system is powered by Namazu