Yes. Consider the largest server we use now, which has 256 hyper-threaded CPUs across 4 NUMA nodes. Even that should not be a big problem. Yes, this would be perfect for me!
Hello, here stopping BH may not be so fatal if some CPUs are used for networking and others for workqueues: 268ms / 64 chunks => ~4ms average per chunk. As the estimation with a single work item does not utilize many CPUs …
Hi Yunhong & Julian, any updates? We've encountered the same problem: with lots of IPVS services plus many CPUs, it's easy to reproduce this issue. I have a simple script to reproduce it: first add many …
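The reporter's script itself is truncated above, but a minimal sketch of such a reproduction might look as follows. It only *prints* the `ipvsadm` commands that would create many virtual services (pipe the output to `sh` as root to actually apply them); the service count and the 10.255.0.1 address are made-up values for illustration.

```shell
#!/bin/sh
# Hypothetical reproduction sketch: emit ipvsadm commands that create
# many IPVS virtual services, so the estimator list grows large enough
# for estimation_timer() to take a noticeable amount of time.
NSVC=${1:-1000}   # number of virtual services to generate (assumption)
i=0
while [ "$i" -lt "$NSVC" ]; do
    # one TCP virtual service per loop; vary the port to keep each unique
    echo "ipvsadm -A -t 10.255.0.1:$((10000 + i)) -s rr"
    i=$((i + 1))
done
```

With thousands of services (each with its own rate estimator) on a many-CPU box, the per-CPU counter summing done by the estimator becomes the dominant cost.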
Hello, when I first saw the report, I thought about this issue, and here is what I think:
- a single delayed work (stopping them is slow if using many)
- the delayed work walks, say, 64 lists of estimators …
Hello, using a kernel thread is a good idea. For this to work, we can also remove est_lock and use RCU for est_list. The writers ip_vs_start_estimator() and ip_vs_stop_estimator() already run under …
Hi Simon & Julian, we noticed that on our Kubernetes nodes using IPVS, estimation_timer() takes a very long time (>200ms, as shown below). Such a long delay in the timer softirq causes long packet latency.