Hello. The problem is that we cannot control what our customers do; many of them run Kubernetes with old versions of kube-proxy. And most importantly, upgrading to a new version is a very long and painful process.
Hello, My concern is with the behaviour people expect from each sysctl var: conn_reuse_mode decides whether port reuse is considered for rescheduling, while expire_nodest_conn should take priority only for unavailable (removed) destinations.
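For readers following along, both knobs live under net.ipv4.vs (a quick sketch; values shown are just examples, not recommendations):

```shell
# Inspect the two sysctls discussed in this thread
sysctl net.ipv4.vs.conn_reuse_mode    # 1 = detect port reuse and reschedule new conns
sysctl net.ipv4.vs.expire_nodest_conn # 1 = expire conns whose destination was removed
```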
Julian, what we want is: if the RS weight is 0, then no new connections should be served, even when conn_reuse_mode is 0, just as commit dc7b3eb900aa ("ipvs: Fix reuse connection if real server is dead") was trying to do.
Thanks Julian. Yes, I know the one-second delay issue has been fixed by commit f0a5e4d7a594e0fe237d3dfafb069bb82f80f42f if we set conn_reuse_mode to 1, BUT it is still NOT what we expect with the sysctl setting conn_reuse_mode=0.
Hello, Yes, this is expected when conn_reuse_mode=0. What happens if you try conn_reuse_mode=1? The one-second delay in previous kernels should be corrected by commit "ipvs: allow connection reuse for unconfirmed conntrack".
Thanks Julian. What happens in this situation is that if we set the weight of the real server to 0 and do NOT remove the weight-zero real server, then with the sysctl settings (conn_reuse_mode == 0 && expire_nodest_conn == 1), new connections from reused client ports are still forwarded to the weight-zero real server.
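A minimal reproduction of the setup being described might look like the following ipvsadm sketch (the virtual and real server addresses are made up for illustration):

```shell
# Sysctl combination from the report
sysctl -w net.ipv4.vs.conn_reuse_mode=0
sysctl -w net.ipv4.vs.expire_nodest_conn=1

# Example service with one real server (addresses are placeholders)
ipvsadm -A -t 10.0.0.1:80 -s rr
ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.2:80 -m -w 1

# Drain the real server by setting weight to 0 instead of deleting it;
# with conn_reuse_mode=0, reused client ports still reach 10.0.0.2
ipvsadm -e -t 10.0.0.1:80 -r 10.0.0.2:80 -m -w 0
```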
Hello, Your change does not look correct to me. At the time expire_nodest_conn was created, it was not checked when weight is 0. Different terms are used in different places, but in short we have two distinct states for a real server: available vs. unavailable (removed from the service), and weight != 0 vs. weight == 0 (quiescent, meant to get no new connections).
Since commit dc7b3eb900aa ("ipvs: Fix reuse connection if real server is dead"), new connections to dead servers are redistributed immediately to new servers. Then commit d752c3645717 ("ipvs: allow rescheduling of new connections when port reuse is detected") made this rescheduling optional, controlled by the conn_reuse_mode sysctl.