Hello,
On Thu, 30 Apr 2026, Simon Horman wrote:
> From: 'Simon Horman' <horms@xxxxxxxxxx>
>
> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
> Full review at: https://sashiko.dev
> ---
> ipvs: fix shift-out-of-bounds in ip_vs_rht_desired_size
>
> This commit addresses a shift-out-of-bounds issue when roundup_pow_of_two()
> is called with 0 in ip_vs_rht_desired_size. It updates the assignment to fall
> back to 1 if n is 0.
>
> > diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
> > index f5b7a2047291..ea450944465f 100644
> > --- a/net/netfilter/ipvs/ip_vs_core.c
> > +++ b/net/netfilter/ipvs/ip_vs_core.c
> > @@ -237,7 +237,7 @@ int ip_vs_rht_desired_size(struct netns_ipvs *ipvs, struct ip_vs_rht *t, int n,
> > {
> > if (!t)
> > return 1 << min_bits;
> > - n = roundup_pow_of_two(n);
> > + n = n ? roundup_pow_of_two(n) : 1;
To overflow it on 64-bit we would need 600GB+ of memory;
the conns are ~300 bytes each.
In v2 I changed the check to n > 0. To solve it completely
we may need to introduce some kind of "conn_max" sysctl,
per-netns for now. It can help to limit the conns, which are
created with GFP_ATOMIC, while for other structures we will
switch to GFP_KERNEL_ACCOUNT.
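For illustration only, a rough sketch of how such a check could look
(ipvs->sysctl_conn_max and ipvs->conn_count are hypothetical fields,
not existing ones):

/* Hypothetical sketch: refuse new GFP_ATOMIC conn allocations once a
 * per-netns budget is exhausted. A conn_max of 0 means "no limit".
 */
static bool ip_vs_conn_within_limit(struct netns_ipvs *ipvs)
{
	int conn_max = READ_ONCE(ipvs->sysctl_conn_max);

	return !conn_max || atomic_read(&ipvs->conn_count) < conn_max;
}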
> The parameter n is a signed int. If n happens to be negative, the n ? check
> still evaluates to true and the value is passed to roundup_pow_of_two().
>
> Because roundup_pow_of_two() converts its input to unsigned long, a negative
> value like -1 becomes ULONG_MAX. This results in a shift exponent of 64,
> triggering a shift out of bounds. Would it be safer to check for n <= 0
> instead?
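For illustration, a minimal userspace sketch of this path; the kernel's
non-constant roundup_pow_of_two(n) expands to 1UL << fls_long(n - 1)
(include/linux/log2.h), and fls_long() is mimicked here with
__builtin_clzl():

#include <stdio.h>

int main(void)
{
	int n = -1;
	unsigned long un = (unsigned long)n;	/* -1 becomes ULONG_MAX */
	/* fls_long(x) == 64 - __builtin_clzl(x) for nonzero x on 64-bit */
	int shift = 64 - __builtin_clzl(un - 1);

	printf("shift exponent = %d\n", shift);	/* prints 64: 1UL << 64 is UB */
	return 0;
}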
>
> Additionally, if n exceeds 1.07 billion (1 << 30) on systems with large
> amounts of RAM, roundup_pow_of_two(n) evaluates to 1UL << 31. Since n is
> a 32-bit signed int, assigning 1UL << 31 back to n overflows and results
> in a negative number.
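A quick userspace sketch of that truncation, assuming a typical 64-bit
system (the value of n here just illustrates the threshold):

#include <stdio.h>

int main(void)
{
	int n = (1 << 30) + 1;		/* just over 1.07 billion conns */
	unsigned long r = 1UL << 31;	/* what roundup_pow_of_two(n) yields */

	n = (int)r;	/* narrowing conversion: INT_MIN on common ABIs */
	printf("n = %d\n", n);		/* prints -2147483648 */
	return 0;
}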
>
> Since subsequent bounds checks use signed comparisons:
>
> > 	if (lfactor < 0) {
> > 		int factor = min(-lfactor, max_bits);
> >
> > 		n = min(n, 1 << (max_bits - factor));
>
> These checks will fail to constrain the size correctly.
>
> [ ... ]
>
> > 	/* Shrink but keep it n * 2 to prevent frequent resizing */
> > 	return clamp(n << 1, 1 << min_bits, 1 << max_bits);
> > }
>
> Eventually, the function returns clamp(n << 1, 1 << min_bits, 1 << max_bits).
> Shifting a negative number causes undefined behavior, and the clamp operation
> will force the hash table to its absolute minimum size.
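A userspace sketch of that end state, with clamp() defined locally to
mirror the kernel macro and illustrative min_bits/max_bits values:

#include <stdio.h>

#define clamp(val, lo, hi) \
	((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

int main(void)
{
	int min_bits = 4, max_bits = 20;	/* illustrative values */
	int n = (int)(1UL << 31);		/* the overflowed, negative n */

	/* The earlier min(n, 1 << (max_bits - factor)) keeps the negative
	 * n, and n << 1 is UB (0 in practice with gcc), so clamp() pins
	 * the result to the minimum table size. */
	printf("%d\n", clamp(n << 1, 1 << min_bits, 1 << max_bits));
	return 0;
}

This prints 16, i.e. 1 << min_bits, the smallest allowed table.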
>
> Could this cause millions of connections to be placed into a minimally sized
> hash table, causing severe collisions during RCU hash lookups?
All of that is valid in the absence of conn/mem limits.
Regards
--
Julian Anastasov <ja@xxxxxx>