Hello,
On Tue, 1 Oct 2002, Roberto Nibali wrote:
> > 2.4, net/core/neighbour.c:neigh_alloc()
>
> Aha:
> 	unsigned long now = jiffies;
>
> 	if (tbl->entries > tbl->gc_thresh3 ||
> 	    (tbl->entries > tbl->gc_thresh2 &&
> 	     now - tbl->last_flush > 5*HZ)) {
> 		if (neigh_forced_gc(tbl) == 0 &&
> 		    tbl->entries > tbl->gc_thresh3)
> 			return NULL;
> 	}
>
> Ok, this means we bail out, if:
> o the number of neighbour entries becomes bigger than gc_thresh3
> o the number of neighbour entries is bigger than gc_thresh2 and
> the last table flush was more than 5 seconds ago
> o garbage collection of unused table entries fails and the
> number of neighbour entries is still higher than gc_thresh3
Or more simply:
- do GC always if gc_thresh3 is reached or once per 5sec if
gc_thresh2 is reached
- if there is no progress after GC and gc_thresh3 is reached => fail
> Why do you need to check against gc_thresh3 if neigh_forced_gc(tbl) failed?
	maybe to allow allocation of an entry if GC released at least
one entry
> We collect the garbage, if:
> o nobody refers to it
> o it is not permanent
> o NEW and probably wrong (how do you detect this and how can you
> flood the neighbour table?)
	No, we "flood" the network when retransmitting the
probes (if ping checks 1000 hosts => 1000 ARP probes/sec);
for incomplete entries GC guarantees at least 1 probe
> > Yes, the thresholds for the neighbour table are
> > not tuned according to the RAM. Of course, tuning the hash
> > size (NEIGH_HASHMASK) and/or the hash formula could be a good
> > idea.
>
> Would this really be very simple? Could you do it? Or is there something
> that Alexey had in mind when choosing such low gc_thresh3 numbers?
	At least you can tune these thresholds. As for the hash
size, NEIGH_HASHMASK is 0x1F (32 rows), good for 128 entries, slow for
4096 entries.
> > The max_size for routing cache is not reached
>
> I don't think you could ever do that anyway.
	Why not? One LVS box with many clients can reach its
max_size; maybe you don't see it due to the elasticity and GC.
> Cheers,
> Roberto Nibali, ratz
Regards
--
Julian Anastasov <ja@xxxxxx>