Hello Jeremy,
Don't think so. I'm running dmesg > dmesg.1-N; so far the diffs are showing them
all to be the same.
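(For the record, the snapshot/diff cycle you describe boils down to something
like this; the file names are purely an example:)

dmesg > dmesg.1
# ... some time later ...
dmesg > dmesg.2
diff dmesg.1 dmesg.2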
And all this without your cronjob doing the eth2 down/up, right? Or let
me ask in a different way: do you think your problem is solved for now,
until it shows up again?
Can you add some more, please?
ip -o -s route show cache | wc -l
root@director:~# ip -o -s route show cache | wc -l
19343
Ok, let's cut down the length of the output a bit. I only need the following
information in future (apologies to Joe, I forgot about your connection
for a second):
ip -o -s route show cache | wc -l
grep cache /proc/slabinfo
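If it is easier for you, you can grab both in one go together with a
timestamp, e.g. (just a suggestion, any equivalent will do):

date; ip -o -s route show cache | wc -l; grep cache /proc/slabinfo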
I'm using a heartbeat script to bring up all the IPs (it sends out ARPs as
it brings them up, so the router knows which director to route to).
That's what I asked you in my first email: whether you were setting the load
balancer up with some tool or doing it by hand :).
/etc/ha.d/resource.d/IPaddr 216.163.120.45 start
/etc/ha.d/resource.d/IPaddr 216.163.120.46 start
I don't know heartbeat, sorry.
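For what it's worth, what such an IPaddr call boils down to by hand is
something like the following. I'm assuming the service addresses sit on eth0
(your default route suggests so) and that you have iputils' arping around:

ip addr add 216.163.120.45/32 dev eth0
arping -U -c 3 -I eth0 216.163.120.45   # gratuitous ARP so the router learns where the IP lives now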
I tried adding a /32 to the IP address on my development box, but it didn't
like it. I'll take a look at the script to figure out how to do it
correctly.
Yes, this would be a good idea.
Okay, I ran it and added it to my rc.local.
Very good. But maybe if you set the correct netmask, the problem will be
solved as well.
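In other words, if the address was added with a /24, something along these
lines should turn it into a pure host address (device and prefix are an
assumption based on your setup, adjust as needed):

ip addr del 216.163.120.45/24 dev eth0
ip addr add 216.163.120.45/32 dev eth0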
default via 216.163.120.1 dev eth0 metric 1
Yes, now that I come to think about it, this seems to be the real problem. You
have defined your addresses as /24, so the default gateway (DGW) is within the
scope of the IP addresses on eth0, and this will 'confuse' the kernel.
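A quick way to see what the kernel makes of it is (plain iproute2, nothing
special):

ip -o addr show dev eth0
ip route get 216.163.120.1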
ip -o -s route show cache | wc -l
11206
Ok, so roughly 8000 entries expired. How much time passed between the first
and the second call of this command, and did you run 'ip route flush
cache' in between?
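To make the next numbers comparable, you could do something like this (the
sysctl paths are the standard /proc/sys/net/ipv4/route entries):

ip route flush cache
sleep 60   # wait a known interval
ip -o -s route show cache | wc -l
grep . /proc/sys/net/ipv4/route/gc_* /proc/sys/net/ipv4/route/max_size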
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc