>> ip_dst_cache 3074 6980 192 254 349 1 : 252 126
>> arp_cache 1044 1170 128 39 39 1 : 252 126
>Aha, might get full soon.
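For anyone decoding those lines: assuming the 2.4-era /proc/slabinfo layout
(name, active objects, total objects, object size, ...), a quick awk one-liner
shows how full each cache is:

```shell
# Print slab utilization (active/total objects) from 2.4-style
# /proc/slabinfo lines; the two sample lines are from the post above.
printf '%s\n' \
  'ip_dst_cache 3074 6980 192 254 349 1 : 252 126' \
  'arp_cache 1044 1170 128 39 39 1 : 252 126' |
awk '{ printf "%s: %d/%d objects (%.0f%%)\n", $1, $2, $3, 100*$2/$3 }'
```

So the arp_cache really is close to full (~89%), which fits the overflow
messages further down.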
The way I have things set up, I am running up to 8 services on each
realserver for each VIP. This means:
53 VIPs
8 services/VIP
6 realservers
53*8*6 = 2544 RIPs!
Maybe this wasn't such a good idea; could that be the cause of the arp_cache
issue?
>> inode_cache 114100 114100 512 16300 16300 1 : 124 62
>> dentry_cache 116370 116370 128 3879 3879 1 : 252 126
>Jeez, what the hell are you running on this box?
MON is checking each one of those 2544 RIPs for the service that is bound to
them. Again, maybe not such a good idea.
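For the record, the multiplication works out as:

```shell
# VIPs * services per VIP * realservers
echo $((53 * 8 * 6))   # → 2544
```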
>> 192.168.0.128 sent an invalid ICMP error to a broadcast.
>> 192.168.0.128 sent an invalid ICMP error to a broadcast.
>> Neighbour table overflow.
>> Neighbour table overflow.
>> Neighbour table overflow.
>
>Ok, try the following:
>
>echo "4096" > /proc/sys/net/ipv4/neigh/default/gc_thresh3
>
>and try to ping again and check dmesg.
I'll try that next time and see what happens. If that doesn't work, I'll
try: "ip route flush cache"
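If bumping gc_thresh3 alone isn't enough, all three neighbour-table
thresholds can be raised together. A sketch (the values are only examples,
scaled for the ~2500 neighbours above; defaults are 128/512/1024):

```shell
# Raise the ARP/neighbour cache thresholds: gc_thresh1 is the floor
# below which no garbage collection happens, gc_thresh2 is the soft
# limit, gc_thresh3 the hard limit that triggers "Neighbour table
# overflow" when exceeded.
echo 1024 > /proc/sys/net/ipv4/neigh/default/gc_thresh1
echo 2048 > /proc/sys/net/ipv4/neigh/default/gc_thresh2
echo 4096 > /proc/sys/net/ipv4/neigh/default/gc_thresh3
```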
>> realservers, so you can imagine the spam that comes into our network.
>
> I hope you know how to start proper countermeasures against these
> 'attacks'
I'm always open to creative solutions! Unfortunately I'm using qmail as
the MTA, and qmail doesn't check whether a user exists locally before adding
the message to the queue. I do lots of creative things once the message is
in the queue to purge them quickly, but we still get hit pretty hard. The
iptables chains we were using were one method we tried to help. Basically we
watched the rate of "rcpt to" commands in the packets hitting our servers. If
one IP or one network sent >N mails in X seconds, it would be added to the
list.
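That rate-limiting idea can be sketched with the iptables "recent" match
(this is my illustration, not necessarily the chains the poster used, and the
name "smtp_flood" and the 20-hits-in-60-seconds threshold are made up; note
it counts new SMTP connections per source rather than parsing "rcpt to"):

```shell
# Drop sources that open more than 20 new SMTP connections in 60 s.
# Check-and-drop first, then record the source in the "smtp_flood" list.
iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
  -m recent --name smtp_flood --rcheck --seconds 60 --hitcount 20 -j DROP
iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
  -m recent --name smtp_flood --set
```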