Re: Rate Limiting / Dynamically Deny Services to Abuses

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Rate Limiting / Dynamically Deny Services to Abuses
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Fri, 25 Jul 2003 17:13:42 +0200
Hi,

> I was after some samples or practical suggestion in regard to Rate Limiting
> and Dynamically Denying Services to abusers on a per VIP basis.

Ok.

> http://www.linuxvirtualserver.org/docs/defense.html

This is a defense mechanism, and it is always unfair: to protect the director it drops connection entries or packets indiscriminately, hitting legitimate clients as well. From what I can read, that is not what you want.
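To illustrate why: those strategies are toggled through the IPVS sysctls, roughly as below (untested sketch, values only illustrative; see the document above for what each knob does):

    # Illustrative only: the IPVS defense strategies from defense.html.
    sysctl -w net.ipv4.vs.amemthresh=1024   # available memory threshold
    sysctl -w net.ipv4.vs.drop_entry=1      # randomly drop connection entries
    sysctl -w net.ipv4.vs.drop_packet=1     # drop a fraction of incoming packets
    sysctl -w net.ipv4.vs.secure_tcp=1      # stricter TCP state transitions

They kick in when the director itself is under memory pressure, regardless of which client caused it.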

> Specifically, we are running web based competition entries (eg. type in your
> three bar codes) out of our LVS cluster and want to limit those who might
> construct "bots" to auto-enter. The application is structured so that you have
> to click through multiple pages and enter a value that is represented in a
> dynamically generated PNG.

Ok.

> 1. rate limit on each VIP (we can potentially do this at the firewall)

Hmm, you might need to use QoS, or, probably better, write a scheduler that uses the rate estimator in IPVS.
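For the QoS variant, an ingress policer on the director is roughly what I have in mind; untested sketch, device, VIP and rate are only placeholders:

    # Sketch: police traffic addressed to one VIP on the way in
    # (eth0, 192.0.2.10 and 512kbit are placeholders, not your values).
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
        match ip dst 192.0.2.10/32 \
        police rate 512kbit burst 20k drop flowid :1

A scheduler using the IPVS rate estimator would instead look at the per-service connection/packet rates and refuse or redirect new connections above a threshold.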

> 2. ban a source ip if it goes beyond a certain number
> "requests-per-time-interval"

A scheduler could do that for you, although I do not think this is a good idea.
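If you want to experiment with it outside of LVS anyway, the netfilter 'recent' match can approximate such a ban on the director; untested sketch, VIP, port and thresholds are placeholders and the match has to be available in your kernel:

    # Sketch: drop sources opening more than 20 new connections to the VIP
    # within 60 seconds (all numbers and addresses are placeholders).
    iptables -A INPUT -d 192.0.2.10 -p tcp --syn --dport 80 \
        -m recent --name abusers --rcheck --seconds 60 --hitcount 20 -j DROP
    iptables -A INPUT -d 192.0.2.10 -p tcp --syn --dport 80 \
        -m recent --name abusers --set

The reason I do not like it: NAT gateways, proxies and big ISPs put many legitimate users behind one source address, so you punish them all at once.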

> 3. dynamically take a vip offline if it goes beyond a certain number of
> "requests-per-time-interval"

Quiescing the service should be enough; you don't want to impose a penalty on other people, you simply want to stay within your maximum requests-per-time rate.
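In ipvsadm terms that just means setting the weights of the real servers behind that VIP to zero, so existing connections finish but no new ones are scheduled; for example (VIP and RIP below are placeholders):

    # Sketch: quiesce a real server behind the VIP, then re-enable it later
    # (VIP 192.0.2.10 and RIP 10.0.0.11 are placeholders).
    ipvsadm -e -t 192.0.2.10:80 -r 10.0.0.11:80 -w 0
    ipvsadm -e -t 192.0.2.10:80 -r 10.0.0.11:80 -w 1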

> 4. toss all "illegal requests" - eg. codered, nimda etc.

That has nothing to do with LVS ;).

> Perhaps a combination of iptables, QoS, SNORT etc. would do the job??

QoS is certainly suitable for 1). For 2) and 3) I think you would need to write a scheduler.
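For the worm noise, if you really want to do it on the director, the netfilter string match (a patch-o-matic extension at this point; newer iptables also wants an --algo argument) can drop the obvious probes, although SNORT or the web server itself is the cleaner place for this. Pattern and address below are only illustrative:

    # Sketch: drop obvious Code Red / Nimda probe packets at the filter level
    # (requires the string match extension; pattern and VIP are placeholders).
    iptables -A INPUT -d 192.0.2.10 -p tcp --dport 80 \
        -m string --string "cmd.exe" -j DROP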

> Any suggestions or pointers would be gratefully received.

Take care,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc
