
HTTP Longevity During DoS

One of the things I noticed early on in my testing of Slowloris was that not every server reacted the way you’d expect. Some gave database errors - I’m assuming because the database connections had different limits than the HTTP server. Whatever the reason, it seemed only vaguely interesting at first, from a fingerprinting perspective. The other problem was that to “see” the behavior I basically had to hit a race condition: connect to the webserver with my browser in the split second after the sockets were freed but before the database could recover. Not exactly the best way to go about things.

Then I started thinking about how HTTP pipelining works in browsers, and also how HTTP keep-alive sessions can send more than one request over a single socket. So imagine this: a server is under a Slowloris attack, and either prior to the attack, or by re-purposing one of the existing sockets, you send something like the following:

GET / HTTP/1.1
Host: www.whatever.com
User-Agent: Mozilla/4.0 …
Connection: Keep-Alive
Accept-Encoding: identity, *;q=0
Accept-Language: en
Range: bytes=0-10

On some web servers this will send back only a small amount of data because of the Range request header (bytes 0 through 10), which makes it awfully similar to a HEAD request in that you don’t waste bandwidth downloading something you don’t care about while you wait for the attack to ramp up. More importantly, the Keep-Alive allows you to send a second request a while later, then another, and so on…
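
If you want to see this for yourself, here’s a minimal sketch in Python (using the placeholder host from the request above, and assuming the server honors both Range and Keep-Alive) that sends the request over a raw socket and then re-uses the same connection a minute later:

import socket
import time

HOST = "www.whatever.com"  # placeholder host from the example above, not a real target

REQUEST = (
    "GET / HTTP/1.1\r\n"
    "Host: " + HOST + "\r\n"
    "User-Agent: Mozilla/4.0\r\n"
    "Connection: Keep-Alive\r\n"
    "Accept-Encoding: identity, *;q=0\r\n"
    "Accept-Language: en\r\n"
    "Range: bytes=0-10\r\n"
    "\r\n"
).encode()

sock = socket.create_connection((HOST, 80))
sock.sendall(REQUEST)
print(sock.recv(4096))  # expect a small 206 Partial Content response

time.sleep(60)          # idle, but under the server's keep-alive timeout
sock.sendall(REQUEST)   # a second request over the *same* socket
print(sock.recv(4096))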

What that means is that you can be the one person sitting at a very large table - you’ll have the website all to yourself, because all the other sockets are tied up and no one else can use the site. With some re-programming Slowloris could be made capable of that task, or a secondary program could initiate and hold open a certain number of sockets that you can use and re-use as you probe the site, or simply browse it in peace - no one else will be on the site to bother you. It’s just another interesting side effect of a DoS: it denies the service to everyone except you.
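
Here’s a rough sketch of what that secondary program might look like (the pool size and refresh interval are arbitrary): open a handful of keep-alive connections up front, then periodically send the cheap Range request over each one so the server doesn’t time them out:

import socket
import time

HOST = "www.whatever.com"   # placeholder, as above
POOL_SIZE = 5               # arbitrary: how many sockets to reserve for yourself
REFRESH = 60                # seconds; keep this below the server's keep-alive timeout

def keepalive_request(host):
    return ("GET / HTTP/1.1\r\n"
            "Host: " + host + "\r\n"
            "Connection: Keep-Alive\r\n"
            "Range: bytes=0-10\r\n"
            "\r\n").encode()

# Grab the sockets before (or during) the attack...
pool = [socket.create_connection((HOST, 80)) for _ in range(POOL_SIZE)]

# ...then keep each one warm with a cheap partial request.
while True:
    for s in pool:
        s.sendall(keepalive_request(HOST))
        s.recv(4096)        # small 206 response; discard it
    time.sleep(REFRESH)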

17 Responses to “HTTP Longevity During DoS”

  1. plunge Says:

    This certainly takes auction sniping to a new level.. :)

  2. Christian Folini Says:

    Hi RSnake, I see you are spinning your concept further. Luckily, there are MaxKeepAliveRequests and KeepAliveTimeout, which give you some leverage in the defense of an Apache webserver against this flavor of the attack.

    Otherwise, mod_qos is the tool to control keepalive dynamically once things get stiff.

    What is intriguing here is the idea of the secondary program. I’m sure somebody will eventually program the “customer-is-king” Firefox extension that will do just this.

  3. yunshu Says:

    hello, I don’t think this paper is useful.

    I think you use the Range field just to decrease the response bandwidth; the main attack parameter is Keep-Alive. If you want to use this field to keep a TCP connection open for a long time, I can tell you that the connection will be closed once your connection time equals the KeepAliveTimeout, even if you continue sending HTTP GET requests that contain Keep-Alive.

    Of course, you can create a new connection when one is closed, but that sucks - people used this method to attack many years ago.

  4. RSnake Says:

    @yunshu - right, but the Keep-Alive timeout is normally 5 minutes. Realistically, I probably only need a few seconds to switch over to another screen, launch Slowloris, and switch back. You don’t need to keep it alive for an hour to do damage, just a few extra seconds.

  5. Jamie Jones Says:

    FreeBSD has kernel HTTP accept filters that hold back requests until they are complete. This functionality is about 10 years old.

    I tried this program on my modest server, first setting all the timeouts to the default, and the program had no effect.

    Only when I disabled accept filters did the program do as it is supposed to:

    http://www.freebsd.org/cgi/man.cgi?query=accf_http

    http://httpd.apache.org/docs/1.3/mod/core.html#acceptfilter

    http://httpd.apache.org/docs/2.2/mod/core.html#acceptfilter

    One in the eye for the Linux fanboys. :-)

  6. RSnake Says:

    @Jamie - try the -httpready flag on the tool. That completely bypasses accf_http (AKA HTTPReady).

  7. Jamie Jones Says:

    You need to modify your program to try and get around mod_security, which drops the connections:

    [Mon Jun 22 20:15:51 2009] [error] [client 66.148.74.42] mod_security: Access denied with code 406. Pattern match "!^$" at HEADER("Content-Length") [severity "EMERGENCY"] [hostname "www.thebgb.net"] [uri "/"]

  8. RSnake Says:

    @Jamie - I’ve spoken with Ivan Ristic about mod_security and he specifically said that it runs too late to protect against Slowloris. I think you’re seeing an unrelated block and attributing it to Slowloris. Or if not, you aren’t using a “normal” configuration and we should probably investigate further.

  9. Jamie Jones Says:

    Apologies. I’m totally stupid.

    Firstly, I don’t know HOW I missed the bit about http_accept_filters.

    Sure enough, it does freeze the Apache server now.

    And I checked again - those mod_security messages are related to your program (the IP address is the address of my server itself, so nothing else would be doing it). However, you’re right - mod_security doesn’t block them until I cancel the program. So yes, mod_security does block and report them, but too late to stop your program from working.

    I got confused because the messages were coming straight away when I wasn’t using your special http_accept_filter rules.

    So a double whammy of stupidness. Sorry to waste your time.

    Cheers,
    Jamie

  10. RSnake Says:

    @Jamie - it happens to the best of us. No worries.

  11. Mike Adewole Says:

    I posted the following on another blog (http://www.codexon.com/posts/defending-against-the-new-dos-tool-slowloris) and thought it may be of interest to readers here:

    Option #2 [limiting connections by IP address] is by far the best, but it is still not strong enough as [you] described it. In addition to limiting connections per IP address, it is also very important that all requests from one IP address are read serially (i.e. one at a time) and not in parallel.

    To understand why, consider an attacker that controls a botnet with N hosts. If the attacker uses each host to attack a server that implements option #2 with M connections per host, then the attacker can busy-wait N * M HTTP sessions on the server when the requests are read in parallel. But if the requests are read in series, then the attacker can only busy-wait N HTTP sessions.

    When requests are queued per IP address and read in series, an attacker cannot consume more HTTP sessions on the server than the number of hosts under his control, thereby defeating the attack in the sense that it is no longer viable (the attacker has to expend as many resources as he consumes on your server).
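
    To make that concrete, here is a minimal sketch in Python (all names and values are illustrative, and a real server would need to expire old entries): one lock per client address, so a slow sender only stalls later requests from its own address:

    import socket
    import threading
    from collections import defaultdict

    ip_locks = defaultdict(threading.Lock)   # one lock per client IP

    def handle(conn, addr):
        try:
            with ip_locks[addr[0]]:          # requests from one IP are read serially
                data = b""
                while b"\r\n\r\n" not in data:   # the header read an attacker drags out
                    chunk = conn.recv(4096)
                    if not chunk:
                        return
                    data += chunk
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        finally:
            conn.close()

    srv = socket.socket()
    srv.bind(("", 8080))
    srv.listen(128)
    while True:
        conn, addr = srv.accept()
        threading.Thread(target=handle, args=(conn, addr), daemon=True).start()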

    Cheers
    Mike

  12. Picci Says:

    Mike,
    that would mean that on a NATed network, you would have person X waiting for person Y to load the page (FIFO)…
    On networks where 10k people share the same exit node, you would have 9,999 people pissed off waiting for the Google logo to appear on their home page… not such a great idea…
    Bye,
    Picci

  13. Jeff Schroeder Says:

    So would an iptables rule rate-limiting traffic to xxx connections per IP address per second stop this attack? Sure, you’d have to set it fairly high so as not to kill proxies or a bunch of hosts behind NAT, but you could bump up your Apache children as well.

    It works like a champ in stopping SSH brute-forcers.

  14. Jamie Jones Says:

    Incidentally, I was already using mod_limitipconn here, and while it helped against some ‘caching proxy’ that was trying to grab every possible variant of the pages on one of my forums each time a user logged in, it didn’t stop your script: the module relies on seeing a request before it can tell whether it’s the type that needs blocking, so it made no impact whatsoever…

    Has anyone come up with a decent solution? All I can think of is limiting connections at the firewall, but that will annoy NAT users…

  15. Jamie Jones Says:

    I’m thinking of an extension to the HTTP_READY / http_accept_filter code that can be set to ‘hold on’ to connections when a certain threshold of connections from the same IP has been received…

    So, in addition to the current behavior, where a connection is ‘held’ by the kernel until the request has been received, a connection would also be held if X connections from the same IP have already been ‘passed’ to userland. That would mean legitimate NAT users would just see a delay rather than having their connection dropped. Would this be the way to go?

  16. Ph33r Says:

    RSnake, I was able to DDoS a very large list of security websites using Slowloris after editing a few things, together with another pickup tool written in Python that redirects HTTPS to HTTP. I tried it with Slowloris alone and got banned or ignored, but that redirecting kit did a good job alongside Slowloris. For example, some of the sites that were DDoSed:

    offensive-security.com
    securityfocus.com
    securiteam.com
    exploit-db.com

    And it didn’t take more than 5 seconds to hang each server.

    I just wanted to mention that to you.

    Good job

  17. RSnake Says:

    @Ph33r - ouch… that’s quite a list!