web application security lab

Recursive Request DoS

As a follow-on to my post on hacking intranets by using websites (not web browsers), it occurred to me that the same technique opens up a potential DoS situation. First let me explain how it wouldn't work, so that I can explain how it may work. Say you see a URL that looks something like this:

You only get one iteration of the script calling itself. Of course you could chain them together and maybe get a few dozen requests out of one request. That’s fairly bad on system resources, but nowhere near as bad as it could be. Let’s take another example where once you submit a request it creates a session key, like so:

There are a few ways that session key could be created: it could be based on time, it could be a counter, or it could be a hash of something. In the case of a hash you're going to have a really hard time doing anything, because you would have to predict the URL containing that hash. But let's say it's something predictable, like a timestamp or a counter, and that I could re-request the same URL over and over without caching. Maybe the key is stored in a DB and never flushed. Then there may be a situation where you could cause a recursive DoS condition.
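As a rough sketch (hypothetical key schemes, not any particular app), the difference between a predictable counter-based key and a hash-based key looks like this in Python:

```python
import hashlib
import itertools

# Hypothetical key-generation schemes; names and values are illustrative.
counter = itertools.count(1234567890)

def counter_key():
    # Counter-based: predictable, since seeing one key reveals the next.
    return str(next(counter))

def hash_key(secret, value):
    # Hash-based: unpredictable without knowing the hashed inputs.
    return hashlib.sha256((secret + value).encode()).hexdigest()

# An attacker observes one counter key and predicts the next one,
# so a URL containing the next key can be built before it is issued.
observed = counter_key()            # "1234567890"
predicted = str(int(observed) + 1)  # "1234567891"
```

With the counter scheme, `predicted` will match the key the server hands out next; with the hash scheme there is nothing comparable to increment.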

If you knew the next request was going to end up being the key "1234567891" and you could tell that request to point anywhere, you'd point it to the URL:

That would make the machine connect back to itself, which would make it connect back to itself, and so on. Each iteration would tie up system resources and keep sockets open on the machine until they timed out. So a single request could end up forcing the web server to connect back to itself hundreds of times (the exact number being a function of how slow the process was, as well as the maximum connections and timeout settings). That's probably not too interesting and fairly uncommon, but it may be worth mentioning in case someone else can come up with something interesting there.
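To see why that exhausts the server, here is a toy simulation (pure Python, no real HTTP; the worker-pool size is an assumption) of a single request that points the server back at itself:

```python
# Toy model of the self-connect loop: no real networking, just a server
# whose worker pool is drained by one self-referencing request.
MAX_WORKERS = 10  # assumed maximum concurrent connections

def handle_request(url, fetch, workers_busy=0):
    # Each in-flight request holds a worker slot while it fetches the
    # URL it was given; the slot is not freed until the chain ends.
    if workers_busy >= MAX_WORKERS:
        return workers_busy  # pool exhausted: new requests are denied
    return fetch(url, workers_busy + 1)

def self_fetch(url, busy):
    # The URL points back at the server, so fetching it is a new request.
    return handle_request(url, self_fetch, busy)

tied_up = handle_request("http://victim/page?key=1234567891", self_fetch)
```

Here `tied_up` ends up equal to `MAX_WORKERS`: one attacker request has every worker waiting on another worker, which is the denial-of-service condition described above.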

12 Responses to “Recursive Request DoS”

  1. hackathology Says:

Great idea!! However, I will have to test it live myself to see whether the theory works in practice.

  2. buherator Says:

    I think the same happens if you trick a badly validated php script like this:

    The index.php is loaded via an include() again and again because the $page variable is never “forgotten”. And there are _many_ sites that are vulnerable this way!

  3. goodwinster Says:

    I guess most people would attempt to protect against the first type by simply checking the URL is for a resource outside the originating domain.

    In which case you could find 2 sites that are vulnerable and point them at each other.

  4. Awesome AnDrEw Says:

    This is a nice theory I didn’t even think about while I was investigating some open-source scripts I’ve found on another website.

  5. Awesome AnDrEw Says:

    So I’ve just investigated to see how well it worked with the script, and apparently it attempts to load the URL 5 times before it causes a 404 error.

  6. cheney_usa Says:

    Whenever I see a redirect url that is restricted to the same domain, I try this. I’ve found several over the past year during assessments.

    Always fun to watch.

  7. Ronald Says:

Yeah, you'll need to time it out a little, just enough to finish the request that came before, otherwise it just goes into a bottleneck. It actually looks like a SYN flood, btw. Maybe if you can mash this up with HTTP response splitting you'll have an evil app in yo hands ;)

  8. lordm Says:

I've also seen this on quite a few assessments, though I have never confirmed it as a real denial of service. I can certainly imagine a case where XSS is stored on some prominent web app only to be carried out by every browser that hits the site. So I've reported this as a finding when I've located it as a threat.

    It seems to be fairly common with .NET apps that I’ve assessed as well as with sites that use crap like “SiteMinder.”

    See ya

  9. cheney_usa Says:

    One browser DoS I found was just giving the redirect page a blank url. Bad input validation and it just started requesting itself.

    Good times.

  10. Kyo Says:

    Interesting concept. I was once on a site that only let a user include a script on the same server (to protect itself from remote inclusion, obviously)

It was like ?url=gallery.php
So, I went to ?url=index.php, creating an infinite loop of the script including itself. This worked because the GET variable "url" stayed "index.php" for every included page, thus including itself infinitely.

  11. zeroknock Says:

It's there, even when it is treated as URL concatenation. One can check third-party URL redirection, or point the recursive query at an unsolicited URL.

  12. ungenau Says:

    You can get an infinite loop with services like snipurl. Of course, it is not the server itself that loops, but the browser: