web application security lab

Click Circumvention For CSRF

I got an interesting email from devloop today about a way to circumvent certain forms of CSRF protection. I’ve actually talked about this before but I never had a concrete example to show off. The basic premise is that if you have a two step process and the session is stored in a cookie and not on the page, you don’t need XSS to bypass it, you only need a series of CSRF (or same site request forgeries) to initiate the attack. Here is devloop’s email (modified for formatting):

Hi RSnake!
I’m one of your readers and also the developer of a web application vulnerability scanner called Wapiti ( ). My blog is at (French).

I wanted to tell you about a Cross Site Request Forgery I found on the website. This website allows users to create and join groups.

I studied a little how joining a group works, and I was looking for a simple URL that would make the victim join a group.

I created a group called “CSRF” ( ). Typically a user who wants to join a group must go to the group homepage and click the “Join this group” link to go to the page. Then the user must confirm the registration by clicking a submit button on an empty <form> with action=

At this point I was thinking that users would join the CSRF group just by going to this page. But there was a kind of CSRF protection: users were asked for a new confirmation, this time going to

If the user goes to first, he will be redirected to

The solution was to make sure users go to both URLs. So I posted a news item on the forum and in my journal section with two <img> tags linking to the URLs: [img][/img] [img][/img] ( allows some BB code)

I started with a single member in the group (me) and now I have 14 users in it :)

I hope to get more users soon, as friends of the victims may be infected too just by looking at the CSRF group page.

Bye ! and keep posting on your blog.

This is exactly what I was talking about. In some applications that I’ve seen (but can’t post for NDA reasons) a type of click protection is used to prevent users from going from page to page. However, the cookie to allow the activity is stored upon user submission of the first CSRF.

That allows the attacker to force you to perform the action through a series of clicks, without actually needing the page to render (since it breaks inside of an IMG tag). All the attacker really needs is to get you to visit the page so you pick up the proper headers (the authorization token), so that the second request is sent as an authorized user. Lesson learned? A one-time-use authorization token meant to stop CSRF has to be something that the page emits, not just something that the server headers emit. And even then you have to make sure there is no XSS on the same host, or even on the same IP, so they can’t read it. Tricky.
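The lesson above can be sketched in code. This is a minimal, hypothetical example of a page-emitted synchronizer token (names like `issue_token` and the in-memory store are mine, not from any real application): the token only exists in the rendered form body, so a forged request that never renders the page has nothing to echo back.

```python
import hmac
import secrets

# Hypothetical server-side store: session id -> outstanding one-time token.
_tokens = {}

def issue_token(session_id):
    """Generate a one-time token and remember it for this session.
    The token must be written into the page body (e.g. a hidden
    <input>), not merely set in a cookie or response header."""
    token = secrets.token_hex(16)
    _tokens[session_id] = token
    return token

def verify_token(session_id, submitted):
    """Check the token echoed back by the form submission, and burn it
    so it cannot be replayed. A CSRF page that only triggered the
    request (without reading the page) never saw this value."""
    expected = _tokens.pop(session_id, None)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

In a real application the store would live in the session backend, but the shape of the check is the same: emit in the page, compare on submit, discard after one use.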

24 Responses to “Click Circumvention For CSRF”

  1. Jungsonn Says:

    Nice example!

I like the way he applies and confirms users to a group just by having them look at the page, clever and neat. :)

  2. Soppena Says:

Two GETs in a row forming the attack?
What’s so spectacular about this? :)

  3. RSnake Says:

    I wouldn’t say spectacular, Soppena, but it does show a real world example where I might have had a hard time articulating this issue in the past.

  4. dw1de Says:

If there is a complicated business process behind the first click that requires time to execute, this attack technique will still fail, as both images render at (virtually) the same time. I have played with this on some of the business-process capture software that I evaluate for my client. What you can do is chain images together using embedded script. Now, if you can get this script to run you can do much more evil, but it serves the purpose.

    The call to will only occur after the response has been fetched and processed as a broken image. This will force a proper timing sequence if needed.

  5. RSnake Says:

    I’ve spent exactly 2 seconds thinking about this so forgive me if I’m way off base here, but what would stop me from chaining a few 301 redirects together and making them nice and slow (since I own the logic that outputs the headers) and making it wait long enough for the duration to expire?
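    RSnake's slow-redirect idea can be sketched roughly like this. It is a minimal sketch under my own assumptions (the function name and URLs are hypothetical): each attacker-controlled hop stalls before answering with a 301, so the second forged request only fires after the first has had time to finish server-side.

```python
import time

def slow_redirect_hop(next_url, delay_seconds):
    """One attacker-controlled hop in a redirect chain: stall for a
    while, then answer with a 301 pointing at the next URL. Chaining
    a few of these delays the second forged request until the first
    one's business process has completed."""
    time.sleep(delay_seconds)
    return 301, {"Location": next_url}
```

A real deployment would put this behind an HTTP server the attacker controls; the point is only that the attacker owns the timing of the headers.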

  6. dw1de Says:

If there is a complicated business process behind the first click that requires time to execute, this attack technique will still fail, as both images render at (virtually) the same time. I have played with this on some of the business-process capture software that I evaluate for my client. What you can do is chain images together using embedded script. Now, if you can get this script to run you can do much more evil, but it serves the purpose.

<img id="happyimage" src="" onError="img = document.getElementById('happyimage'); if (img.src==''){img.src = '';}">

    The call to will only occur after the response has been fetched and processed as a broken image. This will force a proper timing sequence if needed.

  7. devloop Says:

    changed the code so the GET requests won’t be accepted anymore :

  8. RSnake Says:

That was quick! Good for them! Looks like they’re still vulnerable to a series of POST requests, so hopefully no one ever visits a malicious off-host site with XSS in it that can send multiple POSTs. But whatever, at least they were quick about fixing it.
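    Rejecting GETs alone doesn't stop cross-site POSTs: any attacker page can auto-submit a hidden form at the victim site. A minimal sketch, assuming a hypothetical target URL and field names (the function is mine, purely illustrative):

```python
def forged_post_page(action_url, fields):
    """Build an attacker-hosted page whose hidden form auto-submits a
    POST to the target as soon as it loads -- no XSS on the victim
    site is needed, just a visit to the attacker's page."""
    inputs = "".join(
        '<input type="hidden" name="%s" value="%s">' % (name, value)
        for name, value in fields.items()
    )
    return (
        '<html><body onload="document.forms[0].submit()">'
        '<form method="POST" action="%s">%s</form>'
        "</body></html>" % (action_url, inputs)
    )
```

This is why the token has to be unpredictable and page-emitted; merely switching the verb from GET to POST changes nothing for the attacker.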

  9. RSnake Says:

dw1de, I think we are saying the same thing, but going about it in different ways. Yours requires XSS. Mine only requires one direct image link and one image link to an offsite server under my control that goes through some very sloooooow redirections back to the victim site.

    Same concept, different technologies.

  10. maluc Says:

yeah, if you need a time delay.. you can use two redirects and throttle the second one..

so put in <img src=><img src=> .. and make show2.php wait a few seconds before it returns a header redirect to the second one

cookies or tokens are pretty much required for reliable flow control. that’s still assuming you don’t have a single XSS hole on the domain .. sticky business.

  11. dw1de Says:

RSnake, I agree we are both basically doing the same thing via different methods. And my bit crosses the fine line between CSRF and XSS. That said, the reason I initially used this technique was that some nastier XSS wasn’t possible for various reasons. Your attack would also require a website under your control, which would make it very easy to shut down the attack once noticed.

A cool idea, though.

  12. maluc Says:

Well, there are a lot of ways around that.. for example, open redirects (302s should work fine as well).. sites with response splitting.. public redirectors like .. botnets.. etc.

    It really depends on the situation, and the scale of the attack - just gotta be creative

  13. RSnake Says:

I was thinking the same thing. Just find a few redirections on a few super-slow connections somewhere in another country. You just need to slow it down enough to get it to work. The absolute maximum you should force the wait is 8 seconds, since that is the longest consumers reasonably expect a page to take to load without getting anxious. I’m sure you could find several redirects that last 2-3 seconds apiece and chain them together.

  14. Stefan Esser Says:

A clever (but complicated) application design can stop the power of XSS. It all boils down to embedding all FORMs in iframes that each have a different hostname, like:

This makes application design a bit tricky (especially if you still want AJAX features), but if you implement it correctly you can have an XSS on your site and the attacker is still not able to submit forms with valid tokens through XSS. (At least not via the FORMs that are not themselves vulnerable to XSS.)
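    One way Stefan's per-form hostname could be derived is sketched below. This is my own guess at a mechanism, not his implementation (the secret, function name, and domain are all hypothetical): each session/form pair maps to an unguessable subdomain, so an XSS payload running elsewhere on the site sits in a different origin and cannot read that frame.

```python
import hashlib
import hmac

SERVER_SECRET = b"change-me"  # hypothetical server-side key

def form_subdomain(session_id, form_id):
    """Derive the unpredictable hostname that serves one form inside
    its own iframe. Same-origin policy then keeps script on any other
    hostname of the site away from this frame's token."""
    mac = hmac.new(SERVER_SECRET,
                   ("%s:%s" % (session_id, form_id)).encode(),
                   hashlib.sha256)
    return "f-%s.example.com" % mac.hexdigest()[:16]
```

The HMAC makes the hostname stable for the legitimate user's session while remaining unpredictable to anyone without the server secret.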

  15. dreamscape Says:

In addition to merely pointing out a poor CSRF implementation, this example also opens the door to the whole host of issues that exist and can creep up thanks to third-party cookies.

  16. RSnake Says:

    Stefan, that’s interesting, but it would kill any link value it might have from search engines if they couldn’t get back there, and if people couldn’t go there themselves. If it’s internal only, that might be okay though. However, if there are links to it, you can find the names of the links through XSS.

So you have to be certain there is no XSS anywhere on any page that you can get to without a session ID. I have a feeling it would be overly complicated and wouldn’t buy you a whole lot unless you were already pretty sure you didn’t have XSS issues on any page that wasn’t protected in this way (and there would have to be some, otherwise users couldn’t get there).

  17. Webdevelopment Blog » CSRF protections are not doomed by XSS Says:

    […] Today I was looking at rsnake’s blog where he described how visiting two URLs in a row bypassed a CSRF protection implemented by While it is funny that someone believes enforcing some kind of application flow control by URL checks can stop CSRF it was not what caught my attention. […]

  18. Stefan Esser Says:

RSnake: I don’t have much time at the moment, but I will create a demo in the future. I know that there are some really tricky things to overcome. It might be possible that my idea only works 100% when httpOnly cookies are supported… (but that would just be one more reason to force Mozilla to implement httpOnly cookies).

  19. RSnake Says:

HttpOnly cookies can still be read by XMLHttpRequests on the local domain, so be careful not to rely on that alone. I’d love to see an example of this (WITH an XSS exploit on the parent domain as well as the subdomains that change) to see if we can force a CSRF (or same site request forgery, as the case may be).

  20. maluc Says:

    in theory, i think it should work.. if implemented properly.

    Also be sure to avoid all double line returns on the form pages.

    It is quite a pain to code though, i wouldn’t want to have to do it.

  21. Stefan Esser Says:

RSnake, do you mean reading the headers via TRACE, or do you mean reading the Set-Cookie headers in the reply?

The first one is a configuration problem that should of course be fixed, and the second one is a flaw in the httpOnly design. Set-Cookie headers for httpOnly cookies should not be readable by XMLHttpRequest. (Actually, I don’t know why anyone should have any access to Set-Cookie headers from XMLHttpRequest…)

  22. RSnake Says:

I was actually talking about the latter. TRACE really isn’t much of an issue and is easy to fix. I’m really talking about how XMLHttpRequest can just read any old header it likes. That’s bad, and makes HttpOnly fairly flawed. That’s my #1 complaint, other than the fact that Firefox still doesn’t support it, except via that unsupported HttpOnly plugin.

  23. Chris Shiflett Says:

    Using an IFrame to help protect against XSS is an idea I’ve been experimenting with, but focused on strengthening session security rather than as a CSRF protection. (These two ideas are related, I suppose.) I think it could be useful, but putting the session identifier in the DOM exposes it when there are XSS vulnerabilities. Let’s not make session hijacking easier in an effort to make CSRF harder.

    Also, if each IFrame is generated from the same source, XSS vulnerabilities in one probably means XSS vulnerabilities in the others as well, weakening (but not eliminating) the benefits of using unique subdomains for each user. Perhaps a unique subdomain per form would be easier to manage without losing all of the intrinsic value.

    In defense of httpOnly, Set-Cookie is an HTTP header, so that exposure doesn’t violate the basic tenet. It is dangerous, sure, but even if httpOnly did everything we wanted and was supported by all the major browsers, it wouldn’t be a complete solution (or even close to one) for a long time, because there will always be users who are very slow about upgrading their browser.

    Anyway, nice idea, Stefan. I think this is a useful avenue to explore.

  24. maluc Says:

Well, Stefan did include both a session id and a unique form id in the subdomain name.. quote:

but you do raise an interesting point that the session id would be readable from an XSS in the DOM of the page that loads it. Using a redirect might be able to avoid that, like <iframe src=> which redirects to. But a better solution might be to keep it as and just use a token

taking that a step into the unnecessary.. it could be possible to make an entire site where every page has its own subdomain.., etc.. which has the side effect of not allowing you to use xmlhttp anywhere

    except maybe by using a communication proxy at (no subdomain) .. but i’m digressing and not even sure if that’s possible ^^