web application security lab

Google Gadgets Gaffe

To further illuminate the problems with Google Gadgets that Tom Stripling spoke about at the OWASP conference, I asked him to type up the details so that we could all take a look at it. I think this is a fairly thorough writeup. Obviously there is some more work to be done here, but ultimately, I think this proves the point:

First of all, here is the Google documentation on inlining, if you’re interested:

My original goal was to CSRF a module onto someone’s page, then run another CSRF attack that inlined it, and then go to town. Google has thwarted my early efforts, but I’m not convinced that it isn’t possible.

Google has a parameter called “_et” that is set to a random value and required on every change to the iGoogle page. Without this value, you can’t submit a valid request to load a module, so it prevents basic CSRF.
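The _et parameter works like a standard anti-CSRF token: a forged cross-site request can’t know the random per-session value, so it fails the server-side comparison. A minimal sketch of the idea (function and parameter names are hypothetical, not Google’s actual implementation):

```javascript
// Hypothetical sketch of a per-session anti-CSRF check like iGoogle's _et.
// The server issues a random token with the page and honors a state-changing
// request only when the same token comes back.
function isValidRequest(params, sessionToken) {
  return params._et === sessionToken;
}

var token = 'aB3xZ9'; // random value issued with the user's iGoogle page

// A forged CSRF request can't include a token the attacker never saw:
isValidRequest({ moduleurl: 'http://evil.example/gadget.xml' }, token); // → false

// A request made from the real page echoes the token back:
isValidRequest({ moduleurl: 'http://evil.example/gadget.xml', _et: token }, token); // → true
```

This is exactly why the attack below focuses on stealing the token rather than guessing it.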

It turns out that the parameter also shows up on the domain for certain “approved” gadgets. So, I was going to steal this parameter with AJAX and use it to force a module onto someone’s page.

The initial attack would involve someone following a link to the page you identified that allows XSS, using a link like this:

This gadget (which is never meant to actually end up on someone’s page) makes an AJAX request to fetch the page for another (approved) gadget, steals the _et param, and tries to submit a request that loads the gadget.

It doesn’t work. I had to cut off my testing in the middle because I realized I probably needed to create slides if I was going to present this stuff, but I did notice that the _et param I was stealing was different from the one on my iGoogle home page. It may be that Google has thought of this and is preventing it by using different _et values for the different domains, or it may be that I have a bug in my JavaScript somewhere. I will be looking into this more as soon as I have time (but I have no idea when that will be).

So right now, I still can’t cross over from one domain to the other without user interaction. Still, the user interaction required is pretty weak. They provide a preview page that will load a module with one click:

And then another click to inline the module. By the way, don’t load that on your real Google account; it actually does send your cookies offsite. Feel free to download, play with, or publish any of these gadgets. Here’s another one that just does a basic phishing attack:

That’s the basics of where I’ve gotten so far. I’ll keep you posted if I figure out that _et issue.

So yes, in theory, anything sensitive you have on Google is once again at risk. This is based on the original hole discussed previously, where Google felt the hole was intended behavior. No apology needed, Google. :) Great work by Tom Stripling!

18 Responses to “Google Gadgets Gaffe”

  1. euronymous Says:


    interesting news…

    To think that I opened my blog on Blogspot just because I was looking for a secure blogging host ::):)

    quite funny

  2. tx Says:

    That _et variable is exactly what stopped me, here:,17129
    (I mistakenly referred to the non-existent _en variable in my post.)
    Some of what I found:
    It seems to actually be tied to IP at some level.
    I hadn’t been able to get XHR functioning, but you can make the request using _IG_FetchContent. The only problem with doing that is that it proxies the request, so the et value returned won’t be valid.
    Some code to play with:

    var url = '';  // URL of the approved gadget page (elided in the original)
    _IG_FetchContent(url, function (responseText) {
        var match = responseText.match(/_et="([0-9a-zA-Z]+)";/);
        var et = match ? match[1] : null;
    });
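    The extraction step in the snippet above can be exercised on its own; assuming the token is embedded in the fetched page as `_et="...";`, a plain regex pulls it out (the sample inputs here are made up for illustration):

```javascript
// Pull the _et token out of a fetched page body, or return null if absent.
function extractEt(responseText) {
  var match = responseText.match(/_et="([0-9a-zA-Z]+)";/);
  return match ? match[1] : null;
}

extractEt('var foo = 1; _et="Xy123Zz"; var bar = 2;'); // → "Xy123Zz"
extractEt('no token here');                            // → null
```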

  3. tx Says:

    Some additional notes:
    The et token is tied to the user’s browser, not to any account. So as long as Google’s cookies are present, any user who logs in using the same browser can be exploited using the same value for et.

  4. QPony Says:

    This is the message that Google displays:
    “Module requires inlining. Inline modules can alter other parts of the page, and could give its author access to information including your Google cookies and preference settings for other modules. Click OK if you trust this module’s author or delete to remove this module.”

    If users click through the warning, then it can be abused. It might be an interesting result if you can automatically skip the inlining warning.

    I don’t think the warning message conveys the threat very well. Most users don’t understand that “access to…your Google cookie and preference settings” means “access to all your Gmail, home address, phone number, search history, etc”. Google should change this to a more dire, clear warning.

    It’ll be interesting to see what happens with Caja with respect to these third-party gadgets.

  5. Dave-san Says:


    Nice work. I fear that this will become truly evil “intended behavior.”

  6. kuza55 Says:

    Maybe I’m just jaded, but to me this seems to be something along the lines of “well, I thought I had something, but it turns out Google uses a framework on their sites to stop CSRF, so umm, I’ll say something about it being easy to trick users”.

  7. RSnake Says:

    @kuza55 - I don’t think that’s an accurate assessment. While unlikely, it is very possible for people to subvert clicks using the onmousemove hover trick. What’s more interesting is that it is possible in any way at all to put my content on Google (something they said shouldn’t be possible). It doesn’t matter if it takes one extra step or not, it’s untrusted content. The point is the same, whether it’s vaguely hard or stupidly simple.

  8. kuza55 Says:

    I know that we can subvert clicks (and was actually wondering if anyone would raise that issue…), my main point was just that most of that email was cruft about how he figured out that Google wasn’t entirely stupid.

  9. Tom Stripling Says:

    I’d agree with that. They’re not *entirely* stupid. So let’s assume that it actually isn’t possible to force a malicious gadget onto someone’s page via CSRF, subverted clicks, or some other mechanism, just for the sake of argument. (I have no idea whether this is true because I haven’t yet finished looking at it.) What kind of risk would still exist?

    What you have here is JavaScript code that is uploaded by someone you don’t trust, can change at any time, could have access to every piece of information you store with Google, and the only thing that is protecting you is your ability not to click “OK” when the module asks if you’d like to run it. Maybe you wouldn’t click OK, but I think most users would. In fact, if I were really attacking this design, I would create a legitimate gadget and wait for thousands of users to start using it on their homepages. Then I would change it out from underneath them. It’s likely that no one would notice a thing.

    That’s the real issue. Regardless of how the gadget gets there, Google has created a design that allows potentially malicious users to put JavaScript on the pages of other users. That’s a flawed design in my book. Actually, it sounds a lot like the definition of XSS.

  10. kuza55 Says:


    I completely agree. There’s not much more that I can say.

    When they existed only on the gmodules page I argued that Google couldn’t really do much more, but here the situation is different. This is completely irresponsible of Google, and they should be held to account, because as you said, most users won’t notice it.

    Now, given that Google isn’t going to listen if we just say the design is completely flawed and they should scrap it, do you think there is anything we can recommend they do to mitigate this?

    Should Google keep the gadgets encapsulated in iframes, and provide an API that allows them to tell Google where to position themselves, plus an API interface to get data about you from Google (e.g., your contact details) if you choose to share it, so that they *can* do everything, but only with your consent? Any other ideas?

  11. beford Says:

    There is a XSS flaw on one of the links provided by Tom.

    They don’t check for dangerous values in the screenshot attribute and just put it on the page.

  12. beford Says:

    Forgot to add, the XSS only affects IE6.

  13. Tom Stripling Says:


    Nice work! I figured there had to be one, but I didn’t have enough time to look for it.


    That’s a good question. Having an API that allows you to strictly control the end-result JavaScript is the best solution, IMHO. There have recently been efforts to create a type of whitelist input validation for dynamic content like HTML and CSS (e.g., the OWASP AntiSamy project), but I think most of those efforts are focused on preventing *all* JavaScript from running. Developing a validation routine that selectively allows certain JavaScript but prevents dangerous attacks sounds like an intractable problem to me. And in any case, it’s never a good idea to rely on input validation alone if you can help it.

    But let’s be clear here. Google almost certainly knew the risks associated with their design when they built it. They accepted them because the business needs for the application trumped the security requirements. That’s the real problem. It bothers me that companies that handle sensitive data still think of basic security requirements as optional. I actually like Google a lot, but I don’t trust them with my data anymore.

  14. MustLive Says:


    Nice XSS hole at Google ;-)

  15. Vinicius K-Max Says:


  16. Arshan Dabirsiaghi Says:

    As Tom pointed out, allowing certain kinds of JavaScript is an intractable problem. If you have a small whitelist of “valid” JavaScript functions the user can execute, you can include that in your AntiSamy configuration file as a literal-list value.
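    The shape Arshan is describing might look something like this in an AntiSamy policy file (the attribute and values here are invented for illustration; check the project’s sample policy files for the exact schema):

```xml
<!-- Hypothetical rule: only these exact literal values pass validation. -->
<attribute name="onclick" description="Only a fixed set of known-safe calls allowed">
  <literal-list>
    <literal value="toggleHelp();"/>
    <literal value="expandSection();"/>
  </literal-list>
</attribute>
```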

    However, with that type of validation, you’re starting to get into a situation where a “creative” user might be able to do something bad.

  17. kuza55 Says:

    I didn’t mean a JavaScript whitelist.

    I meant keeping the modules on the appropriate gmodules subdomain, embedding an iframe in the iGoogle interface, and giving them an API to access the data they need, move the iframe around, or whatever they want. This API would be a call to a URL rather than a JavaScript function. For example, the script makes a request and gets the contact details back in XML if the user has allowed that module access, and gets an error otherwise. The same goes for resizing the iframe on iGoogle: the module makes a request, and if the user has allowed it, the change is applied the next time the iGoogle page is created.
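    A toy model of that consent check (all names hypothetical): the container keeps a per-user grant table, and the URL-based API only answers for modules the user has approved:

```javascript
// Hypothetical consent-gated container API, per the design sketched above:
// the gadget lives in a sandboxed iframe and asks the container for data
// over a URL; the container answers only if the user granted that access.
function handleApiRequest(moduleId, resource, grants) {
  var allowed = grants[moduleId] && grants[moduleId].indexOf(resource) !== -1;
  if (!allowed) {
    return { status: 403, body: 'access denied' };
  }
  return { status: 200, body: '<contacts>...</contacts>' }; // e.g. contacts as XML
}

var grants = { weatherGadget: ['contacts'] }; // user approved this module only

handleApiRequest('weatherGadget', 'contacts', grants).status; // → 200
handleApiRequest('evilGadget', 'contacts', grants).status;    // → 403
```

The same gate would sit in front of layout requests (resizing, repositioning), so every capability a gadget gets is an explicit user decision.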

  18. Dicipulus Says:

    I guess this is an “added feature” that Google intended as well???

    “Google’s Orkut Social Network Hacked
    Hundreds of thousands of users infected by XSS worm hidden in messages from ‘friends’”