web application security lab

XSS Hole In Google Apps Is “Expected Behavior”

You know, just when I think I’m being a super nice guy, and I go out of my way to go through responsible disclosure, I am slapped in the face with the exact reason why I don’t think responsible disclosure works for some companies. Certain companies I have worked with are ultra responsive, understand risks, and do their absolute best to combat anything that may be used to harm them or their consumers. Then there’s Google:

Hi RSnake,

On further review, it turns out that this is not a bug, but instead the expected behavior of this domain. Javascript is a supported part of Google modules, as seen, for example, here. Since these modules reside on a separate domain instead of the Google domain, cross-domain protection stops them from being used to steal Google-specific cookies, etc. If you do find a way of executing this code from the context of a Google domain, though, please let us know.

If I misunderstood the report in any way, please don’t hesitate to correct me. For the moment, though, I’m closing this issue. Thanks for sending this over.

The Google Team

BZZZT! Wrong answer. Thank you for playing, though. On further review, Google needs to figure out what XSS is used for - it’s not just for credential theft. You couldn’t make this stuff up if you tried. Putting phishing sites on their own domain is apparently expected behavior. My favorite part of this email is where Google explains how cross domain policies work. I’m simply not impressed. Click here to see the XSS hole. I’ll let the JavaScript injected on Google branded domains do the talking for me.

Anyone interested in exploiting this non-bug could tell people to add their own modules, which are, of course, hijacked, allowing the attacker to take over other people’s websites once they embed the erroneous third-party code. Kinda nasty. Unlikely, but nasty. More likely it would simply be used by phishing sites that didn’t want their own sites taken down, but wanted Google’s to be taken down instead. For the record, this is not the first time I have responsibly disclosed issues to Google, and this is the third time they have said what I reported was either not a bug or too hard to fix. So much for using responsible disclosure with Google. Ugh.

57 Responses to “XSS Hole In Google Apps Is “Expected Behavior””

  1. sirdarckcat Says:


    Some modules actually need script tags to work, so there is nothing Google can do to stop this..

    Anyway, Google has a lot of “other” domains that don’t do any kind of validation.

    For example: (create a group, and then upload an HTML file) (create a project, and then upload an HTML file)

    or even.. ( allows you to input )

    I didn’t have a “very good” experience recently with the Google team.. after they corrected an XSS flaw I reported to them, they stopped responding to my e-mails.. I didn’t receive a thank-you (I think I’ll make the Google Analytics vulnerability public). Anyway, in the past, responsible disclosure gave me good results.. but with google.. “not so much”…

    I’ll think about going to “the dark side” haha

  2. Chris Clark Says:

    The standard practice for this type of functionality is to place the content on a separate domain, and Google has followed that practice correctly. The domain may be owned by Google, but I am not sure that, as a domain name, it is directly recognizable as a Google-owned domain without performing a WHOIS lookup. While there is some risk here, in my opinion the appropriate steps have been taken to mitigate the risk while providing developer functionality.

  3. Antonio Says:

    I told Google about an XSS problem a couple years ago. I didn’t make any sort of public announcement. They had a patch up within hours and sent me a t-shirt. It was an XSS issue right off the main landing page though.

  4. RSnake Says:

    @sirdarckcat - Exactly. Designing your site to have these issues in it makes them way harder to fix in the long run. Ultimately the people who are going to get hurt are the consumers, not Google.

    @Chris Clark - tell that to the people who get phished. People who get phished don’t look at the URL. But even if they did, with the Google branding it looks real enough - because it is real. I’m not sure Google cares about the consumers though. You’re absolutely right, Google has mitigated their own risk nicely. I just frankly don’t care about Google’s risk. I care about the consumer.

    @Antonio - Oh, I didn’t say they haven’t fixed issues. They have. But this is clearly not a bug - they even said so. I have no ethical problem posting things that aren’t bugs. But as a point of reference, they’ve patched some of my bugs within hours. Others are going on three years now with no ETA in sight.

  5. Chris Clark Says:

    Branding is easily duplicated, all phishing is basically duplication of branding with a bit of social engineering thrown in. Phishing is a human problem, where customers need to be educated about how to recognize phishing and prevent it.

    If I went to that page and it was perfectly Google branded and asked for my Google user id, I would not enter my password. The reason being that the site is not served over HTTPS and the domain is not recognizably Google’s. I am not sure what Google can do technically to prevent that phishing risk. Maybe if the site were served over HTTPS from an obviously Google-owned domain I would buy your argument.


    “the site is not served over HTTPS”
    defcon/sidejacking/hamster/etc proves that even “hackers” will not follow this.

  7. RSnake Says:

    @Chris Clark - I know you mean well in what you’re saying but using yourself as a measuring stick for what people will and won’t do is pretty ridiculous. You and I do not make up the vast vast vast majority of people who will put their password in any place that looks and acts and in fact _is_ Google. I don’t know why you won’t buy my argument, literally hundreds of thousands, if not millions of identities have been stolen with less well thought out phishing schemes. Would you enter your info on an IP address? If not then you don’t represent the demographic I am concerned with.

    If Google is okay with the browsers marking their sites as phishing sites, then I’m okay with saying that it’s not an issue. Otherwise we’re not even empowering people with the tools necessary to protect themselves. Google gets a teensy bit upset at being called a phishing site. I know, because I’ve had to deal with their reports when they were being used as such. They are annoyed when their bottom line is hit - even at the expense of their consumers. You can’t have it both ways. Either fix your stuff, or let the anti-phishing technology protect all users from visiting your phishing site…. oooor be evil. Looks like they opt for the latter. Profiting at consumer’s expense.

    As a side note, being free of XSS is a requirement for any company going through PCI audits. Granted, they aren’t clearing credit cards through this interface (at least not yet), but I would have assumed they would do the minimum PCI audit on any publicly facing application. PCI audits don’t mean much, but they haven’t done even that. Apparently I would be wrong in that assumption, because according to Google, this is not a bug.

    So if it’s not a bug, they should feel confident in keeping it open. And if they won’t close it, people should feel comfortable using it in any way they see fit. And consumers should continue to be wary of any Google property. Quod erat demonstrandum.

  8. kuza55 Says:

    The question is, would a consumer be more likely to enter their details at the Google-owned domain than at some random attacker-registered domain (if the pages looked and functioned identically)? If yes, then by how much?

    I don’t think the increase would be very noticeable, especially considering it is not a well known domain, but maybe I’m wrong.

  9. Thierry Zoller Says:

    This reminds me of the FD post about that XSS warning.

  10. Ronald Says:

    Doesn’t matter, they suck. :)
    Great post RSnake! haha good stuff!


  11. pdp Says:

    so what do you suggest then? the gadget needs to run from somewhere… also, what stops phishers from buying a domain like guniverse or gmobilemode, etc., and using these for their phishing attacks? I think that Google did it right this time. However, their mistake is that their login box can be included everywhere. This increases the chances for confusion.

  12. Jake Says:

    I’ve been reading for a while — thanks for the forum, I feel like I’ve learned a lot!

    While I’m a little new at this, I’m just not sure I see what threat RSnake is getting at here, and what Google and Chris are saying makes sense to me.

    Let me go over what I understand. Google is hosting a service called iGoogle, which lets people install little modules/gadgets/widgets there. A widget is basically a collection of JavaScript that’s iframed in from a separate domain. It’s an iframe since, if an app were hosted on Google’s own domain, an XSS in either the application framing or the app itself could steal cookies.

    Today, you noticed that you could have a JavaScript XSS on gmodules. By design, you can’t steal cookies, but you’re talking about the dangers of phishing. Since gmodules is hosting arbitrary JavaScript anyway, is this really any different from any other website? There’s nothing to stop me from registering a domain, slurping Google’s HTML, then putting some JavaScript on that domain to make it look like Google’s. Your options would seem to be:

    (1) Host on a separate domain
    (2) Not host javascript apps
    (3) Use a restricted subset of javascript

    (2) seems to be a non-starter, since people love their widgets.

    (3) also doesn’t seem workable, since a big part of the draw of the gadget is that it’s a _program_ for the web, not just some highlighting or font arrangement tool (like you’d find in a blog). It’s difficult to understand how you could ever have a blacklist or whitelist that let you have something expressive enough for customer needs.

    (1) seems to be the only option, unless I’m missing something — and again, I’m new at this. Are there any solutions I’m neglecting?
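
The cookie-isolation reasoning above comes down to the browser’s same-origin policy: scheme, host, and port must all match before script on one page may touch another page’s cookies or DOM. A rough sketch of that rule, for illustration only (the hostnames are just examples, and the real policy of course lives inside the browser):

```python
# Illustrative model of the browser's same-origin rule: two URLs share
# an origin only when scheme, host, and port all match. A script served
# from a separate widget domain therefore cannot read the main site's cookies.
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Reduce a URL to its (scheme, host, port) origin tuple."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(url_a, url_b):
    """True only when both URLs have identical origin tuples."""
    return origin(url_a) == origin(url_b)

# The widget host and the main site are distinct origins:
print(same_origin("http://gmodules.com/ig/ifr", "http://www.google.com/"))  # False
print(same_origin("https://www.google.com/a", "https://www.google.com/b"))  # True
```

This is why option (1) works at all: moving the gadgets to another host changes the origin tuple, and the browser does the rest.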

  13. mybeNi websecurity Says:

    Haha the same thing happened to me: I found XSS in the Login Screen -> Responsible Disclosure -> NOTHING.
    Then the next try:
    XSS in Google Groups -> Full disclosure -> They BANNED my Email account (!!!) and no Mail / Answer

    And now last but not least I got another XSS Vulnerability and I won’t disclose it, there is so much that can be made :)
    And the great thing about this is (for me, a notorious Whitehat): I don’t feel bad and that is fucking great ;)

  14. hackathology Says:

    Great post Rsnake

  15. RSnake Says:

    @kuza55 - the answer is they would be far more likely to put their credentials in that site because the anti-phishing lists will never blacklist Google’s domains (per Google’s request). It’s far more difficult to put your username and password on a site that your browser is telling you is a phishing site.

    @Thierry - the blogspot hole is even worse in my mind. That’s something they have also claimed is not a bug but instead intended behavior. That would be an ideal place to phish users from, because people do trust that domain. People don’t understand that the cnames are untrustworthy. This isn’t about cookies, it’s about phishing - something Google hasn’t done nearly enough to help solve on their own domains.

    @pdp - I haven’t studied their business model for that page, so I can’t tell you if there is a secure way to build it or not. But there is a possibility it cannot be done. I have actually gone to high profile meetings talking about upcoming technologies where the only safe way to use it was to not use it. Sometimes there’s no safe way to deploy things, period. I’m not a defeatist in this case, because I know very little about that site, but if what you say is true, then perhaps the site is inherently insecure. That wouldn’t surprise me. Google has a number of inherently insecure apps.

    @Jake - when you say this is no different than hosting your own domain, there are two reasons this isn’t an accurate measurement of the problem. Firstly, yes, you can register some domain, but you cannot register theirs, which, while not trusted in a consumer sense, is intended to be a trustworthy domain, hence their branding. One way in which this could be trusted is in NoScript, for instance. If they require JavaScript to run widgets, people will naturally put it on their whitelist. The second reason this is different is that the browser companies have to maintain a list of sites that aren’t phishing sites but often get flagged as phishing sites. Google happens to host a lot of those. In reality Google is being used to phish consumers or redirect them to phishing sites, but Google doesn’t really fix this problem. Instead they tell the browser companies to whitelist their sites, regardless of the fact that consumers are losing their identities as a direct result of Google’s actions in two ways: 1) the vulnerability that they don’t close and 2) their insistence on being marked as a “good” site.

    So the major caveat here is that Google can no longer stop anti-phishing lists from blacklisting them if they are going to deploy insecure websites. If they stop insisting that they are safe then there’s no problem in their hosting of phishing sites. Or they could fix their holes. I don’t have a preference, as long as consumers get protected.

  16. Erwin Says:


    I got back the following e-mail when I reported the XSS in Blogspot.
    Same story. It’s a feature :)

    Maybe you need to update Wikipedia :P

    Hi Erwin,

    This issue you describe is not actually a vulnerability (and is not cross site scripting).

    Blogger allows blog owners to add arbitrary HTML to their posts - this is a feature, not a bug. Additionally, the actions you describe do not fit the criteria of cross-site scripting, described in detail on Wikipedia:

    “Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications which allow code injection by malicious web users into the web pages viewed by other users. Examples of such code include HTML code and client-side scripts. An exploited cross-site scripting vulnerability can be used by attackers to bypass access controls such as the same origin policy.”

    In this case, you are simply including allowed script in your blog. This does not constitute a security breach.


    Google Security Team

  17. RSnake Says:

    @Erwin - That’s a funny email. Google apparently hasn’t bothered to read all of that Wikipedia page that they cited to you. That is, in fact, what we like to call “stored XSS”, and yes, it very much is a vulnerability.

  18. RSnake Says:

    Actually, one more comment about that email that Erwin got. I’m actually pretty upset at the way Google treats security researchers. Lying to them about what is and isn’t a vulnerability isn’t going to help consumers. Instead of being forthcoming and saying, “Look, we know it’s a problem, and we don’t know how to solve it, do you have any interesting ideas on how we can close this without hurting our business model?” they are saying, “It’s not a hole, you don’t know what you’re talking about.” Given how often they have holes and how many of those holes are found by people who frequent this very website, I think they should change their tune, eat a nice big piece of humble pie and start treating security researchers with some common courtesy.

    As a side note and on the positive side, I heard that Michal Zalewski is now working at Google. I have _tons_ of respect for him and his work. He’s probably too new to have any say in anything that’s happened thus far, but I’m hoping he has a lot of influence on their team and makes some substantial changes within that organization. If you are reading this Michal, feel free to email me anytime.

  19. Laurens Greven Says:

    Hurray for the inside man! And props to google (and various other, safety-policy holding, unfair and ridiculous statement making companies) for denying that their so-called “features” contain bugs which attackers could use for malicious purposes!

    I would like to take my time to thank them all for ensuring safety for me and all the other people who use their services! Thanks!

  20. Ronald Says:

    Yah, so why ‘responsible’ disclosure, all they get is a free lunch and fix it next week in secrecy. In the mean time they mock it and downplay security issues, ah well isn’t this why full-disclosure came into being? It’s interesting that many vendors label such issues as a non-issue all the time, It’s just plain arrogance to call it a ‘feature’. I’m a bit tired of the ‘feature’ people these days, hence I stopped to let it bother me.

  21. MustLive Says:


    Nice vuln ;-). And the fact that this hole is on Google’s domain makes it even nicer.

    Google’s answer to your email is interesting. It means that Google doesn’t care too much about security; saying “it’s not a bug, but a feature” is simpler for them than fixing it. But the answer to Erwin’s mail is fun, Cory is hot :-). They really have problems with security with such answers. Nice email, Erwin.

    I heard that Michal Zalewski is now working at Google

    Man, where were you all this time :-) (I know where, it is rhetorical). It’s old news already. I wrote about this news on the 3rd of August. And I hope Michal will help Google with the security of their sites.

  22. Awesome AnDrEw Says:

    Silly Rsnake! This isn’t a bug. It’s a feature :). I mean isn’t that what security-handling is all about these days?

  23. kuza55 Says:

    Alright, I see your point, but it’s really hard to feel any pity for people who get phished, and I don’t think I would be willing to not develop an application because it could be used to fool people, *shrug*

    I know you just want to protect consumers, but there has to be a point where we say consumers have to take responsibility for their own actions, and we can’t hold companies accountable for their users’ actions.

    So sadly, I gotta side with Google here, they shouldn’t have to stop developing applications because users are unobservant.

  24. RSnake Says:

    @kuza55 - interesting opinion. Just for conversation’s sake, how many exploits in various browsers could have been chalked up to “user error” that allowed exploitation to occur? Sure, the user could have been observant and not done the one thing, like the drag and drop exploits. Yet they all had immediate patches applied. I think that’s an incredibly slippery slope to say users need to defend themselves from exploitation, especially when that exploitation is on “good” sites. It’s not like we’re talking about a random domain here that a bad guy registered, this is owned and maintained by Google.

    This is especially important on sites that are valid and supposed to be “good”. In this case Google actually actively prevents people from protecting themselves with modern browser security technology. It’s self serving entirely as it provides no security benefit to do so. I agree, there is a point when users do need to protect themselves, but that should include using existing browser security controls. If Google chooses to build insecure apps, that’s their choice. However, they shouldn’t degrade user security in the browser at the same time. For that they are absolutely in the wrong.

    It’s one or the other: Build secure apps or allow browsers to protect consumers from your site.

  25. David Says:

    I’ve read over the comments twice and I still don’t see what the big deal is. As someone else said, these user-submitted scripts have to run somewhere. There’s no “void” context that they can run within. Google put all of these things on their own domain to sandbox them. Sounds reasonable.

    I’ve seen the argument made that the page is “Google branded” and therefore more trusted. Other posters have pointed out that the “branding” on the page is meaningless. I could register any domain I wanted to and slap some Google “branding” on it. Users that are tricked into visiting the site have probably never visited it before. To them, it’s just another arbitrary domain name with Google branding on it. I don’t see where this “trust” issue comes from.

    The *only* possibly legitimate complaint about this issue is that Google is probably going to be less likely to list its own domain as a phishing site. But consider that someone first has to report a site as a phishing site before Google will list it as one, right? Have you considered that Google might just decide to fix the problem (remove the offending module) and solve it that way?

    Even assuming this is a real issue, how would YOU fix this?

  26. Edward Z. Yang Says:

    Here’s my spin on it: it’s traditional free web-hosting with an extra caveat: users are encouraged to be promiscuous about JavaScript they include from the domain.

    There are many free webhosters out there that cram all their shared websites onto one domain. The webhoster allows arbitrary JavaScript on the page, and the only way to distinguish between different websites is (usually) a root-level folder. If a Phisher registers an account, uploads a Phishing website (complete with JavaScript to nuke any sponsor ads), the only way for the webhoster to retaliate is… terminate the hosting. Constant vigilance, cleanup after the damage is done. From what I hear, these hosters have gotten quite nimble at shutting said sites down, so that it’s more profitable for Phishers to buy up domains or hack computers to do the dirty work.

    GModules takes it another step: they encourage other users to include code hosted on the domain on their websites. This is not new either: there are many JavaScript directories out there that don’t mind users direct linking to their scripts. Any person who embeds externally hosted JavaScript is at the mercy of whoever is hosting the code. If I’m including a YouTube video on my website, I’m trusting YouTube not to start stealing cookies from my website.

    What GModules does is obfuscate the source of the script. We’ve already seen how URL redirection services can be used to hide the identity of Phishing pages; GModules takes it a step further and actually includes arbitrary JavaScript code on their domain, which was supplied by another party. As RSnake points out, this makes domain-based whitelisting by NoScript even harder to use: you either trust GModules, or you don’t.

    Google can do several things.

    1. They can make it possible to run the scripts directly off of the domain that is *actually* doing the hosting; their only relationship is indexing and a loader file that does its magic on the XML file
    2. They can partition the scripts into unique subdomains (probably a hash of XML file’s URL). While GModules is still in the domain name, the subdomain approach allows users to selectively whitelist widgets with our current technology
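
Option 2 above could be sketched roughly as follows. This is only an illustration of the idea, not Google’s actual scheme; the base domain and the choice of SHA-1 are assumptions:

```python
import hashlib

def gadget_subdomain(spec_url, base="gmodules.com"):
    """Map each gadget spec URL to a stable, unique subdomain so every
    widget runs in its own origin and can be whitelisted individually."""
    digest = hashlib.sha1(spec_url.encode("utf-8")).hexdigest()[:16]
    return "%s.%s" % (digest, base)

# Different gadgets land on different origins; the same gadget always
# maps back to the same host, so per-widget whitelists stay stable.
print(gadget_subdomain("http://example.com/clock.xml"))
print(gadget_subdomain("http://example.com/todo.xml"))
```

Because the subdomain is a deterministic function of the spec URL, a tool like NoScript could trust one widget’s host without trusting every widget on the domain.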

    On the user-side, we can do some stop-gap solutions:

    1. Block Gmodules entirely, until it lets us separate the good from the bad
    2. Extend our trust scheme so that the URL inside the URL is also considered; effectively, give Google a free cookie and make it a “special case”

    On the whole, though, the most damage is done by the way Google encourages users to sling external scripts from random websites without any regard to that website’s credibility or the script. If it looks nice, and it works, put it on your website! (cookie stealers be darned). And, in the end, USER ERROR! :-(

  27. RSnake Says:

    @David - I agree with your first two paragraphs, with two caveats. 1) Making it easier to phish users on your domain isn’t a good practice. 2) Yes, you could register any domain, but it wouldn’t be theirs. If they didn’t want their brand to be associated with it, they wouldn’t have named it “g” modules. It’s a Google website that they want to protect the brand of. Otherwise they wouldn’t stop the browsers from protecting consumers by blocking it. See what I’m saying? Either they care about their branding (which they do) or they don’t, in which case they wouldn’t care if they are listed as a phishing site. It’s bad for their brand, pure and simple.

    Which leads into your third paragraph. Yes, I have considered that they would try a reactionary approach to this, but if indeed that is their approach they would have to admit that it is unexpected and undesired behavior, and then this conversation would be over, since that is one of my major beefs with them anyway. I’d be perfectly happy with that solution, if it actually worked. Unfortunately, they’ve tried this exact thing with their redirections being used to phish users. That’s worked spectacularly badly. So, if they do the same thing they always do and blacklist domains, they’ll have done nothing to prevent the issue, but they will have done something to react to the issue once it arises.

    To answer your last question, I’m not going to give free advice to an advertising company, even if I did have an ingenious answer off the top of my head. But, first things first, we can’t talk about mitigation until Google admits it’s a problem, which they have already denied. I’m about helping consumers, not Google. To consumers I say don’t use Google. Since consumers are brainwashed by Google’s PR department and won’t stop using their sites, I tell browser companies to remove Google whitelisting. Since Google’s lawyers would never allow that, I tell Google to fix their holes. Since Google won’t even admit they have holes, I post it to my website for all the world to see. I just don’t think we should be applauding Google innovation at the cost of consumer safety, so I may try to solve this problem for another company afflicted with the same problem, but Google would be demonstrating severe hubris to think I care about solving their problems, with the caveat that I do want to find ways to save consumers from Google’s failures.

    Now, what’s the right long term answer for all sites everywhere? I’d probably build content restrictions in the browsers. We need to figure out a way to reduce the things code can do when injected on a page, when we know it can and will be malicious.

  28. John Says:

    @RSnake: Re “the blogspot hole is even worse in my mind. That’s something they also have claimed is not a bug but instead intended behavior. That would be an ideal place to phish users from, because people do trust that domain.”

    Phishers cannot put anything on the main Blogspot domain itself. They can put things on X.blogspot.com, where X is any of several million subdomains. I hope that blacklisters understand this and blacklist at the right level when fighting phishing.

    This is a consequence of allowing users to publish things on their own unique domains without requiring domain registrar fees. If there’s another solution, please propose one.

  29. RSnake Says:

    @John - I know how it works. If you read the rest of what I wrote, I said, “People don’t understand that the cnames are untrustworthy.” I know the semantics, but you have to realize there is only a very small handful of people, compared to the population of the Internet, who realize that a blogspot subdomain is not the same thing as the parent domain in terms of security. So if one of those subdomains asks for a blogspot password, most people would enter it. Blacklisting only works after the fact. It would be much better if Google simply didn’t allow the hole in the first place.

    But to be clear, because I think a lot of people really are clueless about anti-phishing technology, this isn’t about how the blacklist is created, it’s how the whitelist is created. You cannot whitelist without rendering the whole concept of blacklisting useless for the subdomains. Whitelisting always takes precedence.
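
    That precedence can be modeled in a few lines. This is a deliberately simplified sketch of the behavior described above, not any vendor’s actual list format: once a parent domain is whitelisted, blacklist entries for its subdomains are never consulted.

```python
def classify(host, whitelist, blacklist):
    """Check the full hostname and each parent domain against both lists;
    a whitelist hit at any level wins, so whitelisting a parent domain
    neuters blacklisting of its subdomains."""
    labels = host.split(".")
    candidates = [".".join(labels[i:]) for i in range(len(labels) - 1)]
    if any(c in whitelist for c in candidates):
        return "allowed"   # whitelist always takes precedence
    if any(c in blacklist for c in candidates):
        return "blocked"
    return "unknown"

# Blacklisting the phishing subdomain does nothing once the parent
# domain has been whitelisted:
print(classify("evil.blogspot.com", {"blogspot.com"}, {"evil.blogspot.com"}))  # allowed
```

    Under this model the only ways to protect users of the subdomain are to drop the parent from the whitelist or to fix the hole itself.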

  30. kuza55 Says:

    I know it’s a slippery slope, but I’m not about to start blaming users for exploits, even if they require user interaction, because we have given them no information in regard to those issues, and they’re our responsibility to fix anyway. And while things like the Office exploits could easily be stopped if users didn’t open them, there’s no way I’m going to blame them for that (unless they’ve been explicitly told, and have had the reasons explained, etc., by their company or whatever, but then it’s an issue of violating company policy), because they have no information from us on which to base any caution; we’ve just been telling people “don’t open untrusted executables”.

    Would I blame people who get infected by trojans? Yeah, I would; downloading untrusted apps goes against everything users get told. The one exception I would make is if their mail client didn’t make it clear that something was an executable, e.g. if it didn’t show the extension or something. If I got infected by a trojan it would be my own damn fault; I knew the risk, and I ignored it, boo hoo.

    I think the line should be drawn where we’ve already told people, “don’t do this”, or “only do this”, and they disregard those instructions, phishing and trojans are prime examples.

    Personally, I don’t think Google would be in the right if they stopped their site being blacklisted, those lists should be independent, and uninfluenced by companies, etc, but I live in a perfect world, :p

    And as such, I think that when we see phishing sites popping up, and Google stops itself being blacklisted, *then* its time to be outraged and pissed off at Google for hurting consumers, atm I’m willing to give them the benefit of the doubt, since they haven’t explicitly said they’d do anything one way or the other.

  31. RSnake Says:

    @kuza55 - You said, ‘we’ve just been telling people “don’t open untrusted executables”’ That’s actually a really tough problem. Define a “trusted” executable? If I can put my executable on Google’s site does that make it trusted? Would you download _anything_ on their site? What makes it trusted? Do you validate that the code is signed, that the signature is by someone trustworthy, that it wasn’t tampered with, and that the code is non-malicious? Or how about this. Would you click on any link on Google with a default install of a browser? I don’t blame users at all for getting infected. Sure, they could do things to protect themselves, but they don’t know how. Did you start using the Internet knowing everything you know now? Of course not. You had to learn by failing. The same is true with consumers.

    You are totally within your rights to think what you want about Google. I’m only trying to raise people’s awareness of a problem, and it’s up to you to form your own opinions. But yes, there is precedent for my issues with Google. They have been used in many, many phishing attacks using redirects and other bad things. They have only closed a few of them, and by closing some of them they have admitted that it’s a problem, yet there are dozens left on the platform and I’ve been talking about this for over a year and a half now. I personally think it’s well beyond the honeymoon period for their failures.

    Since so many people want to know what I’d do for the XSS hole, I did take another look at their site. There’s no reason they have to host these apps themselves. They could easily tell the sites to host them themselves as Edward suggested. Google could simply become a directory for said apps. That would completely secure them. They could iframe the apps on other people’s sites if they really wanted to, although that would add other risks as well and I would recommend against that. A simple link, with perhaps a picture of what the app looked like (a la Mozilla’s addons) is more than enough to let people decide if they want to continue to check it out on the people’s websites.

    But that would require admitting that hosting malicious JavaScript could indeed be a security risk to their consumers. I’m not holding my breath. However, Google did surprise me once before - Google did change their tune about redirects after a year of my explaining it to them and after they aided numerous phishing sites. Go Google. I’ll open a bottle of champagne when the last redirect hole in Google is closed. In fact, I’ll do one better. I’ll shut up about Google’s problems _entirely_ when the very last redirect hole in Google is closed.

    I’ve told Google employees at Blackhat the same thing, and I meant it. I’m bored of talking about Google, and since I first explained the problem more than a year and a half ago we still haven’t seen it fixed. They have fixed a few holes, but definitely not all of them. So here’s my deal. Google, fix your redirection holes and I’ll leave you alone. Pretty please? For all the people on the Internet who can’t defend themselves?! And yes, if I can redirect through XSS that counts too. No cheating!

  32. kuza55 Says:

    You’re right, of course, I am, as always, short sighted when it comes to users doing ’something wrong’.

    I’m not disputing that this can and will be abused, but I do still think that Google should simply not stop people from blacklisting its sites when they end up hosting phishing pages.

    I say that because even if gmodules just became a directory, I’m sure that the people who would get phished when an attacker utilises JavaScript would probably still get phished if someone registered and hosted an app there; when people came there from gmodules, the site would look like a Google site and show a login. Maybe an explicit warning saying that any link you follow is not under Google’s control would help, but you never know.

  33. BK Says:

    I’m with RSnake on this one…. the whole concept of “Taking Ownership” of someone else’s content is a little scary. The reason ownership of content is so scary is that the entire trust model for the WWW is basically built on ONE thing… the DOMAIN NAME.

    Same Origin Policy, Phishing filters, SSL certs, even human trust… all basically rely on the domain name… they have to … it’s basically the only thing we can put “trust” into. Any attack that undermines this trust is dangerous. While many people (myself included) “trust” nothing… that isn’t the case for 99% of the people on the “Internets”.

    I think people like Jeremiah Grossman and RSnake have shown that XSS isn’t about stealing cookies anymore…. add in the fact that URI vulnerabilities can be executed via XSS and you’ve got serious problems…. It’s surprising to see how many people still believe XSS is just a “cookie stealing” issue (my most recent encounter was a Comp Sci PhD candidate!).

    Lastly, many domains on Google are interchangeable or provide redirection to other google domains… so while you may be wary of, you may be more willing to accept a link. (once again, taking advantage of domain based trust…)



  34. just a lurker Says:

    “The reason ownership of content is so scary, is because the entire trust model for the WWW is basically built on ONE thing… the DOMAIN NAME.”

    and that’s reason enough all on its own

  35. David Byrne Says:

    I think this comes down to the implications of hosting third-party malicious content. Google can accurately claim that this isn’t much different than many other services they and others offer. As several people have already pointed out, Blogspot allows almost any HTML content.

    Of course, general hosting providers have even fewer content restrictions. The difference is that Google is putting their name on it. Even sophisticated users are going to put more trust in something that is sent from Google servers, even if it was actually written by Bob the Bot-herder. That level of trust shouldn’t be unreasonable; Google should be vouching for the safety of content they host.

    This points to a lack of adequate filtering technologies for web content. The browser-based content restrictions that RSnake is promoting may be the best long-term solution. Until that happens, a complementary technology that requires less integration would be a good option. I would like to see Google and similar sites use filtering tools that allow users to upload innocuous JavaScript/HTML/etc., but block behavior that is more dangerous. That’s certainly a large undertaking, but I’m confident that Google has the money and brainpower to pull it off. Sharing the tool with the public would go a long way towards endearing Google to the security community.

  36. Erwin Says:

    What I find rather disturbing is that if we cannot convince Google of what XSS is and what it can do, how can we convince other non-tech companies to patch the holes?

    Hmm, reminds me of the I Love you virus :)

  37. Jon Longoria Says:

    The issue is alive on TheRegister :)

  38. pdp Says:

    ok, so what do u suggest? it seems that Google has no options at all in this case?

  39. RSnake Says:

    pdp - I already made suggestions here. But Google has every option in the world. They may not like any of them but they hold all the cards. Consumers are the ones with no options - save not using Google, and/or not trusting Google.

  40. 6d@anteater Says:

    @RSnake: What you have described is what I call the “Deflected Impacts Problem”. The decision-making party gains benefits and deflects risks to a non-decision making party. The non-decision making party doesn’t have a viable option and has no feasible means with which to reflect impacts back to the decision making company. This skews the risk rating downwards and reduces the minimum level of reasonable controls.

    In this case, Google provides a service according to its business model and the consumer carries the risk of XSS impacts. Consumers can’t pass the impacts back to Google in the form of lawsuits, because Google doesn’t have a reasonably enforceable legal duty to protect consumers against XSS. While I’m not a lawyer, the following is the sort of argument that I would expect to hear in risk evaluation/mitigation discussions:

    1) It is unclear whether Google has a legal duty to meet a security baseline at all;
    2) It would be difficult for a consumer to show quantifiable losses (damages) and to prove that Google was the actual vector for the particular attack (causation);
    3) If there is breach of duty, Google has a multi-layered legal liability shield including an indemnity clause [1] and a limitation of liability clause [2];
    4) If the consumer were provably harmed and there were a duty and the liability shield were pierced, then the murky definition of “reasonable” within an information security context and pervasively poor industry practices would show that the duty was met;

    If we want to force companies to take security seriously, the “Deflected Impacts Problem” must be solved. Bad publicity, such as this thread and the guardian article, helps to address the problem in specific instances by creating a negative impact on corporate reputation, but leaves the general problem unsolved.

    [1] Syndicated Google Gadgets Terms of Service for Webmasters
    “In the event of a legal claim against Google arising from your use of the syndicated gadgets, you agree to indemnify Google for all liability and expenses incurred as a result of that claim.”

    [2] Google Terms of Service

    [3] Wikipedia’s negligence entry

  41. Jon Longoria Says:

    Posted an article up, mainly for awareness’ sake, presenting your mission to get Google to understand the issue at hand @ .

    I’ve gotta tell you buddy, I don’t think I’ve been this agitated with an online firm since Anti-Online’s escapades of ineptitude.

  42. Jordan Says:

    This has been a great thread. I definitely started out on the side of the “it’s not really a big deal” camp, and while I’ve swung somewhat to the middle, I’m still not on the “as bad as anti-online” side.

    As hotly debated as the topic is here–from some very smart folks, no less–it should be clear this is not a cut-and-dried case of Google being in the wrong. There are good arguments on both sides.

    @RSnake: Does Google explicitly whitelist, or do they just prevent it from ever being added to blacklists?

    If it’s the latter, then it’s trivial to allow blacklisting of individual subdomains while still protecting the parent domain. If so, that’s one possible solution to the problem: allow every module its own third-level CNAME. Individual modules could easily be blacklisted, while the parent domain is whitelisted.
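    The split Jordan describes can be sketched in a few lines. Everything here is an assumption for illustration — the hostnames are invented, and neither Google’s nor any anti-phishing vendor’s real matching logic is public — but it shows how exact-FQDN blacklist entries could coexist with a domain-wide whitelist:

    ```python
    # Hypothetical sketch of the per-module CNAME scheme: blacklist entries
    # are exact FQDNs (one per module), the whitelist covers only the parent
    # domain, and an exact-FQDN blacklist hit overrides the whitelist.
    # All hostnames below are invented for illustration.

    BLACKLIST_FQDNS = {"badmodule.gmodules.example.com"}
    WHITELIST_DOMAINS = {"gmodules.example.com"}

    def suffixes(fqdn: str) -> set:
        """Every dot-suffix of an FQDN, e.g. a.b.c -> {a.b.c, b.c, c}."""
        parts = fqdn.split(".")
        return {".".join(parts[i:]) for i in range(len(parts))}

    def is_blocked(fqdn: str) -> bool:
        if fqdn in BLACKLIST_FQDNS:             # per-module block wins
            return True
        if suffixes(fqdn) & WHITELIST_DOMAINS:  # domain-wide whitelist
            return False
        return True                             # default-deny (arbitrary choice)
    ```

    Under this (assumed) precedence rule, the bad module’s CNAME gets blocked while its sibling modules on the same parent domain stay reachable — exactly the granularity that domain-level whitelisting lacks today.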

    While I agree that it’s a bit of an issue that Google is both hosting content and forcing the whitelisting of that content, I really find it hard to get that worked up over it. There have been bigger mistakes from Google in the past, and I expect much bigger mistakes from them in the future — unless of course lcamtuf is given free rein to clean house. ;-)

  43. RSnake Says:

    @Jordan - I don’t know for a fact that they do or have ever whitelisted blogspot, but they have done so with other domains in the past. They do not indemnify anyone to blacklist them. The way the whitelists work may vary between systems, but from what I have seen they work by domain or IP, not by cname or URL.

  44. Jordan Says:

    If it’s by top-level domain, yeah, that’d be a problem, but I would have to imagine that most of them are able to specify an FQDN, in which case my proposed solution seems like the easy way out.

    Put every module on a different sub-domain, and then any of them can be individually blacklisted without causing problems for the other modules.

    Though in Google’s defense (can’t believe I’m saying this — I’ve been less than impressed with their security record too!), maybe they don’t think they need to blacklist those domains, since they think they can clean up any reported phishing on the site as fast as the domain could be added to a blacklist. Seems unlikely though, and I’m not about to be the one to test it and find out.

  45. RSnake Says:

    @Jordan - that may work in the case where they can specify FQDN, but in my experience the ones I have been exposed to do not do that. Also, that wouldn’t work on Gmodules since many URLs on the same domain could be bad.

  46. 6d@anteater Says:

    @RSnake - I don’t follow what you mean by “many URLs on the same domain could be bad.” What is the downside?

  47. RSnake Says:

    @6d@anteater - The downside is you’d have to whitelist “” since it’s not separated by cnames the way blogspot is. Make sense? You can’t rely on cnames because they aren’t used.

  48. 6d@anteater Says:

    @Jordan - Let’s say, for the sake of argument, that anti-phishing software could use the FQDN, Google puts each module in its own CNAME on and changes its X.509 certificate from to [CNAME] Now, where are we?

    The duration of exposures could be reduced by adding the CNAME to the anti-phishing software’s black lists and end users can be protected after the fact. That is an important improvement. However, it is a reactive, rather than a proactive defense. The threat agents would always be ahead of the defenders. Defense costs would be expensive and ongoing. End users would retain significant residual risk.

    The risk remains, because the trust model is still being broken: a trusted X.509 certificate is still introducing unverified software to end users.

  49. 6d@anteater Says:

    @RSnake – That makes sense for the current system on gmodules.

    Changing scenarios.

    Let’s consider the scenario that I think Jordan was suggesting whereby each module is placed on a separate CNAME. In that case, there would be a one to one correlation between the FQDN and the module. So, the FQDN (and the associated module) could be blocked. In that scenario, which is different from the current system, is there still a downside to leaving the URL path and parameters unchecked?

  50. RSnake Says:

    @6d@anteater - Yes, if Google decided to put each module on its own domain, allowed anti-phishing technology to blacklist those cnames, and whitelisting worked on the FQDN instead of just the domain, that would work. Lots of ifs there, but yes.

  51. David Says:

    Why spend so much time/resources facilitating 3rd-party blacklists (through separate domains) when you can just e-mail Google and have them remove the abusive module from their site? This isn’t some web site on a compromised server in China. You’d want to e-mail them anyway to ensure the culprit doesn’t try again.

    Separate domains would be Nice To Have, but isn’t a huge improvement over the existing process for removing abusive modules (for everyone, not just consumers of your blacklist).

  52. zoob Says:

    text only website under construction: project 1
    noob learning as fast as possible:project 2 below
    wish to allow xss to individual domain using noscript on firefox for some clients (charitable nfp private organisation) with xss blocked in about:config ( this info will then be forwarded to hp as to why I have trouble using their portal, with your permission obviously)
    registered email is leaky occasionally
    alternate means available if needed (lol-not likely)
    kind regards and best wishes in your good work
    nb -apologies in advance if this is a “stupid question” or one asked in a “stupid way”

  53. x2Fusion Says:

    I’d just like to tell y’all that it was I who found this vulnerability in gModules / Google!

    Thank you,

  54. Sum Yung Gai Says:

    Here’s an idea:

    How about simply turning off JavaScript and cookies, except for those specific sites (e.g., your online banking site) where you actually *need* them?

    A convenient way to do this in practice is to use a dual-browser strategy. Specifically, I mean using something like Epiphany or Konqueror, with JavaScript/cookies turned off, for normal Internet Web surfing/searching. And then, for your bank’s site or your corporate intranet sites that may require it, fire up Firefox with that stuff turned on.

    This strategy actually works out very well in practice, and I do this all the time on GNU/Linux and *BSD (I run both KDE and GNOME). Mac OS X users can do the same thing (substitute Safari for Epiphany/Konqueror). MS Windows users can do it with, say, Opera (but *NOT* Internet Exploder!).
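    The dual-browser routine could even be automated with a small URL dispatcher registered as the system’s default browser. This is only a sketch of the idea — the browser commands and the trusted-host list are my own illustrative assumptions, not anything from the comment above:

    ```python
    # Illustrative URL dispatcher for the dual-browser strategy: a short
    # list of trusted, script-needing sites opens in the "full" browser;
    # everything else is routed to a locked-down one. The hostnames and
    # browser commands below are assumptions, not recommendations.
    import subprocess
    import sys
    from urllib.parse import urlparse

    TRUSTED_HOSTS = {"bank.example.com", "intranet.example.org"}
    FULL_BROWSER = ["firefox"]      # JavaScript/cookies enabled
    LOCKED_BROWSER = ["konqueror"]  # JavaScript/cookies disabled

    def browser_for(url: str) -> list:
        """Pick a browser command based on the URL's hostname."""
        host = urlparse(url).hostname or ""
        return FULL_BROWSER if host in TRUSTED_HOSTS else LOCKED_BROWSER

    if __name__ == "__main__" and len(sys.argv) > 1:
        url = sys.argv[1]
        subprocess.Popen(browser_for(url) + [url])
    ```

    Registered as the default URL handler, a script like this removes the human step of remembering which browser to use, which is where this strategy tends to break down in practice.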


  55. Jon Longoria Says:


    As I stated on theReformed (where you commented this very same statement), I question the validity of that statement, especially when his article is published as of “20070817” and yours was supposedly published as of “20070820” (three days later). That isn’t to mention that at least half a dozen people were aware of the issue at least 2 months before the published article on either count.

    Additionally, I might add that it’s a petty move to post it publicly to his weblog or ours @ theReformed when you could have just as easily e-mailed him and discussed it privately. RSnake is a pretty reasonable fellow and would have surely negotiated credit if you were due it.

    Instead of concentrating on the fact that the problem is being addressed by outside parties because Google refuses to, you’ve only concentrated on ensuring you get your name in lights.

  56. RSnake Says:

    Interesting follow-up on CNET:

    One correction, though: I never claimed that whitelists are shipped with the browser. What I have seen is actually a server-side process that removes anything matching the whitelist before transmitting the blacklist to the browser, which reduces the size of the download pretty significantly.
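    The pruning step described here is easy to picture. This sketch assumes bare hostnames and simple suffix matching, which may well differ from what any real anti-phishing vendor does; all names are made up:

    ```python
    # Illustrative only: before shipping a blacklist to browsers, drop
    # every entry already covered by a whitelisted domain (the entry
    # equals the domain or is a subdomain of it), shrinking the download.

    def prune_blacklist(blacklist, whitelist_domains):
        """Return blacklist entries not covered by any whitelisted domain."""
        def covered(host):
            return any(host == d or host.endswith("." + d)
                       for d in whitelist_domains)
        return [host for host in blacklist if not covered(host)]
    ```

    With a whitelist of `["trusted.example.org"]`, an entry like `sub.trusted.example.org` would be stripped before transmission while unrelated hosts survive — which is also why a domain-wide whitelist makes later blacklisting of that domain’s phishing pages impossible.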

  57. harsh punishment Says:

    I’m a user, not a hacker - probably inviting a lot of heartache here by posting. Regardless, the actual damage these malicious scripts (and not just in the form of XSS and trojans) do every day is far worse than what was done on 9/11.

    For the average user, it’s not just the heartache and lost time when you actually do get malicious code on your computer that’s the problem; it’s also the resulting doubt about causes when you don’t understand why your computer isn’t behaving as expected.

    It’s a constant sense that any day the sky is going to fall and I have no idea when or why. Here’s a prime example that just happened: “Receiving reported error (0x800CCC0F): ‘The connection to the server was interrupted. If this problem continues, contact your server administrator or Internet service provider (ISP).’” from Outlook. I have no idea what might have caused the problem. So, because I got the message, I am now wondering if I have some kind of malicious code on my computer that is going to end my world - despite the fact that I have backups and antivirus and firewall and antiphishing, and . . . and . . .

    It seems crazy that we are sending small-time drug fiends to jail while these $&*%(*%& are totally undermining productivity on the Internet. If Google is so willing to turn over customer information to the spooks, then we need to start locking the writers and posters of this code up in jail FOREVER! Lethal injection, electrocution, hell, send them to Guantanamo and let Rove interrogate them.

    At least it’s nice to know there are people like you out there trying to do what is right for consumers — even if I don’t really understand what you are talking about. By the way, I have an MBA, completed in the last 5 years with a 4.0 GPA. I’m not lacking in intelligence, just lacking time to mess with something I look upon as a business productivity tool. No different than my car.