web application security lab

Archive for the 'XSS' Category

XMLHttpRequest “Ping” Sweeping in Firefox 3.5+

Monday, July 20th, 2009

Jeremiah brought my attention to the new Firefox 3.5+ CORS (Cross-Origin Resource Sharing) support, which is a way to do a cross-domain XMLHttpRequest. Does that sound scary? Well, it is, but a ton of work has gone into hardening it. It has all sorts of cross-domain opt-in verification built in to limit abuse. Honestly, if you look at the people who were acknowledged in its construction, it’s a who’s who of people who understand cross-domain browser security issues. So it wasn’t surprising that it was fairly free of obvious flaws.

Anyway, I was poking around with it and noticed one fairly strange issue. Although an attacker is not allowed to know whether the page was there or not (only whether it was allowed to see the content), the attacker is still allowed to make an initial request. That initial request can be used as a pseudo “ping” sweep. You can tell whether the site is there because the request will either return immediately (latency and threading apply) or wait around much longer (between 20-75 seconds on the several networks I’ve run this on) before the browser gives up. That timing difference is pretty substantial, and as a result you can enumerate a substantial amount of internal address space behind the victim’s firewall relatively quickly. I created a demo here (it works only in Firefox 3.5+ and you must enable JavaScript globally for it to work). If you use NoScript, whitelisting ha.ckers.org is not enough - you have to globally allow JavaScript for the demo to work, and you must disable ABE in NoScript as well.
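As a sketch of the idea - this is not the demo’s actual code; the probe function, the 5-second threshold, and the callback shape are my own assumptions:

```javascript
// Classify a probed address by how long the request took to come back.
// Live hosts answer (or refuse) quickly; dead addresses hang until the
// browser gives up, roughly 20-75 seconds on the networks mentioned above.
function classify(elapsedMs, thresholdMs) {
  return elapsedMs < thresholdMs ? "up" : "down";
}

// Browser-only portion (Firefox 3.5+): fire a cross-origin XMLHttpRequest
// at an internal address and time how long it takes to fail. CORS hides
// the response body from us, but the timing side channel remains.
function probeHost(ip, callback) {
  var start = Date.now();
  var xhr = new XMLHttpRequest();
  xhr.onload = xhr.onerror = function () {
    callback(ip, classify(Date.now() - start, 5000));
  };
  xhr.open("GET", "http://" + ip + "/", true);
  xhr.send();
}
```

Sweeping a range like 10.10.10.0/24 would then just mean calling probeHost() for each address and watching which callbacks fire quickly.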

You can read the page for the details, like the fact that basic and digest authentication popups are suppressed, which makes this technique ideal for intranets where those are common and would normally alert a user that something was wrong in the browser. It also doesn’t matter whether you have port 80 open for this to work. I should note that there is an IE 8.0 analogue to Firefox’s cross-domain XMLHttpRequest called XDomainRequest, but I didn’t have much time this weekend to try to get this working in both browsers, so I have no idea whether it has the same issue or not.

Incidentally, Jeremiah and I both gave the thumbs up to the idea of a cross-domain XHR several years ago when the Mozilla team first asked us about the concept. Because there are so many other things wrong with the browser, Jeremiah and I told them that it wouldn’t change much - the browser is already so broken from a security perspective that it really didn’t matter - a sad commentary, thinking back. Of course, it really is all about the implementation.

More Intranet Cookie XSS Fun

Thursday, January 22nd, 2009

Amit Klein and I have been going back and forth for the last few days regarding my last two posts - how browsers cache requests, how that can be abused, etc. - and in the process, Amit came up with another interesting way to do the same thing without requiring any DNS rebinding whatsoever. Here’s his idea:

BTW, I can improve your attack (I think), by eliminating the need for browser restart. If www.attacker.com sets domain-wise cookie for all “.attacker.com”, and then forces navigation to say target.attacker.com (that maps, even statically, to 10.10.10.10), you have your XSS delivered.

He’s right - that would work. So really, being able to set cookies for an entire domain is a security issue in itself, since it can impact functionality on other websites that aren’t owned by the domain owner. Interesting take. Again, while this wouldn’t give you access to the user, it might allow you to change site functionality, inject XSS, insert erroneous tracking information, or do something else - whatever could be done from an unauthenticated user state.
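In HTTP terms, Amit’s domain-wide cookie would look something like this - the cookie name and payload here are illustrative, not from any real site:

```http
Set-Cookie: theme=<script>alert(1)</script>; domain=.attacker.com; path=/
```

Because of the `domain=.attacker.com` attribute, the browser happily sends that cookie along when it is later steered to target.attacker.com, even though that hostname statically resolves to the internal 10.10.10.10 address.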

Persistent Cookies and DNS Rebinding Redux

Tuesday, January 20th, 2009

In an attempt to clarify my post on the dangers associated with persistent cookies and DNS rebinding, I’d like to give a simple scenario and then describe solutions. Let’s say there is an intranet website called intranet.exploitable.com that resolves to 10.10.10.10, and there is an attacker website called www.attacker.com that resolves to 222.222.222.222. Now let’s say intranet.exploitable.com typically sets a cookie and also has a known XSS vulnerability in how it handles that cookie (known perhaps because the attacker knows what sort of open source software is used internally, or because they were once a contractor, or whatever…). Now let’s also assume that the website is not served over SSL, as most aren’t, since SSL would break the attack with a hostname mismatch error.

Okay, so the victim user visits www.attacker.com, which sets the same cookie, something like this:

Set-Cookie: last-visited=<script>alert("XSS")</script>; path=/

Then the user shuts down their browser, or the attacker forces a browser shutdown through any one of the dozens of browser DoS scripts out there. Eventually the user goes back to www.attacker.com, but this time the site has changed its DNS to point to 10.10.10.10. Because the browser was shut down, the DNS for www.attacker.com is now allowed to be rebound to the new IP address, which happens to be the IP address of intranet.exploitable.com. The user now visits that site with the XSS exploit in their cookies, and with the incorrect host header:

Host: www.attacker.com

However, because most sites don’t care about host headers, the request is still parsed by intranet.exploitable.com’s website. The XSS is now running there. While this wouldn’t allow the attacker to log into the victim’s account, it would allow them to “see” what is running on the victim’s intranet website by using an XSS shell. Although this attack may take a while, it’s not that difficult compared to a lot of other rebinding attacks.

Now in terms of mitigation, there’s a whole host of things you can do if you happen to run intranet.exploitable.com. Firstly, using SSL would stop this attack because of the SSL-to-hostname mismatch. Secondly, refusing requests with unknown host headers would stop the incorrect host header from being processed. Using client-side protections like LocalRodeo would stop the intranet from being contacted as well. Lastly, making sure that _all_ cookies are removed on each shutdown of the browser would stop the attacker from being able to re-use their cookies after having forced the victim’s browser to shut down. I hope all that is a lot clearer.
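The host header mitigation can be sketched as a simple whitelist check before the request is ever processed - the hostname list below is just the hypothetical one from the scenario:

```javascript
// Only answer requests whose Host header names a site we actually serve.
// A rebound request from www.attacker.com arrives carrying the wrong Host
// header, so it is refused before the XSS-laden cookie is ever processed.
var ALLOWED_HOSTS = ["intranet.exploitable.com", "intranet.exploitable.com:80"];

function isKnownHost(hostHeader) {
  if (!hostHeader) return false; // no Host header at all: refuse it too
  return ALLOWED_HOSTS.indexOf(hostHeader.toLowerCase()) !== -1;
}
```

Any server-side framework or front-end proxy can apply the same check; the point is simply that the default "answer anything" behavior is what makes the rebinding trick work.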

Dangers Associated With Persistent Cookies and DNS Rebinding

Monday, January 19th, 2009

Let’s talk about cookies for a moment. Cookies, for the most part, are persistent. Sure, more browsers are trying to make it easier to reset cookies, but still, only a fairly small group of users remove cookies on each browser shutdown, or with enough regularity to make a difference. Now let’s talk about web servers. Web servers, for the most part, don’t care what host header you send. If you send nothing, it’s the same as if you had sent the host name. This has never seemed like a great idea to me, but nevertheless it’s really common, except in virtual hosting environments.

Lastly, let’s talk about DNS rebinding. The browser typically pins the DNS to the same IP-to-DNS mapping for the term of the browser session (DNS rebinding exploits notwithstanding). I’m not trying to discount DNS rebinding exploits for the moment, but rather, there may be another vector here, even if DNS rebinding ever does get “solved” by being able to map the IP to the DNS for the term of the browser session.

Now, remember, cookies can outlive the browser session - that is entirely based on how they are set - unless the user intentionally removes them. So let’s take the scenario of a message board that allows cross-domain images. An attacker could submit two cross-domain images: the first pointing to http://www.attacker.com/image.php and the second to http://victim.attacker.com/somefunction.php. The first image checks whether the user has visited that domain before, and sets a cookie if not. The second domain (victim.attacker.com) sets a cookie that is designed to be used on www.victim.com.
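The message-board post in this scenario would be nothing more than two image tags - the filenames are hypothetical, mirroring the URLs above:

```html
<!-- Hypothetical forum post: two cross-domain images doing the setup work -->
<img src="http://www.attacker.com/image.php">
<img src="http://victim.attacker.com/somefunction.php">
```

Nothing about the post looks unusual to the board or to the victim; all the interesting behavior lives in what those two hosts do with cookies and, later, DNS.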

At some point down the road, the user restarts their browser. Later they go back to the same board and load the www.attacker.com image, which signals the attacker to switch the DNS for victim.attacker.com over to www.victim.com’s IP address. Although the hostname is wrong, the cookie is still sent. Since the cookie doesn’t contain any information telling the server that it was set from another subdomain, the server helplessly executes somefunction.php with the cookie values set by the attacker.

While this won’t help an attacker gain access to the user’s cookies, it will allow them to set their own, which can have all sorts of nasty side effects, depending on what those cookies control. There are some XSS attacks that can only be performed by setting a cookie that contains the payload, which is normally a non-issue since setting a cookie cross-domain is supposed to be impossible. However, this scenario would enable those attacks, unless that same cookie must also contain session information the attacker couldn’t know, since they won’t be able to read the user’s original (legitimate) cookies for www.victim.com. Of course, it’s far easier to do this kind of exploit by just waiting for a certain amount of traffic to hit your site and then switching the DNS over, but that’s just too easy of an attack. ;) Anyway, it’s just a thought!

RequestPolicy Firefox Extension

Saturday, January 17th, 2009

Over the last few days I’ve had the pleasure of corresponding with Justin Samuel, who has recently authored a new extension called RequestPolicy that has some pretty wide-reaching security implications for anyone concerned with cross-domain exploits. Here’s a snippet of our conversation:

RequestPolicy gives users full control over the cross-site requests made by their browser. It has a default deny policy and allows easy whitelisting of origins, destinations, and origins-to-destinations.

The website is here:

http://www.requestpolicy.com/

You can probably imagine the various security issues this helps with (not just CSRF, but that’s a big one). We have a security page here with some details:

http://www.requestpolicy.com/security

I see RequestPolicy as fulfilling an essential role for privacy and security in our browsers. I believe that a truly secure Firefox install is running at a minimum both RequestPolicy and NoScript. (RequestPolicy is not a competitor to NoScript, obviously, but unfortunately a large number of people immediately think this because they are unaware of threats that aren’t from scripts and objects.)

Justin has a bunch of things on his to-do wishlist, including improvements to the UI, more granular control over what gets blocked, a blacklist of subnets (similar to LocalRodeo), and so on. Of course, there are a few small issues that I ran into almost immediately, like the fact that subdomains are always allowed. That means an attacker could subvert the protection by pointing a subdomain at an RFC 1918 address (assuming LocalRodeo wasn’t installed), or at a target domain where the exploit requires no cookies to be submitted, since the wrong hostname would be sent. So perhaps for the time being a combination of LocalRodeo, NoScript and RequestPolicy is the safest bet.

It’s also fairly easy to detect that this extension is installed, and for most users it will be a very tough user experience to get used to, unless they whitelist everything. Still, it’s a very cool extension for preventing most cross-domain/cross-website client-side hacking, and I bet it will only get better with time!

HTTPOnly Fix In MSXML

Tuesday, November 11th, 2008

I’m happy to announce that Microsoft released MS08-069 today. It’s got a lot of changes in it, but one in particular I’ve been tracking for about a year now: MSXML has been changed so that HTTPOnly cookies can no longer be read by XMLHttpRequest within IE. Why is that good? It means JavaScript can no longer steal cookies that try to protect themselves. That’s a good thing.
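For reference, HTTPOnly is just a flag appended to the Set-Cookie header - the cookie name and value below are made up:

```http
Set-Cookie: SESSIONID=abc123; HttpOnly
```

A cookie flagged this way is hidden from document.cookie; the bug here was that script could still recover it from the response headers of an XMLHttpRequest, defeating the point of the flag.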

It might seem like a big deal that that was even possible, but really it’s not as bad as it sounds, which made this issue a lower priority in my mind. Cookies are rarely sent from the server to the client on every request; the Set-Cookie header typically requires some information (like a username and password) to be sent first. So XMLHttpRequest was really only useful for stealing cookies if the Set-Cookie header was sent on every request. Maybe there are some sites out there that do that, but it’s not common. Either way, I’m glad MS got around to fixing it.

Meanwhile, the other notable browser that has implemented HTTPOnly is Firefox, and I hear rumors that they too are fixing this problem, although I’m not sure of the timeframe. So good news all around for HTTPOnly - the little non-standard cookie directive that could, and one of the few practical defenses against credential theft in the face of XSS.

Lifelock Protects You from Clickjacking

Monday, November 3rd, 2008

Well, now I’ve seen everything. Just when I thought I couldn’t be amazed any more by attempts at overselling and snake oil, I get hit with this. Apparently Lifelock now purports to protect you from clickjacking. For those of you who don’t recall, Lifelock is the service that protects your identity, except for that one time when it doesn’t. But that’s neither here nor there, and water under the bridge and all that. Here’s how Lifelock protects you from clickjacking…

You log into your home firewall/router and forget to log out. Then you wind up on some compromised website and someone clickjacks you (regardless of browser - I have no idea what that Lifelock comment means, no browser has patched against it) and gets you to change your DNS to use an attacker controlled DNS server. Now every page you go to is effectively man in the middle’d. But instead of taking over every page the attacker takes over Google Adwords, since that effectively XSS’s every domain, and they can monetize their own sites in the process.

Next the attacker begins to steal your credentials to your accounts, and unfortunately you aren’t super good at using unique passwords, not that it matters since they can use forgot password and change password functions via XMLHTTPRequests and credential theft/replay. Plus since they own pretty much every webpage you go to and you rarely patch Adobe Flash, they are now listening to your microphone through a second clickjack. Now as you give up all your sensitive info on the phone with your bank, credit card companies and more they are right there listening via their version of Back Orifice for the web - because that’s what we’re really talking about here with clickjacking, isn’t it?

Anyway, next the attacker figures out where you work and begins to infiltrate using webmail. Soon they have access to most of your life, have installed malware in lieu of something you thought you were downloading over HTTP. Now, with their newly installed malware/keystroke logger they have access through your corporate VPN tunnel and they have access to all your online accounts work related or otherwise.

Then they begin to wire funds out of your account, attack your company, and use your machine as a child porn server since they can put your computer into the DMZ, having long ago compromised the firewall/router, running a brute force attack against it through their malware. Lastly, just for grins they compromise your Lifelock account, since you log into it from the same compromised machine, and they request to cancel it on your behalf.

So after the police come to your door to arrest you for proliferation of child pr0n (your wife leaving you for the same reason of course), and for the added charge of industrial espionage against your own company, and you realize that your bank account has been raided, and your identity has been stolen, at least you have someone to talk to over at the Lifelock helpline. Good luck getting your life put back together, I’m sure they’ll be very sympathetic with an incarcerated pervert who is awaiting trial and can only be reached at the federal holding facility, especially after you tried to cancel your account with them.

Yes, this is all just a wildly overdramatic scenario, but so is Lifelock’s statement. In their defense, they probably meant it only as it relates to identity theft, not at all understanding any of the other possibilities relating to clickjacking, or the hacking/security world as a whole for that matter. But isn’t that the point? If you don’t get it, you probably shouldn’t pretend you protect against it in any meaningful way. Consumers might not know the difference, but a hacker does.

More McAfee Snakeoil Ranting

Friday, October 10th, 2008

I know a lot of people are just tired of the same old PCI ASV rant that really surfaced last year, but I got an email today that I thought was worth a re-post. Mike Bailey sent this over, and I reprinted it with his permission:

I’m hoping you’re interested in this, seeing as your sites were the source of a lot of the original Hacker Safe/McAfee Secure drama. Russ McRee and I have been doing a lot of research about the certifications over the last few months and have come up with a huge amount of new material.

The main points:

* We have found new XSS exploits on McAfee’s own websites.
* We have a long list of more sites with XSS, CSRF, SQLi, RFI, and other holes that are supposedly “McAfee Secure”.
* We got a PCAP of a scan and discovered that they do indeed fuzz for XSS (there was a lot of speculation about this on the sla.ckers.org forums a while back).
* McAfee is beta-testing a meta-shopping service where one can shop on “McAfee Secure” sites to ensure that they can be trusted.
* This service is itself full of holes.
* McAfee promised to publish the standards that they use for certification several weeks ago. They haven’t, and from what I’ve heard (Russ has seen a draft), what they have is extremely broken.

I’m starting to release details on my blog (shameless plug, I know, but hear me out). The first post can be found at: http://skeptikal.org/index.php?entry=entry081009-213000

Honestly, I wouldn’t care if you reposted the details on your own site - I’m just trying to get the word out about this. I frankly think we have enough concrete evidence to cast serious doubt on their abilities as a PCI ASV, and to expose the McAfee Secure certification for what it is. I just don’t have the level of exposure that will be necessary to do so.

I’ve been talking with a few other people about it, and decided that you were the first place to go for that.

Being someone who is constantly fighting against snake oil, I’m happy to repost any rants people have about snake oil. For the record, I understand the business reasons behind going to the low cost ASVs - because that’s all the PCI requires. I just happen to think you should do a good job, even if you are going to try to undercut everyone else.

I heard a rumor from a friend of mine who tried another ASV and put up a website with exactly three links on the main page: one to an XSS vuln, one to a SQL injection vuln, and the last to a command injection vuln. The scanner didn’t find anything, even though those were the only things you could even do on the site. Completely safe. He asked me not to mention the vendor’s name, but I certainly wouldn’t stop someone else who wanted to do their own research and happened to find the same thing. That is all.

Clickjacking Details

Tuesday, October 7th, 2008

Today is the day we can finally start talking about clickjacking. This is just meant to be a quick post that you can use as a reference sheet. It is not a thorough advisory of every site/vendor/plugin that is vulnerable - there are far too many to count. Jeremiah and I got the final word today that it was fine to start talking about this, due to the clickjacking PoC against Flash that was released today (watch the video for a good demonstration), which essentially spilled the beans on several of the findings that were most concerning. Thankfully, Adobe has been working on this since we let them know, so despite the careless disclosure, much of the work to mitigate this on their end is already complete.

First of all, let me start by saying there are multiple variants of clickjacking. Some require cross-domain access, some don’t. Some overlay an entire page over another, some use iframes to get you to click on one spot. Some require JavaScript, some don’t. Some variants use CSRF to pre-load data in forms, some don’t. Clickjacking does not cover any one of these use cases, but rather all of them. That’s why we had to come up with a new term for it - like the term or not. CSRF didn’t fit the requirements, and a new term avoids the confusion. If you prefer Michal Zalewski’s term “UI redress attack”, use that one; it’s just not CSRF and shouldn’t be mistaken for any other attack, since it really is different. Here are the technical details:

Issue #1 STATUS: Unresolved. Clickjacking allows attackers to subvert clicks and send the victim’s clicks to web-pages that allow themselves to be framed with or without JavaScript. One-click submission buttons or links are the most vulnerable. It has been known since at least 2002 and has seen at least three different PoC exploits (Google Desktop MITM attack, Google Gadgets auto-add and click fraud). All major browsers appear to be affected.

Issue #1a STATUS: Unresolved. JavaScript is not required to initiate the attack, as CSS can place invisible iframes over any known target (e.g., the only link on the red-herring page). Turning off JavaScript also neuters one of the only practical web-based defenses against the attack, which is the use of frame busting code.
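A minimal sketch of that no-JavaScript variant - the URLs and dimensions here are hypothetical:

```html
<!-- Attacker page: one visible decoy link, and an invisible cross-domain
     iframe positioned exactly on top of it. The victim thinks they are
     clicking the decoy; the click lands on the framed one-click action. -->
<a href="http://red-herring.example/">Click here for free stuff</a>
<iframe src="http://victim.example/one-click-action"
        style="position:absolute; top:0; left:0; opacity:0; border:0;
               width:200px; height:30px;"></iframe>
```

Because this is pure CSS positioning, disabling JavaScript does nothing to stop it - which is exactly the point made above.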

Issue #2 STATUS: Unresolved. ActiveX controls are potentially susceptible to clickjacking if they don’t use traditional modal dialogs but instead rely on on-page prompting. This does not necessarily require cross-domain access, which means iframes/frames are not a prerequisite on an attacker-controlled page.

Issue #2a STATUS: To be fixed in Flash 10 release. All prior versions of Flash on Firefox on MacOS are particularly vulnerable to camera and microphone monitoring due to security issues allowing the object to be turned opaque or covered up. This fix relies on all users upgrading, and since Flash users are notoriously slow at upgrading, this exploit is expected to persist. Turning off microphone access in the BIOS and unplugging/removing controls to the camera are an alternative. Here is the information directly from Adobe.

Issue #2b STATUS: Unresolved. Flash security settings manager is also particularly vulnerable, allowing the attacker to turn off the security of Flash completely. This includes camera/microphone access as well as cross domain access. Resolved using frame busting code, bug #4 below notwithstanding. However, as pointed out elsewhere, it is possible to directly frame the SWF file example here and here.

Issue #2c STATUS: Fixed in Flash 10 release. All versions of Flash on IE 7.0 and IE 8.0 could be overlaid by opaque div tags. Using an onmousedown event handler, the click still registers on the object as long as the divs are removed by the onmousedown handler function. Demo here of stealing access to the microphone.

Issue #3 STATUS: To be fixed in the final release candidate. Flash on IE8.0 Beta is persistent across domains (think “ghost in the browser”). This would be a much worse vulnerability except for the fact that it is beta and almost no one is using it.

Issue #4 STATUS: To be fixed in the final release candidate. Framebusting code does not appear to work well on some sites on IE8.0 Beta. Instead it is marked as a popup which is blocked by the browser - disallowing the frame busting code from executing.

Issue #5 STATUS: Unresolved. State of clicks on other domains can be monitored with JavaScript (works best in Internet Explorer but other browsers are vulnerable too) which is cross domain leakage and can allow for more complex multi-click attacks. For example a page that has a check box and a submit button could be subverted upon two successful clickjacks. Additionally, this can make the attack completely seamless to a user by surrendering control of the mouse back to the user once the attack has completed.

Issue #6 STATUS: Unresolved. “Unlikely” XSS vulnerabilities that require onmouseover or onmousedown events on other parts of pages on other domains are suddenly more likely. For example, if a webpage has an XSS vulnerability where the only successful attacks are things like onmouseover or onmousedown on unlikely parts of the page, an attacker can promote those exploits by framing them and placing the mouse cursor directly above the target XSS area. Therefore, otherwise uninteresting or unlikely XSS exploits can be made more dangerous.

Issue #7 STATUS: Fixed in current releases post 1.8.1.9. The functionality in Firefox’s NoScript plugin to forbid iframes can be subverted by iframing a page that contains a cross-domain frame or, as Ronald found, by using object tags. Giorgio Maone validated the issues and issued patches in subsequent releases of the code, as well as other potential clickjacking mitigations. 1.8.1.6, 1.8.1.7, 1.8.1.8, 1.8.1.9, 1.8.2 and 1.8.2.1 all contain anti-clickjacking code. All versions prior to 1.8.1.9 were vulnerable to cross-domain clickjacking.

Issue #8 STATUS: Unresolved. Attempts to protect against CSRF using nonces can often be overcome by clickjacking, as long as the URL of the page that contains the link with the nonce is known. E.g., the Google Gadgets exploit discussed in the Blackhat “Xploiting Google Gadgets” speech. The only semi-decent defense against this is to emit the nonces only in JavaScript and include the frame busting code in the same JavaScript. This will break for users who do not use JavaScript, though, so it is not an ideal solution.

From an attacker’s perspective the most important things are that a) they know where to click and b) they know the URL of the page they want you to click, in the case of cross-domain access. If either of these two requirements isn’t met, the attack falls down. Frame busting code is the best defense if you run web servers - if it works (and in our tests it doesn’t always work). I should note some people have mentioned security=restricted as a way to break frame busting code, and that is true, although it also stops cookies from being sent, which would break any significant attack against most sites that check credentials.
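For reference, the frame busting code referred to throughout is typically a tiny script like this - a common idiom of the era, not our exact test harness:

```html
<!-- If this page finds itself inside someone else's frame,
     replace the top-level document with itself, escaping the frame. -->
<script>
  if (top !== self) {
    top.location = self.location;
  }
</script>
```

As noted above, this is exactly the defense that fails when JavaScript is off, when IE 8.0 Beta misclassifies the navigation as a popup, or when the framing page sets security=restricted.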

Thanks to Adobe, Microsoft, Firefox, Apple and the various other vendors and people who have been helpful/supportive and care to fix the issue. Also thanks to the researchers who found these issues independently after Jeremiah and I were unable to do our speech, but kept it to themselves (Arshan Dabirsiaghi, Jerry Hoff, Eduardo Vela, Matthew Matracci, and Liu Die Yu). The clickjacking overview whitepaper has been released here. Source to generic clickjacking code available here. I will keep this post up to date with additional issues and updates as I am aware of them.

Redirection Report

Wednesday, July 16th, 2008

Brian Krebs had an interesting report over at the Washington Post citing a report from Indiana.edu about how abundant open redirects are. Well, anyone who has worked in this field for any length of time knows that perfectly well, but it’s still interesting to get some validation from the researchers at Indiana.edu who specialize in anti-phishing research. Here’s the rub from Brian’s article:

Indeed, some of the Internet’s biggest Web sites — particularly Google — used to host large numbers of open redirects.

“Used to”? I know I’ve laid it on thick over the last few years, but I’m amazed people still think Google has somehow magically fixed problems that it never got around to fixing. Redirects are not fixed; XSS is not fixed. These issues still exist all over Google and Google’s web properties. In case someone doesn’t believe me, here’s an example I whipped up in about 10 seconds that redirects to a random eBay auction from Google’s image server, as a for instance (make sure you enable JS for the full effect).

It’s good to see people are finally understanding this in the mainstream media, but let’s not give credit to companies that are clearly undeserving of it (both historically and currently). I’ll be the first one to stand up and applaud when we see these issues closed once and for all on Google, even if it truly is just one company out of the vast untold wealth of vulnerable sites out there. But if it really is aiding phishers - and it is - the only way we are going to get ahead of it is by taking responsibility for our own sites. That’s especially true if we intend to be the be-all end-all of trustworthy advertising giants, as Google aims to be.