web application security lab

Using Cookies For Selective DoS

August 22nd, 2010

29 posts left…

One of the things Josh Sokol and I talked about in our presentation at Blackhat was a way to use over-sized cookies to cause a DoS on a site. The web server sees the overlong cookie and refuses to complete the request. This is not new and has certainly been discussed before. However, one thing that wasn’t discussed is that by scoping the cookie to a path, an attacker can selectively cause the website to stop serving portions of itself. For instance, if the attacker wants to shut down /javascript/ or /logout.aspx or /reportabuse.aspx or whatever, they can do so by setting an overly long cookie for that particular path.

Setting cookies on the target subdomain would require something like header injection/response splitting, XSS, or a MitM attack. It should be noted, though, that it doesn’t have to be on the target subdomain - it can be an exploit in another subdomain, because cookies don’t follow the same origin policy when the cookie is scoped to the parent domain. In this way an attacker could turn off Clickjacking prevention code (deframing scripts), or turn off other client-side protections or parts of the site that are inconvenient from an attacker’s perspective. The only real solution is for all browsers to cap the maximum cookie size below the smallest limit that web servers will accept (Apache’s default limit is smaller than IIS’s, for instance).
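
To make the selective part concrete, here is a minimal Python sketch that simulates what the victim’s browser ends up sending once the path-scoped cookie has been planted. The target URL and the roughly 8 KB header limit (Apache’s default LimitRequestFieldSize is 8190 bytes) are assumptions, not details from the post above.

```python
# Sketch: replay what a victim's browser would send after an attacker plants an
# oversized cookie scoped only to /logout.aspx (hypothetical target site).
import requests

oversized = "junk=" + "A" * 9000  # larger than the assumed ~8 KB header limit

for path in ("/", "/logout.aspx"):
    # Only the poisoned path carries the huge cookie, so only it gets rejected.
    cookie = oversized if path == "/logout.aspx" else "session=abc123"
    r = requests.get("https://example.com" + path, headers={"Cookie": cookie})
    print(path, r.status_code)  # expect 200 for "/", 400 for "/logout.aspx"
```

The point of the per-path scoping is exactly this asymmetry: the rest of the site keeps working while the one path the attacker cares about starts returning errors.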

Detection of Parameter Pollution

August 21st, 2010

30 posts left…

There are a lot of web-based exploits that can be really tricky to spot from a WAF’s perspective. Multiple encoding issues, obfuscation and the like… Well, one attack in particular I think is actually pretty easy to detect programmatically (in most cases). In the case of HTTP Parameter Pollution the attacker has to double up on the parameters - something like: ?a=1&b=2&a=3. If the WAF sees the same parameter (in this case “a”) supplied twice, it’s pretty easy to conclude that either something is broken or it’s an attack. Either way, it’s worth reporting, and possibly even blocking if you know your site isn’t built like this.

Of course the normal caveats for non-standard parameter delimiters apply (in a perfect world the WAF would be taught to understand those delimiters too). Not to mention that just last week I saw a site that did Parameter Pollution on itself because of shoddy programming (and probably a lot of cutting and pasting by the developer). There could also be cases where some parameters come in on the URL and others as POST parameters, so that would need to be taken into account for systems that don’t care and accept it all as one big pool of parameters. Lastly, I doubt many attackers are actually using Parameter Pollution (yet), but it should be easy enough to catch in most cases.
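
For the simple case - standard & and = delimiters, query string only - the duplicate check is only a few lines. A minimal sketch in Python (the merging of GET and POST parameters mentioned above is left out):

```python
# Sketch of the duplicate-parameter check a WAF could run per request.
# parse_qsl keeps every occurrence of a name, so duplicates are easy to count.
from urllib.parse import urlsplit, parse_qsl
from collections import Counter

def polluted_params(url):
    """Return parameter names that appear more than once in the query string."""
    names = Counter(name for name, _ in
                    parse_qsl(urlsplit(url).query, keep_blank_values=True))
    return [name for name, count in names.items() if count > 1]

print(polluted_params("https://example.com/page?a=1&b=2&a=3"))  # ['a']
```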

Detecting some forms of MITM attacks

August 20th, 2010

31 posts left…

There are quite a few different methods of performing MITM attacks, but one in particular struck my fancy early on when I was thinking about airpwn. In the case of airpwn and similar exploits the attacker may be able to listen to the packets being transmitted but can’t block them, so instead it comes down to a race to beat the legitimate packets back to their destination. I don’t know how prevalent any sort of MITM attack is in practice, but in this case there are a few things you could do to detect it.

Anyway, if you receive duplicate DNS replies, or duplicate ACK responses for instance, that could indicate that someone is trying to out-race another packet, which is why you end up with two. Of course, figuring out which one is real isn’t straightforward (the bad guy may have just been slow, in which case the first one is the real one). And there may be other things the bad guy can do, like immediately sending a RST to the server you’re trying to connect to in order to quash the duplicate ACK, so this approach has its limits.
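
As a sketch of what the DNS half of that detection could look like, here is a small Python/scapy snippet that flags conflicting answers arriving for the same resolver and transaction ID. The library choice and the whole approach are my assumptions - it’s a heuristic, not a complete detector.

```python
# Sketch: flag DNS answers that arrive twice, with different contents, for the
# same resolver and transaction ID - a possible sign of someone racing the
# legitimate reply (airpwn-style). Requires scapy and capture privileges.
from scapy.all import sniff, DNS, IP

seen = {}  # (resolver IP, DNS transaction ID) -> summary of the first answer

def check_reply(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(DNS) and pkt[DNS].qr == 1:
        key = (pkt[IP].src, pkt[DNS].id)
        summary = pkt[DNS].summary()
        if key in seen and seen[key] != summary:
            print("Conflicting duplicate DNS reply for", key)
        seen.setdefault(key, summary)

sniff(filter="udp port 53", prn=check_reply, store=False)
```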

Perhaps someone could think of another ingenious way to use that information, or come up with other clever detection methods based on something similar for the other classes of MITM (like acting as a proxy, or re-routing traffic, etc…). I’m sure someone somewhere has already thought about and posted on this concept, but I wasn’t able to find anything in a cursory search. Maybe it’s new, maybe not, but I still thought it was interesting, even if limited.

Quick Proxy Detection

August 20th, 2010

32 posts left…

Just a quick post on how, in Firefox, you can detect proxies using image tags. Firefox (and possibly other browsers, but I first saw it in Firefox) accepts the bracketed [ ] notation meant for IPv6 literals (I believe that was its original intention anyway), but it also works with IPv4 addresses.

Something as simple as http://[123.123.123.123]/img.jpg?unique_id embedded into a page could be used to see if the user is behind a proxy, which - at least in the case of Apache’s proxy, as far as I’ve seen - doesn’t understand that syntax and therefore won’t fetch the image. This does give false positives with anything that blocks cross-domain requests, and with robots that try to stay on the same domain. Anyway, this might be helpful to someone.
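
A small Python sketch of how the probe could be generated server-side, with a per-visitor token to correlate against the image server’s logs. The IP address is the same placeholder as above, and the 1x1 sizing is just my choice:

```python
# Sketch: emit the bracketed-IP probe with a unique token per visitor. If the
# token never shows up in the image server's access log, the visitor likely sat
# behind a proxy that choked on the [ ] syntax (or a cross-domain filter).
import uuid

def proxy_probe_tag(server_ip="123.123.123.123"):
    token = uuid.uuid4().hex  # later, grep the access log for this token
    return f'<img src="http://[{server_ip}]/img.jpg?{token}" width="1" height="1">'

print(proxy_probe_tag())
```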

The Chilling Effect

August 20th, 2010

As I wind down to 33 posts left until my 1000th and last post, I thought I should spend a little time talking more introspectively about how our community has changed over the years.

When I got started in security I had around the 130th hacker website on earth. We were all linked together with the second webring ever made (for those of you who recall webrings), which is how I know. Incidentally, webring was built by a guy in his basement as a college experiment; Bronc Buster got in touch with him, which is why we were the second. It was called the Fringe of the Web. Back then sharing knowledge was hard to do. Search engines as we know them didn’t exist (DMOZ was really it). No one really trusted one another. No one really knew much, because there weren’t many help files or docs being published back then either. I think a lot of people felt there was a strong possibility they’d land themselves in jail if they were too outspoken about security. To get any better you had to do the research yourself, because there weren’t many people around to help (at least in my case there weren’t). That was especially true for me, because what I was interested in wasn’t being a good sys-admin or network guy, and all the docs were about operating system security, firewalls and memory corruption. People were pretty unhelpful, with a lot of RTFM even though the manuals hadn’t been written yet. Installing Debian on my Gateway2000 with my crapola Mitsumi CD-ROM, for which no drivers had been written yet, was my burden alone to figure out. I was interested instead in this whole newfangled web thing - which almost no one knew anything about. Defacements were the norm - cybercrime was a myth reserved for wild-eyed paranoids and movies. Let’s call this the dark ages of computer security.

Later the industry dramatically expanded, and instead of there being just north of a hundred sites talking about security, suddenly there were security-related articles and blogs in the mainstream press. There were tens of thousands of sites talking about it. More new code and ideas were being passed around than ever before. No one really feared jail time anymore, which had been the only major consequence of publishing code that anyone could come up with. Enter script kiddies and sites devoted to helping people learn about computer security. Cybercrime was just taking off, and everyone realized this was turning into a business. Companies started building out security organizations, we got cool titles like CISO and CSO, and we even got our own certifications. We finally had use cases and anecdotes for everything we had been talking about for all those years. Linux started being sold on commercial desktops. It was the heyday of computer security. Let’s call this the enlightenment.

In the dark ages of computer security no one released code because they feared jail. In the enlightenment everyone released vulns because they wanted to make a name for themselves and prove their skill. So where does that leave us today? Let’s take an example of a hypothetical young web application and browser security guy (think me but just starting out) with no background or history in the industry. We’ll call him “Todd.”

Let’s say Todd releases a browser vuln that is useful against a good chunk of browsers, but it’s an architectural flaw and one that won’t be fixed for many years to come, because fixing it would break other things. It’s not a desktop-compromise type of issue; it just allows attackers to harm most websites in some obscure way (think the next version of CSRF or XSS or Clickjacking or whatever). Todd, not knowing what to do or who to talk to, releases the vuln to make a name for himself and to help close down the hole, because he thinks that’s the right thing to do. Here are some possibilities:

  • The Vendor is pissed at Todd for releasing the vuln and not telling them first - especially since there’s no fix. You evil vulnerability pimp you!
  • The press asks the simple question, “Why did you release this when you knew there was no fix?” to which Todd has no good answer except he thought he was doing the right thing by letting people know - and then the press mis-quotes him.
  • The blackhat community is pissed because they have been using something similar (or not) but either way they know this cool trick has a limited lifespan now thanks to Todd. More importantly they’ll try to hack Todd for releasing it. There will be much fist shaking and cursing of Todd’s name the day the vuln gets closed too.
  • The elite crowd are annoyed because they don’t think Todd should have gotten any publicity. The elite kernel level bug is way sexier (and it may very well be) and takes more skill (quite possible as well), but Todd knows nothing about the politics of the industry - he’s just interested in his stuff. They may try to hack and drop Todd’s docs to shut him up. There’s only so much limelight to go around, after all. Incidentally, I don’t think most guys who work on these types of vulns are like this, but it only takes a few to deter someone new like Todd.
  • There’s a slim chance someone might offer him a 9-5 job - as long as the vendor isn’t one of their clients.

Now let’s take the flip side - what if he wants to sell it:

  • The vendor won’t pay for an architectural bug - only full machine compromises please!
  • The blackhats won’t pay for it, because it doesn’t give them a shell.

So where does that leave Todd? It’s not in his best interest to release the vuln, because of all the negative pressure, and no one is buying either. How does Todd make a name for himself? More importantly, how does he survive? Why on earth would Todd give up his vuln for free? He knows he could do some major damage with it, but the elite aren’t impressed, so he doesn’t even get clout. Perhaps there’s a slim chance the vendor might hire him in gratitude? That’s a long shot, and a waste of a great find for the chance at a 9-5 in the boiler room. Instead, why wouldn’t Todd say screw it entirely and either stop doing the research and find something else to do, or go bad and make some real cash? The chilling effect is in full swing. We are quite squarely headed towards another information security dark age. Sure, there are still a lot of good documents (if dated) on the web. But the bulk of advisories are from vendors these days, so you’ll still be up on yesterday’s news, and patch management will be your life. Private conversations will always continue, but it won’t ever be like the enlightenment again unless something changes. I spoke with two large vendors about this, and they acknowledged their part in it and that indeed they have no good solution for someone like Todd who hasn’t already established himself - except the vague hope of some consulting arrangement.

I spoke with one guy who buys vulns and I asked him, out of curiosity, who his buyers were. I was expecting him to name some large software vendors, but he said, “No, no, not at all. Most of my buyers are consulting companies.” I was confused. It turns out there is a slew of consulting companies that will come up empty on a pen-test with a client, but since they can’t show the client that they found nothing, they’ll whip out a ready-made 0day, impress the client, and then go on the speaking circuit about their amazing find. Call me naive, but it never even occurred to me that this industry could be that messed up. If you see someone speaking at a conference about some memory corruption flaw but they can’t seem to explain their own vuln the way you’d expect them to - you may have found one of these consultants.

I think this is important because, as my tenure in the blogging world comes to a close, I feel like there are a lot of very talented people who will never get their day in the sun, and as an unfortunate consequence of this vulnerability market some talentless people will. I know several people who have completely packed up and decided to get out of the industry entirely because of how things are shaping up. I fear that the way things are headed, it will be harder and harder for someone to rise to the top without retribution from their peers. There is a whole new generation of people lining up to replace guys like me who are joining a very corrupt and preservationist industry. They may not have thick skin and may not survive what is in store for them. I’ve talked to over a dozen security folks who tell me the same story. They worry more about the security community’s reaction to anything they say publicly than they do about actual bad guys committing crimes. Is it too late to fix, or is it even worth fixing? Or would you argue that this is the best it’s ever been? I’d be curious to hear what people think.

Hill-Billies: A Case Study

August 16th, 2010

34 posts until the end… Oh, and happy Monday. It’s time for a little story.

Once upon a time there were some hill-billies living in the deep south. They had virtually nothing. They made their moonshine and lived the most meager of lifestyles. They were in deep poverty. They made do with their hooch and their stories. They worked hard - 8 hours per day at the local sweatshop - but they were happy enough. Then one day, an advocate for a minimum wage increase saw the conditions the hill-billies were living in and how they were living their lives. It made the advocate angry, and they went off to fight the local sweatshop to increase the workers’ wages. The advocate wanted to make sweeping changes and would use the hill-billies as a case study on how much a little extra money can improve someone’s standard of living, to further the advocate’s cause.

Eventually, after intense scrutiny, the sweatshop realized that it had indeed been paying too little for any decent standard of living and decided to give all its minimum wage workers a raise, which included our friends the hill-billies. So now you’re thinking to yourself, the hill-billies got a home loan, or used the money to pay for school, or did something else productive, right? No… what happened was that the hill-billies had always been happy with what they had, and the raise allowed them to work less and make the same amount. They continued to make their moonshine and lived happily within their means…

The moral of the story is that about a year ago I reached an inflection point in my career of 15 years in security. I realized that with every major innovation the security community comes up with, the general public and vendors alike figure out a way to abuse that innovation or work around it to do what they originally wanted to do again (think firewalls and tunneling over port 80). It feels like we’ve been battling to protect people, but the people don’t want to be protected if it means changing. They’re happy with the status quo. Of course, there’s always fear of the unknown, and fear of insecurity is a key driver of spending (think anti-virus). One thing’s for sure though, you can’t change the nature of the hill-billies, so why are we trying? Our only path to success is empowering people to do what they want, without getting in the way. The words “No” and “Can’t” have to leave our vocabulary when it comes to what consumers and developers and companies want to do. Now, the trick is: how do we build security that no one notices is there?

Removing Entropy From PHP Session IDs

August 12th, 2010

35 posts remaining…

Samy is awesome. If you missed his preso at Blackhat and DefCon, you missed out. You should try to get the DVD just to hear him. It’s hilarious. I’m not just saying that because he was using me as a fake case study or anything, it really was hilarious. Anyway, we got to talking, and it occurred to me that it wasn’t super easy to automate his PHP session ID attack, because it requires some social engineering to get the IP address of the user you want to hijack. Well, after mulling it over, I think I came up with a way around that in some cases.

There are a ton of sites these days that use load balancers in front of them. There are a few ways they can be installed - completely transparent, or acting more like a proxy. The proxy setup is the more common one, but it has one pretty huge side effect: all the client IP addresses arrive at the server as just one - the internal IP of the load balancer. Normally that’s not a huge deal, because the load balancer does the logging or sets some custom HTTP header that is properly logged. But PHP doesn’t know about any of that - it’s dumb. It takes whatever value it sees as the IP address and feeds it into the session ID algorithm. So instead of having to guess the entire IP space of the Internet, you just have to guess RFC1918 space - and realistically a much smaller slice of that in most cases.
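
A back-of-the-envelope sketch in Python of how much that shrinks the client-IP component of the search space. The /24 guess for the balancer’s internal network is my assumption, and the actual session ID derivation is Samy’s work, not reproduced here:

```python
# Compare the size of the IP guessing space with and without the proxy-style
# load balancer sitting in front of PHP.
import ipaddress

full_ipv4 = 2 ** 32
rfc1918 = sum(ipaddress.ip_network(n).num_addresses
              for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"))
likely_lan = ipaddress.ip_network("10.0.0.0/24").num_addresses  # typical guess

print(f"all of IPv4:    {full_ipv4:>13,}")
print(f"all of RFC1918: {rfc1918:>13,}")
print(f"one likely /24: {likely_lan:>13,}")
```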

Although that setup is pretty common, there is still one drawback. For Samy’s exploit to work you need to know when someone logged in (down to the second, preferably) to remove enough entropy to make it worthwhile to attack. So this still isn’t easily turned into an automated exploit, but we’re slowly but surely getting there.

Aero Theme and Generic Semi-Transparency Info Leakages

August 10th, 2010

36 posts left, and counting…

If I had more time on my hands this would have been a fun one to play with. Although OWASP has dropped information leakage from its Top 10 list, it’s still a fun puzzle to put together if you can gather tidbits of info. Johnny Long specializes in piecing together small, seemingly inconsequential pieces of info against a target. I wish I could find the paper on it, but many years ago there was a paper describing a technique to unblur text in an image. The basic technique, if memory serves, was that you could take each character of the font in question, blur that character, and see what it ended up looking like. By comparing the blur you just created with the blur in the image, you could figure out each character.

When Vista came out it shipped with a default theme called Aero, which makes windows semi-transparent. The semi-transparency uses both an overlay of a dithered color scheme and a blur. The dither may be the harder of the two to overcome, because it’s based on the width of the window itself and it changes depending on whether the window has focus. The blur, however, is probably the easiest. Windows uses a default font for most applications, so it should be fairly easy to de-obfuscate text showing through screenshots of the Aero theme.
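
A rough Python/Pillow sketch of that blur-and-compare loop. The font file, glyph box, and blur radius are all assumptions here (a real attempt would have to estimate them from the screenshot), and the sample crop has to match the glyph dimensions:

```python
# Sketch: render each candidate character in the assumed UI font, blur it the
# same way the screenshot appears blurred, and pick the closest match.
from PIL import Image, ImageDraw, ImageFont, ImageFilter, ImageChops
import string

FONT = ImageFont.truetype("segoeui.ttf", 14)  # assumed Windows UI font
RADIUS = 2                                    # assumed blur radius

def blurred_glyph(ch):
    img = Image.new("L", (20, 20), 255)                    # white 20x20 tile
    ImageDraw.Draw(img).text((2, 2), ch, font=FONT, fill=0)
    return img.filter(ImageFilter.GaussianBlur(RADIUS))

def best_match(blurred_sample):
    """blurred_sample: a 20x20 grayscale crop of one blurred character."""
    def diff(ch):
        delta = ImageChops.difference(blurred_sample, blurred_glyph(ch))
        return sum(delta.getdata())
    return min(string.ascii_letters + string.digits, key=diff)
```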

There are obviously problems with this - the first being that it’s not the whole window that’s transparent, only a slice at the top, though I’ve still found some vaguely interesting things through the Aero transparency that were definitely not meant to be in scope of the screenshot. The second problem is that this only really helps if whatever is behind the screenshot is actually of interest. But let’s assume those conditions are met. The nice thing is that the kind of people who tend to post screenshots are experts in their field. They’re often public speakers, analysts, or people giving instructions on how to use something. So there could be quite a bit of sensitive information in those screenshots. I only spent an hour or so trolling through screenshots one day and found a few vaguely interesting pieces of info.

I have sat on this concept for a few years, hoping someone would come out with it first, but I haven’t seen anything written on it, so here it is. Either way, it’s probably a minor issue in reality, but I recommend turning off Aero and all transparency when possible - especially if you’re like me and have to give a lot of presentations that include screenshots of desktop applications (e.g. browsers).

Petabytes On the Cheap

July 21st, 2010

37 posts remaining…

With all the talk about cloud computing I thought it would be interesting to post this article. It turns out you can build a single chassis that holds around 67 terabytes for $7,867. That’s pretty incredible, and most interestingly, if you follow the link, you’ll see the cost breakdown compared with the alternatives, which it pretty much blows away. With those savings it almost doesn’t make cost sense to outsource your storage to the cloud. It really can be cheaper to bring it in house.

Now there are some downsides, and they primarily have to do with high availability. There’s a good article explaining some of the potential drawbacks, although id told me, “port multiplier doesn’t matter there, even 2-1 oversubscribed they are fine for doing what they are meant to,” so take the criticism with a grain of salt and do your own fact checking. Either way, the cost savings are so dramatic that this could be an evolutionary step, and I bet things will get a lot more solid down the road to alleviate the availability issues. So it might be premature to jump into this kind of storage for those massive databases you’re supporting, but given a little time and increased density, I bet this technology makes a huge difference in cost down the road.

As a side note, for those of you who have been around for a while, I did some quick math - it would take just north of 46.5 million floppies to equal that one 4U box. Also, as a fun fact, most smartphones these days are faster than the machine we started ha.ckers.org on. Amazing how times have changed!
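
A quick back-of-the-envelope check of that floppy figure, assuming decimal terabytes and 1.44 MB floppies:

```python
# 67 TB expressed in 1.44 MB floppy disks.
floppies = (67 * 10**12) / (1.44 * 10**6)
print(f"{floppies:,.0f} floppies")  # ~46,527,778 -> just north of 46.5 million
```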

Some Possible Insights into Geo-Economics of Security

July 21st, 2010

38 more posts left…

I first started thinking about this when I talked to a friend from Vietnam a year or so ago regarding his CISSP. Once upon a time it was nearly impossible to find someone in Vietnam with a CISSP. At first I thought he was making some sort of joke about the usefulness of the certificate, but for some things in Vietnam it really is a hot commodity. It turns out that the cost of living there makes a CISSP almost not worth it: the fee is expensive even in the United States (where I live), but relative to wages in Vietnam it represents weeks or even a month’s worth of work. So certificates get awarded at a lower rate there, not because of any lack of skill or know-how; it’s purely economics. Slowly that has changed and more people in Vietnam have it now than before, but from what I was told it’s still not equal, as a percentage, to the USA, for instance.

That got me thinking about other issues that work much the same way. For instance, SSL/TLS certificates. Buying a certificate to enable transport security is a good idea if you’re worried about man-in-the-middle attacks. Yes, that’s true even despite what I’m going to tell you in my Blackhat presentation, where Josh Sokol and I will be discussing 24 different issues of varying severity with plugins and browsers in general. But when you’re in a country where the cost of running your website is already a significant investment compared to the United States, suddenly the certificate fees are totally out of proportion to the risk. That may be why you see lower certificate adoption rates in certain regions. More importantly, there really is no long-term reason the security industry can’t create a free certificate authority (over DNSSEC, for instance) that provides the same security or more, without the costs - thereby making it a more level playing field.

Lastly I started thinking about bug bounties, which work almost in reverse. Unlike defense, where the cost of entry is high, attacking can be much more lucrative depending on your geo-economic situation. For instance, a $3,000 bug bounty for something that takes two weeks to work on equates to a $78k-a-year job if you can be consistent. In the United States that’s barely worth a skilled researcher’s time. But in a country where the average income is closer to $10k a year, something like this might highly incentivize researchers to focus on attack versus defense, which few can afford. Anyway, I thought it was an interesting concept that may play out entirely differently in reality, but it was a fun thought exercise.