web application security lab

DNS Rebinding In Java Is Back

October 20th, 2010

9 posts remaining…

Stefano Di Paola has an interesting article about DNS Rebinding in Java. Apparently he’s found a way to bring back some of the older exploits that were supposedly fixed in Java back in the 2007-2008 timeframe. Really cool read. Halfway through reading it I realized that this would enable exploits like the one where sites often have localhost.whatever.com tied back to 127.0.0.1. The old exploit worked in that if you could ever find an XSS in a local service you could set cookies for the whatever.com domain, or read any cookies that were set for the entire domain. It’s a nasty exploit, but rare because there don’t tend to be a lot of local services installed on desktop computers that are vulnerable to XSS by default.
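For the curious, here’s a minimal Python sketch (with whatever.com standing in as a hypothetical domain) of the precondition that makes this exploit tick - a public hostname whose A record answers with a loopback address:

```python
import ipaddress
import socket

def resolves_to_loopback(hostname: str) -> bool:
    """True if any A record for the hostname points back at the local machine."""
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return False  # name doesn't resolve at all
    return any(ipaddress.ip_address(a).is_loopback for a in addresses)

# A page served from a record like localhost.whatever.com that answers with
# 127.0.0.1 runs in a *.whatever.com context, so an XSS in a local service
# reachable through it can emit "Set-Cookie: session=x; Domain=.whatever.com"
# and have the cookie honored across the whole domain.
print(resolves_to_loopback("localhost"))
```

Point being, the dangerous part isn’t the record itself - it’s that anything listening on 127.0.0.1 inherits the parent domain’s cookie scope.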

Then I kept reading and he enumerates that exact use case - great minds think alike! Anyway, this apparently will be fixed in a future update, but now that we’ve seen DNS rebinding hit Java twice, I think Java needs a much more critical eye. Things like this shouldn’t be sitting around for years before they’re noticed. Like inter-protocol exploitation, this research needs a lot more eyes. Great work by Stefano!

Least Common Denominator

October 20th, 2010

10 posts left…

While at Bluehat, Jeremiah got a question from someone (I believe he worked at Opera) saying that even something as simple as turning off third-party cookies will break things like Yandex. Jer had an amusing response, which was, “What’s that?” followed by, “So you’re telling me I need to be less secure because someone else wants to go to a site that I’ve never heard of?” I was laughing too hard to hear whether the guy had a useful retort or not. But I doubt the guy in the audience was prepared for this argument. Now some people would argue that no, it’s your own responsibility to secure your browser as much as you need it to be. It’s always been my take that if you let people have something insecure, it’s never going to get any more secure than it is that day (for the vast majority of users), because of the least common denominator and the fact that web developers are going to use as much of that functionality as they can - forcing me to use JavaScript to log into my bank and such.

Normal users want a subset of what the browser is capable of, but even more usability than what a browser comes with by default. If they can tie their browser in with Twitter, make it auto-log-in to every account they have and pipe in music from iTunes all at once, that’s a good day. Security people, for the most part, want a different subset of the browser, and want very few of the usability improvements that browsers are adding in. Unfortunately, we are also stuck with whatever everyone else wants, because we do have to use the same sites. And the worst part is the browsers weren’t designed with guys like Jeremiah in mind - they were designed for people who had never used a computer before. As such the browsers are building on legacy software that needs to support other legacy software atop a very flexible architecture, making it harder and harder to secure over time.

As such, yes, Jeremiah is absolutely forced to have a less secure browsing experience because of Yandex and the thousand other edge cases that we have been unable to break for fear of backlash. This includes breaking requests to localhost because of Google Desktop. This includes breaking cross-zone RFC1918 requests because of legacy banking apps. All kinds of dumb things that should never have been built that way are causing us to be less secure, and until we’re willing to break the web (like with the CSS history hack fix that Mozilla championed) we’re going to be stuck with the least common denominator problem. I wish I had the answer, but I don’t.
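For anyone unfamiliar with the RFC1918 case, here’s a rough Python sketch of the kind of check a browser or proxy could apply before letting a public page talk to an internal address - the function name and policy here are mine, not anything any browser actually ships:

```python
import ipaddress

# The three RFC 1918 private ranges, spelled out explicitly rather than
# leaning on is_private, which also covers several other reserved blocks.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def crosses_into_private_zone(addr: str) -> bool:
    """True if a public page is trying to reach an RFC 1918 or loopback address."""
    ip = ipaddress.ip_address(addr)
    return ip.is_loopback or any(ip in net for net in RFC1918)

for candidate in ("10.0.0.1", "172.16.5.4", "192.168.1.1", "127.0.0.1", "8.8.8.8"):
    print(candidate, crosses_into_private_zone(candidate))
```

The check itself is trivial - the hard part, as the post argues, is that flipping it on breaks the legacy intranet apps that were built to be reachable this way.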

Performance Primitives

October 20th, 2010

11 more posts left…

While I was out at Bluehat I ended up having some good meetings between Intel, Mozilla and Adobe. How are these companies related, you may ask? Well all of them care about performance. A year or so ago I was hanging out with the Intel guys and they informed me that they have a series of low level performance primitives that they surface through APIs. At the time I wasn’t quite sure what to make of it. Security and performance aren’t natural bedfellows - or at least I didn’t think so at the time.

I got to talking with both Microsoft and Mozilla last week about the need for default ad-blocking software built into the browser. Jeremiah thinks it should be opt-out and I think it should be opt-in, but either way, I think we’re coming to a consensus that it should be automatically part of the browser in some form. Mozilla was the first to give me a real reason it may be a problem other than it hurting Google, who is their biggest sponsor. The reason is performance. Adblock Plus, as an example, uses partial-string regex matching, which is a performance hog. To put that in the browser by default would really make people’s experience suffer. Then it occurred to me that I had had a conversation about performance with Intel a year before. The answer, my friends, lies in primitives.
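To make the performance argument concrete, here’s a toy Python benchmark - the rule shapes and counts are invented and have nothing to do with Adblock Plus’s actual internals - showing how per-request matching cost grows with the size of a filter list. This is exactly the kind of workload a hardware regex primitive could absorb:

```python
import re
import time

# Toy blocklist and traffic; real filter lists run to tens of thousands of rules.
rules = [f"ads{i}.example/banner" for i in range(5000)]
urls = [f"http://site{i}.example/page" for i in range(200)]

# Naive approach: every URL is scanned against every rule, so cost scales
# with rules x requests.
start = time.perf_counter()
naive_hits = sum(1 for u in urls for r in rules if r in u)
naive_time = time.perf_counter() - start

# Single compiled alternation: one pattern per URL, at the cost of a huge regex.
pattern = re.compile("|".join(map(re.escape, rules)))
start = time.perf_counter()
regex_hits = sum(1 for u in urls if pattern.search(u))
regex_time = time.perf_counter() - start

print(f"naive scan: {naive_time:.4f}s  compiled alternation: {regex_time:.4f}s")
```

Neither strategy is free in software - both burn CPU on every single page load, which is why doing the matching in silicon is attractive.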

Currently Intel supports a subset of basic math functions and Perl’s version of regex. Well, in a future version the chips could support things like the JavaScript version of regex, and other primitives involved in decision making and image/vector rendering and so on that are used within the browser. Adobe is in the same boat - although probably a different subset of primitives would be desirable. Then the idea sprang up to use these primitives within Visual Studio itself to get more generic/native improvements to performance without developers having to know anything about the chip. Intel doesn’t tend to market these concepts very well, despite how interesting they could be, but only a few people have to know to make a big difference.

So now the real question isn’t whether these companies will pick up on this technology now that they know about it - that’s a given. The real question then is once they get a performance boost are they going to use some of it to improve security or are they just going to tout themselves as the fastest? At some point we have to stop and ask ourselves how fast do we really have to get before we start using some of that processing power to make people safer instead? One can only hope…

Odds, Disclosure, Etc…

September 14th, 2010

12 posts left…

While doing some research I happened across an old post of mine that I had totally forgotten about. It was an old post about betting on the chances of compromise. Specifically I was asked to give odds against whether I thought Google or ha.ckers.org would survive a penetration test (ultimately leading to disclosure of data). Given that both Google and ha.ckers.org are under constant attack, it stands to reason that sitting in the ecosystem is virtually the equivalent of a penetration test every day. I wasn’t counting things like little bugs that are disclosed in our sites, I was specifically counting only data compromise.

There are a few interesting things about this post, looking back 4 years. The first thing is that pretty much everything I predicted came true in regards to Google:

… their corporate intranet is strewn with varying operating systems, with outdated versions of varying browsers. Ouch. Allowing access from the intranet out to the Internet is a recipe for disaster …

So yes, this is damned near how Google was compromised. However, there’s one very important thing, if I want to be completely honest, that I didn’t understand back then. I gave Google a 1:300 (against) odds on being hacked before ha.ckers.org would be. While I was right, in hindsight, I’d have to change my odds. I should have given it more like 1:30. The important part that I missed was the disclosure piece. Any rational person would assume that Google has had infections before (as has any large corporation that doesn’t retain tight controls over their environment). That’s nothing new - and not what I was talking about anyway. I was talking only about publicly known disclosures of data compromise.

So the part that I didn’t talk to, and the part that is the most interesting is that Google actually disclosed the hack. Now if we were to go back in time and you were to tell me that Google would get hacked into and then disclose that information voluntarily, I would have called BS. Now the cynics might say that Google had no choice - that too many people already knew, and it was either tell the world or have someone out you in a messy way. But that’s irrelevant. I still wouldn’t have predicted it.

So that brings me to the point of the post (as you can hopefully see, this is not a Google-bashing post or an I-told-you-so post). I went to Data Loss DB the other day and I noticed an interesting downward trend over the last two years. It could be due to a lot of things. Maybe people are losing their laptops less or maybe hackers have decided to slow down all that hacking they were doing. No, I suspect it’s because in the dawn of social networking and collective thinking, companies fear disclosure more than ever before. They don’t want to have a social uprising against them when people find out their information has been copied off. Since I have no data to back it up, I have a question for all the people who are involved in disclosing or recovering from security events. What percentage of the data compromises that you are aware of have been disclosed to the public? You don’t have to post under your own name - I just want to get some idea of what other people are seeing.

If my intuition is correct, this points to the same number of breaches or more than ever before, but less and less public scrutiny and awareness of what has happened to the public’s information. Perhaps this points to a lack of good whistle-blower laws against failing to disclose compromises (and monetary incentives for good Samaritans to do so). Or perhaps this points to a scarier reality where the bad guys have all the compromised machines and data that they need for the moment. Either way, it’s a very interesting downward trend in the public stats that seems incongruent with what I hear when I talk to people. Is the industry really seeing fewer successful attacks than a few years ago?

Bear In Woods Or Prairie Dog Ecosystem

September 9th, 2010

13 posts left…

The post I did a few days ago apparently resonated with a lot of people. So I decided to do a quick follow-up. If a true ecosystem is not like two guys being chased by a bear in the woods, what is it like? I think the closest real-life analogy I can come up with is the humble prairie dog. This is not a hero most people want to liken themselves to, typically. It’s more vermin than role model. But one thing is undeniable - they are a tremendously successful species with next to no defense mechanisms. So how do they succeed when the fox is on the hunt?

Before I answer that, it’s important to know that prairie dogs aren’t exactly the friendliest beasts to other prey animals that compete for food - like rabbits or ground squirrels and so on. So those animals are not welcome in the prairie dog’s holes in times of plenty, much in the same way executives are territorial about their intellectual property. But once a real predator, like a fox or a hawk, is spotted, everything changes. Now the prairie dog will let any prey animal that can fit into their holes, regardless of the fact that they may normally be in competition. Now the prairie dog is strong enough to shove out many of those smaller creatures that seek refuge and let them get eaten, thereby removing a competitor, but they don’t, and here’s why.

Predators need food to survive (think of a predator as a hacker that profits off of cyber crime in this analogy). If the prairie dog shoves their competitors out to be eaten, now the predator has been sustained. Every time the predator eats they gain enough strength to hunt again and possibly even produce offspring. This works completely contrary to the prairie dog’s goals. No, evolutionarily, the humble prairie dog, who has the biggest hole around, has learned that it’s better to save your competitors to starve your attacker. Starving the predator so they move on or die works much better over the long haul.

The last thing the prairie dog wants is more hawks around, even if that means the prairie dog would be in less competition for food from the other prey animals. Of course, I don’t expect executives to be as smart as a rodent right off the bat. But maybe they don’t have to be - maybe evolutionary forces are at work even as we speak, and those who fail to cooperate are being eaten. Meanwhile the attacker community grows to whatever the prey companies will support (monetarily, or in terms of intellectual property, or whatever currency the attacker trades in). There will always be predators in the wild, but their numbers can be limited when the prey work together.

Cookie Expiration

September 8th, 2010

14 posts left…

Day 1 at the OWASP conference in Irvine. Lots of good people here, and tons of good conversations. Talking with Jeremiah from WhiteHat and Sid Stamm from Mozilla reminded me that I wanted to talk about cookie expiration. I’m only talking for myself here, and not the average user - but I really dislike the concept of persistent cookies. If I wanted things to persist, I wouldn’t use sandboxes or violently and regularly clean my cookies by hand. Yet still - cookies persist way too long. Realistically there are two types of attacks that involve the persistence of cookies. The first is a drive-by opportunistic exploit - let’s say you’re on a porn site and it forces your browser to visit MySpace or Facebook, and because you’re probably logged in, boom, you’re compromised via CSRF or clickjacking or whatever. The second is where the attacker knows you’re logged in because they’re attacking you through the very platform that they intend to compromise (likejacking is a good example).

Although we can’t do much about the second case, the first comes down to cookie expiration in large part. Why should a browser hold onto a cookie just because the site told it to? If I’m not actively sending requests to the site in question there’s a good chance I don’t want my browser to send cookies after X amount of time. In my case, X is probably an hour or two max (considering I take lunches). Maybe some people would argue that they don’t want to be hassled by typing their webmail password in more than once per day. Okay, fine, but the point is the magic number probably isn’t once every two weeks, or once a month or once every 20 years, for most security people (I’d hope). So perhaps we need to consider a default mechanism for timing cookies out when they’re not actively being sent to the server, regardless of what the server wants. Incidentally, Sid thinks this would make a good addon. Takers?
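For anyone who wants to take Sid up on that, here’s a rough Python sketch of the idle-expiry logic such an addon might implement - the class and method names are mine, and a real addon would hook the browser’s actual cookie store rather than a dictionary:

```python
import time

class IdleCookieJar:
    """Toy jar that forgets cookies once they sit idle past max_idle seconds,
    regardless of the expiry the server asked for."""

    def __init__(self, max_idle: float = 2 * 3600):  # default: two hours
        self.max_idle = max_idle
        self._cookies = {}  # (domain, name) -> (value, last_sent_timestamp)

    def store(self, domain: str, name: str, value: str) -> None:
        self._cookies[(domain, name)] = (value, time.time())

    def cookies_for(self, domain: str) -> dict:
        """Return live cookies for a request and refresh their idle timers."""
        now = time.time()
        live = {}
        for (dom, name), (value, last_sent) in list(self._cookies.items()):
            if now - last_sent > self.max_idle:
                del self._cookies[(dom, name)]  # idle too long: drop it
            elif dom == domain:
                self._cookies[(dom, name)] = (value, now)  # actively in use
                live[name] = value
        return live

jar = IdleCookieJar(max_idle=1.0)  # absurdly short, just to demonstrate
jar.store("bank.example", "session", "abc123")
print(jar.cookies_for("bank.example"))  # cookie is fresh, so it's sent
time.sleep(1.5)
print(jar.cookies_for("bank.example"))  # cookie sat idle too long: gone
```

The key design choice is that actively using a site keeps its cookies alive, so the webmail-once-a-day crowd only re-authenticates after a genuine gap.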

The Effect of Snakeoil Security

September 4th, 2010

15 posts left…

I’ve talked about this a few times over the years during various presentations but I wanted to document it here as well. It’s a concept that I’ve been wrestling with for 7+ years and I don’t think I’ve made any headway in convincing anyone, beyond a few head nods. Bad security isn’t just bad because it allows you to be exploited. It’s also a long term cost center. But more interestingly, even the most worthless security tools can be proven to “work” if you look at the numbers. Here’s how.

Let’s say hypothetically that you have only two banks in the entire world: banka.com and bankb.com. Let’s say a Snakeoil salesman goes up to banka.com and convinces banka.com to try their product. Banka.com is thinking that they are seeing increased fraud (as is the whole industry), and they’re willing to try anything for a few months. Worst case, they can always get rid of it if it doesn’t do anything. So they implement Snakeoil into their site. The bad guy takes one look at the Snakeoil and shrugs. Is it worth bothering to figure out how banka.com’s security works and potentially having to modify their code? Nah, why not just focus on bankb.com, double up the fraud, and continue doing the exact same thing they were doing before?

Suddenly banka.com is free of fraud. Snakeoil works, they find! They happily let the Snakeoil salesman use them as a case study. So our Snakeoil salesman goes across the street to bankb.com. Bankb.com has strangely seen a twofold increase in fraud over the last few months (all of banka.com’s fraud plus their own), and they’re desperate to do something about it. The Snakeoil salesman is happy to show them how much banka.com has decreased their fraud just by buying their shoddy product. Bankb.com is desperate so they say fine and hand over the cash.

Suddenly the bad guy is presented with a problem. He’s got to find a way around this whole Snakeoil software or he’ll be out of business. So he invests a few hours, finds an easy way around it and voila - back in business. So the bad guy again diversifies his fraud across both banks. Banka.com sees an increase in fraud back to the old levels, which can’t be correlated to anything having to do with the Snakeoil product. Bankb.com sees their fraud drop immediately after having installed the Snakeoil, thereby proving for the second time that it works if you just look at the numbers.

Meanwhile what has happened? Are the users safer? No, and in fact, in some cases it may even make the users less safe (incidentally, we did finally manage to stop AcuTrust, as the company is completely gone now). Has this stopped the attacker? Only long enough to work around it. What’s the net effect? The two banks are now spending money on a product that does nothing, but they are now convinced that it is saving them from huge amounts of fraud. They have the numbers to back it up - although the numbers are only half the story. Now there’s less money to spend on real security measures. Of course, if you look at it from either bank’s perspective the product did save them and they’ll vehemently disagree that the product doesn’t work, but it also created the problem that it solved in the case of bankb.com (double the fraud).
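A toy simulation (all numbers invented) makes the accounting trick obvious - each bank’s own before/after comparison “proves” the product works, while the ecosystem-wide total only dips briefly while the attacker retools:

```python
# Monthly fraud per bank at each stage of the story, in made-up units.
timeline = [
    ("before anything",          {"banka": 50, "bankb": 50}),
    ("banka buys Snakeoil",      {"banka": 0,  "bankb": 100}),  # displaced, not stopped
    ("bankb buys Snakeoil too",  {"banka": 0,  "bankb": 0}),    # attacker pauses to adapt
    ("attacker works around it", {"banka": 50, "bankb": 50}),   # business as usual
]

for stage, fraud in timeline:
    total = sum(fraud.values())
    print(f"{stage:<28}banka={fraud['banka']:>3}  bankb={fraud['bankb']:>3}  total={total:>3}")
```

Each bank only ever sees its own column, and each column shows a dramatic drop right after purchase - which is the whole con.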

This goes back to the bear in the woods analogy that I personally hate. The story goes that you don’t have to run faster than the bear, you just have to run faster than the guy next to you. While that’s a funny story, it only works if there are two people and you only encounter one bear. In a true ecosystem you have many, many people in the same business, and you have many attackers. If you leave your competitor(s) out to dry that may seem good for you in the short term, but in reality you’re feeding your attacker(s). Ultimately you are allowing the attacker ecosystem to thrive by not reducing the total amount of fraud globally. Yes, this means if you really care about fixing your own problem you have to help your competitors. Think about the bear analogy again. If you feed the guy next to you to the bear, now the bear is satiated. That’s great for a while, and you’re safe. But when the bear is hungry again, guess who he’s going after? You’re much better off working together to kill or scare off the bear in that analogy.

Of course if you’re a short-timer CSO who just wants to have a quick win, guess which option you’ll be going for? Jeremiah had a good insight about why better security is rarely implemented and/or sweeping security changes are rare inside big companies. CSOs are typically only around for a few years. They want to go in, make a big win, and get out before anything big breaks or they get hacked into. After a few years they can no longer blame their predecessor either. They have no incentive to make things right, or go for huge wins. Those wins come with too much risk, and they don’t want their name attached to a fiasco. No, they’re better off doing little to nothing, with a few minor wins that they can put on their resume. It’s a little disheartening, but you can probably tell which CSOs are which by how long they’ve stayed put and by the scale of what they’ve accomplished.

Browser Detection Autopwn, etc…

September 4th, 2010

16 posts left…

I often find myself thinking about egyp7’s DefCon speech last year. He was talking about browser autopwn, which was a relatively new concept at that time being built into Metasploit. Pretty cool technology, and with only one minor mishap he was able to demonstrate it on stage with impressive results. That’s all well and fine, and you should check it out, but one thing stuck out from the presentation more than the technology itself.

By doing variable detection he could find out everything down to the individual patch level of the device in most cases. Of course a bad guy can mess with these variables and lie, which egyp7 admitted. But, wisely, he said something to the effect that if you find a browser that is lying about its user agent, you have probably found yourself a browser hacker, and you don’t want to try to own his browser anyway. Once you find yourself in this condition, bail. The idea mirrors a lot of the type of stuff I wrote about in Detecting Malice. By identifying the signature of browsers and how people navigate sites you can know a lot about your potential adversary. Either for good or, in the case of autopwn, evil. Growing this signature database over time could be very useful as attention on browser exploitation increases and the need for understanding user traffic and intent grows.
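A minimal sketch of that “is the browser lying?” check might look like the following - the fingerprint signals and signature table here are invented for illustration, not what Metasploit actually ships:

```python
# Engine quirks visible from JavaScript, keyed by claimed browser family.
# These particular signals are made up for the example.
SIGNATURES = {
    "MSIE":    {"has_activex": True,  "moz_extension_api": False},
    "Firefox": {"has_activex": False, "moz_extension_api": True},
}

def user_agent_lies(claimed_family: str, observed: dict) -> bool:
    """Flag a browser whose observed fingerprint contradicts its claimed family.
    Per egyp7's advice: if it's lying, bail rather than fire exploits at it."""
    expected = SIGNATURES.get(claimed_family)
    if expected is None:
        return True  # unknown claim: treat as suspicious
    return any(observed.get(key) != value for key, value in expected.items())

# A "Firefox" that somehow exposes ActiveX is probably not Firefox.
print(user_agent_lies("Firefox", {"has_activex": True, "moz_extension_api": True}))
print(user_agent_lies("Firefox", {"has_activex": False, "moz_extension_api": True}))
```

The same comparison works in reverse for defenders: a mismatch between claimed and observed behavior is a strong signal the visitor isn’t what they say they are.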

The Perils of Speeding up the Browser

September 3rd, 2010

17 posts left until the end…

A year or so ago I went to go visit the Intel guys at their internal conference that they throw (similar to Microsoft’s Bluehat). I honestly had no idea what to tell a bunch of hardware guys. What correlation does chip manufacturing really have with browsers or webapps? Well, virtualization and malware certainly, but what else? It got me thinking… one of the things they are in direct control over is how fast operating systems (and subsequently browsers) work. I talked it over with id before going out there. Faster is better, right?

I’ve got mixed feelings about fast vs. slow browsers. When something is slow, you can actually detect that something strange is going on. It’s also easier to stop it from misbehaving if an attack takes a while. When it’s fast, it’s much harder to notice that your computer had to chug for a while to do something complex, and much less likely that a user can intervene. There have been a number of exploits out there that have really been proof of concept only. They’re deemed not practical because they take too long, or hang the browser temporarily while they’re being executed. If the speed barrier is removed, then suddenly those old proofs of concept (think res:// timing attacks and so on) are much easier to perform. So while I think innovation and performance improvement is a good thing overall, it does come with some unintended consequences.
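Some back-of-the-envelope math (with entirely hypothetical probe costs) shows why speed changes the calculus - a res://-style enumeration that would visibly hang an older browser becomes imperceptible after a 100x speedup:

```python
def enumeration_time(entries: int, per_probe_ms: float) -> float:
    """Wall-clock seconds to run a timing probe against every entry."""
    return entries * per_probe_ms / 1000.0

entries = 10_000  # e.g. a dictionary of resources probed via a timing check
slow = enumeration_time(entries, per_probe_ms=10.0)  # hypothetical older browser
fast = enumeration_time(entries, per_probe_ms=0.1)   # after a 100x speedup

print(f"slow browser: {slow:.0f}s of obvious chugging; fast browser: {fast:.0f}s")
```

At 100 seconds the user closes the tab; at one second the attack finishes before anyone notices anything.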

Browser Differences, Minutia Et Al…

September 3rd, 2010

18 posts left…

I got an email last night from someone asking me to do a breakdown of which browser is better: Internet Explorer, Firefox, Opera, Safari or Chrome. First of all, there’s already a pretty good reference that Michal Zalewski put together. Like anything this comprehensive, it’s already out of date in a few ways since it hasn’t been edited for about half a year, but it’s a great place to get started for those who want to get familiar with the internal differences between the various browsers. No need to re-invent the wheel - go read it. Now, that’s the purely technical side, but there is one thing that’s wildly missing from most documents that talk about browser security.

Browser security often turns into a religious war amongst technologists, instead of thinking about it pragmatically. What are the real motives of the companies that are developing the browsers? In most cases they care primarily about market share because market share makes them money (through search engine agreements, and so on). So now you have to think about yourself and your needs. What kind of user are you? I tend to be a very security conscious person, and if you’re reading this you probably are too. I’m willing to severely degrade my usability for an increase in security, whereas most users are not. So the browser I will tend towards is one that offers me the flexibility to make those decisions for myself while still giving me enough usability to be able to do anything I need to do, when I decide to. This is why Firefox has been my personal browser of choice for years - but don’t be confused and think it’s because I think Firefox is more secure out of the box. Firefox has just as many flaws as other browsers, by default.

While security people’s needs are important, if you look at the number of people who are security folks compared to the rest of the world, we are insignificant as a percentage. That means that it is not in the browser company’s interest to focus on appeasing security people. Sure, it’s nice to have a browser that is secure, but that’s not ever going to drive the volume of users necessary to make the real revenue for their organizations - or at least that’s what the market seems to be proving. Plus most of the major browsers above tout themselves as being more secure than their competitors - so normal consumers don’t know who to believe. As such, while I think all the major browsers mentioned above have their pros and cons, none of them are designed with security first. They’re designed for a different set of users in mind (which includes security people, but it also includes our grandmas, and tweens and cousin Cletus), and that puts browser design choices somewhat at odds with security, because what does Cletus care or know about security? So that’s where plugins, addons, sandboxes, VMs, etc… come into play. It’s like wearing a condom around your browser, if you like. It gives us the ability to use the same underlying product while still protecting ourselves as much as possible.

I honestly think most browsers can be made to be very secure, if you’re willing to sacrifice all usability - not completely secure, no doubt, but far more secure than any of the major browsers above ship by default. So, it’s a little hard for me to play favorites. They each have their own security mess to clean up, so currently there is no good solution, and I don’t recommend any browsers to anyone (although you people still on IE6 really should upgrade already). The work involved in really securing your browser simply isn’t worth explaining to most people. In fact, “which browser do you use” is my least favorite question, because it’s not as simple as a single word. Boutique browsers, while interesting, don’t often have the support behind them to make them useful for a lot of the more common applications (lacking vast plugin support, etc…) although of anyone, they actually could align themselves nicely with the needs of security people. So, while I think browser security is often about minutia, we need to fully grasp the market forces at work before getting completely fed up by a constant string of functionality that only makes it less secure, instead of expecting dramatic security improvements. Or we need to pick something more obscure and assume the risks involved with a product that is not tried and true. It’s not an easy problem for us or the browser companies - I don’t envy their situation.