web application security lab

Archive for the 'Random Security' Category

Cheating Part 1

Sunday, November 21st, 2010

6 posts left…

I just thought I’d write a few vaguely amusing posts having just come back from Abu Dhabi (Blackhat) and Brazil (OWASP). A few weeks back my wife’s work was throwing a rather fancy soiree that also had a casino night attached to it. I was pretty annoyed about the whole work-party thing, having rarely had a good time at these things in the past. So immediately I start looking for ways to entertain myself. Well, upon entering they gave us both a ticket which we could turn in for $500 in chips. Then they said, “For every $100 in chips you turn in we’ll give you a ticket.” Immediately I saw a fault: “So if I give you $500 in chips you’ll give me 5 tickets… and if I give you 5 tickets you’ll give me $2500 in chips? Do you see a problem with that?” My wife was instantly annoyed - she knows full well I’ll ruin the whole night for everyone if I start cheating. So she tells me I’m not allowed to do that. Okay, maybe I just shouldn’t have said it out loud. I just love cheating at games and she knows it.

So I take my ticket and my wife’s ticket - she’s decided to ditch me to talk to her work friends while I do the casino-night thing. They give me $1000 in chips in exchange, and a few caveats. The first is that there are three prizes at the end of the night - an iPad and two Flip video cameras - and the drawing is after the casino closes at 10:30. They also tell me that I can buy back in at any time for $20 and get another $500 in chips. Fair enough. So I peruse the various games. Roulette - a fast game, but crap for odds. Poker - a man’s game with good odds if you’re good at playing, but way too slow. Blackjack - ahh, perfect. Blackjack has good odds, it’s fast, and it’s also social, so I can at least talk to some people while I play. Plus it doesn’t hurt that id was a professional Blackjack player for years and taught me everything I know about it.

So I start playing Blackjack and I realize right away two very important things about the dealer. First - she’s very good - Vegas-quality good. The second is that she doesn’t care at all about her job. I see her bury cards when they’d bust someone, even when they don’t notice or particularly care. She’s doing it so slyly, though, that I’m the only one who’s noticing. So I call her out on it in a good way and tell her she should work in Vegas. Well, it turns out that she used to, and we hit it off. I notice that she starts helping me out too. So I vary my bet and start increasing my winnings from $25 per hand to $100, then $500, and eventually $1000 or more a hand. Meanwhile I’m trying to help other people by paying to get them to split when they should - making a few thousand for other players here and there. I know the dealer appreciated that, because happy customers make for bigger tips.

Now this dealer, for the most part, is in a $1-5-per-person tip situation. I realize that by the end of the night I’ll probably end up with at least three times the next highest chip count, so I tell her to let me know when she’s going to play the last hand. The last hand comes and I give her a $20 bill as a tip, partly because she had made the night so much fun when I had had such low expectations of the whole thing, and partly because I knew it would help me in the last hand. Of course she was very thankful. So on my last hand of the night I bet $7000. She intentionally busted herself out, and instead of paying me $7000, she paid me $10,000.

So at this point I’m at $22,000 and change in chips, with the next highest player that I could see at $4,000. So I give her a big hug goodbye, because she had just made the whole thing that much better. Then she slipped me two more $10,000 chips, for a grand total of $42,000 and change in chips - more than 10x the next highest player. That sounds all well and good, except now I have to convert a relatively small number of high-value chips into tickets. So a huge line builds up as a half dozen volunteers have to sit there and rip up 420 tickets. It took a lot longer than I had expected and people were starting to get pissed - and rightfully so, since I was basically guaranteed to win something or everything at that point. So I settled for closer to 300 tickets, just so I could get out of there without getting on my wife’s company’s shit-list.

Surprisingly, my wife was actually amused by the whole thing - she’s usually annoyed by my antics. For $20 we ended up winning a Flip, and I had the best time I’ve ever had at one of those stupid work parties. If I had tried to buy the tickets using their assigned value in actual cash it would have taken $1,680 - a pretty expensive Flip if you ask me. The amusing questions went along the lines of, “What did you play?” followed by, “Man, I should have played Blackjack! All I got was $800 in chips.” My wife says that I’m a dick.

Odds, Disclosure, Etc…

Tuesday, September 14th, 2010

12 posts left…

While doing some research I happened across an old post of mine that I had totally forgotten about - a post about betting on the chances of compromise. Specifically, I was asked to give odds on whether I thought Google or ha.ckers.org would survive a penetration test (one ultimately leading to disclosure of data). Given that both Google and ha.ckers.org are under constant attack, it stands to reason that sitting in that ecosystem is virtually the equivalent of a penetration test every day. I wasn’t counting things like little bugs disclosed in our sites; I was counting only data compromise.

There are a few interesting things about this post, looking back 4 years. The first thing is that pretty much everything I predicted came true in regards to Google:

… their corporate intranet is strewn with varying operating systems, with outdated versions of varying browsers. Ouch. Allowing access from the intranet out to the Internet is a recipe for disaster …

So yes, this is damned near how Google was compromised. However, if I want to be completely honest, there’s one very important thing that I didn’t understand back then. I gave 1:300 odds (against) on Google being hacked before ha.ckers.org. While I was right, in hindsight I’d have to change my odds - I should have given it more like 1:30. The important part that I missed was the disclosure piece. Any rational person would assume that Google has had infections before (as has any large corporation that doesn’t retain tight controls over its environment). That’s nothing new - and not what I was talking about anyway. I was talking only about publicly known disclosures of data compromise.

So the part that I didn’t talk to, and the part that is the most interesting is that Google actually disclosed the hack. Now if we were to go back in time and you were to tell me that Google would get hacked into and then disclose that information voluntarily, I would have called BS. Now the cynics might say that Google had no choice - that too many people already knew, and it was either tell the world or have someone out you in a messy way. But that’s irrelevant. I still wouldn’t have predicted it.

So that brings me to the point of the post (as you can hopefully see, this is not a Google-bashing post or an I-told-you-so post). I went to Data Loss DB the other day and I noticed an interesting downward trend over the last two years. It could be due to a lot of things. Maybe people are losing their laptops less, or maybe hackers have decided to slow down all that hacking they were doing. No, I suspect it’s because in the dawn of social networking and collective thinking, companies fear disclosure more than ever before. They don’t want a social uprising against them when people find out their information has been copied off. Since I have no data to back this up, I have a question for all the people who are involved in disclosing or recovering from security events: what percentage of the data compromises you are aware of have been disclosed to the public? You don’t have to post under your own name - I just want to get some idea of what other people are seeing.

If my intuition is correct, this points to the same number of breaches as ever - or more - but less and less public scrutiny and awareness of what has happened to the public’s information. Perhaps it points to a lack of good whistle-blower laws against failing to disclose compromises (and monetary incentives for good Samaritans to report them). Or perhaps it points to a scarier reality where the bad guys already have all the compromised machines and data they need for the moment. Either way, it’s a very interesting downward trend in the public stats that seems incongruent with what I hear when I talk to people. Is the industry really seeing fewer successful attacks than a few years ago?

The Effect of Snakeoil Security

Saturday, September 4th, 2010

15 posts left…

I’ve talked about this a few times over the years during various presentations, but I wanted to document it here as well. It’s a concept I’ve been wrestling with for 7+ years, and I don’t think I’ve made any headway in convincing anyone beyond a few head nods. Bad security isn’t just bad because it allows you to be exploited. It’s also a long-term cost center. But more interestingly, even the most worthless security tools can be “proven” to work if you look at the numbers. Here’s how.

Let’s say, hypothetically, that you have only two banks in the entire world: banka.com and bankb.com. A Snakeoil salesman goes up to banka.com and convinces them to try his product. Banka.com is thinking that they’re seeing increased fraud (as is the whole industry), and they’re willing to try anything for a few months. Worst case, they can always get rid of it if it doesn’t do anything. So they implement Snakeoil on their site. The bad guy takes one look at the Snakeoil and shrugs. Is it worth bothering to figure out how banka.com’s security works and potentially having to modify his code? Nah - why not just focus on bankb.com, double up the fraud there, and continue doing the exact same thing he was doing before?

Suddenly banka.com is free of fraud. Snakeoil works, they find! They happily let the Snakeoil salesman use them as a use case. So our Snakeoil salesman goes across the street to bankb.com. Bankb.com has strangely seen a twofold increase in fraud over the last few months (all of banka.com’s fraud plus their own), and they’re desperate to do something about it. The Snakeoil salesman is happy to show them how much banka.com has decreased their fraud just by buying his shoddy product. Bankb.com is desperate, so they say fine and hand over the cash.

Suddenly the bad guy is presented with a problem. He’s got to find a way around this whole Snakeoil software or he’ll be out of business. So he invests a few hours, finds an easy way around it, and voila - back in business. The bad guy again diversifies his fraud across both banks. Banka.com sees fraud climb back to the old levels, which can’t be correlated to anything having to do with the Snakeoil product. Bankb.com sees their fraud drop immediately after having installed the Snakeoil - thereby proving, for the second time, that the product works if you just look at the numbers.

Meanwhile, what has happened? Are the users safer? No - and in fact, in some cases it may even make the users less safe (incidentally, we did finally manage to stop AcuTrust, as the company is completely gone now). Has this stopped the attacker? Only long enough to work around it. What’s the net effect? The two banks are now spending money on a product that does nothing, but they are now convinced that it is saving them from huge amounts of fraud. They have the numbers to back it up - although the numbers are only half the story. Now there’s less money to spend on real security measures. Of course, if you look at it from either bank’s perspective the product did save them, and they’ll vehemently disagree that it doesn’t work - but in bankb.com’s case it also created the very problem it solved (double the fraud).
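The whole dynamic can be captured in a toy simulation (all the numbers here are invented purely for illustration; the point is only that the total never moves while each bank’s local numbers “prove” the product worked):

```python
# Toy model of the snakeoil dynamic: fraud never decreases in total,
# it just moves between the two banks as each one deploys the product
# and the attacker routes around it.

def simulate(total_fraud=100):
    # Phase 0: fraud split evenly between the two banks.
    banka, bankb = total_fraud / 2, total_fraud / 2
    history = [("baseline", banka, bankb)]

    # Phase 1: banka deploys Snakeoil; the attacker just moves on,
    # doubling bankb's fraud instead of bothering with a bypass.
    banka, bankb = 0, total_fraud
    history.append(("banka deploys snakeoil", banka, bankb))

    # Phase 2: bankb deploys it too; the attacker spends a few hours
    # on a bypass and redistributes the fraud evenly again.
    banka, bankb = total_fraud / 2, total_fraud / 2
    history.append(("bankb deploys snakeoil, bypass found", banka, bankb))
    return history

for label, a, b in simulate():
    print(f"{label:40s} banka={a:5.1f} bankb={b:5.1f} total={a + b:5.1f}")
```

From banka.com’s books, phase 1 looks like a miracle cure; from bankb.com’s books, phase 2 does. The `total` column is the only row the salesman never shows anyone.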

This goes back to the bear-in-the-woods analogy that I personally hate. The story goes that you don’t have to run faster than the bear, you just have to run faster than the guy next to you. That’s a funny story, but it only works if there are two people and you only encounter one bear. In a true ecosystem you have many, many people in the same business, and you have many attackers. If you leave your competitor(s) out to dry, that may seem good for you in the short term, but in reality you’re feeding your attacker(s). Ultimately you are allowing the attacker ecosystem to thrive by not reducing the total amount of fraud globally. Yes, this means that if you really care about fixing your own problem, you have to help your competitors. Think about the bear analogy again. If you feed the guy next to you to the bear, the bear is satiated. That’s great for a while, and you’re safe. But when the bear is hungry again, guess who he’s going after? You’re much better off working together to kill or scare off the bear.

Of course if you’re a short-timer CSO who just wants to have a quick win, guess which option you’ll be going for? Jeremiah had a good insight about why better security is rarely implemented and/or sweeping security changes are rare inside big companies. CSOs are typically only around for a few years. They want to go in, make a big win, and get out before anything big breaks or they get hacked into. After a few years they can no longer blame their predecessor either. They have no incentive to make things right, or go for huge wins. Those wins come with too much risk, and they don’t want their name attached to a fiasco. No, they’re better off doing little to nothing, with a few minor wins that they can put on their resume. It’s a little disheartening, but you can probably tell which CSOs are which by how long they’ve stayed put and by the scale of what they’ve accomplished.

Detecting some forms of MITM attacks

Friday, August 20th, 2010

31 posts left…

There are quite a few different methods of performing MITM attacks, but one in particular struck my fancy early on when I was thinking about airpwn. In the case of airpwn and similar exploits, the attacker may be able to listen to the packets being transmitted but isn’t able to block them, so it comes down to a race: beating the legitimate response back to the victim. I don’t know how prevalent any sort of MITM attack actually is, but in this case there are a few things you could do to detect one.

Anyway, if you receive double the DNS replies, or double ACK responses for instance, that could indicate that someone is trying to beat another packet back - which is why you end up with two. Of course, figuring out which one is real isn’t straightforward (the bad guy may have just been slow, in which case it’s the first one that’s real). And there may be other things the bad guy can do, like immediately forwarding a RST packet to the server you’re trying to connect to in order to quash the double ACK, so this may have limited utility.
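As a rough sketch of what that detection bookkeeping might look like - with the packet capture abstracted away into simple (transaction ID, resolver, answer) tuples, since the sniffing layer itself isn’t the interesting part here:

```python
from collections import defaultdict

def find_duplicate_replies(replies):
    """Flag DNS transactions that received more than one distinct answer.

    `replies` is a simplified stand-in for a sniffed packet feed:
    (txid, resolver_ip, answer_ip) tuples. Two different answers for
    the same transaction ID is the race signature described above;
    note that the first answer to arrive is not necessarily the
    legitimate one.
    """
    seen = defaultdict(set)       # (txid, resolver) -> set of answers
    suspicious = {}
    for txid, resolver, answer in replies:
        seen[(txid, resolver)].add(answer)
        if len(seen[(txid, resolver)]) > 1:
            suspicious[(txid, resolver)] = sorted(seen[(txid, resolver)])
    return suspicious

# Example feed: transaction 1 gets two conflicting answers - suspicious.
feed = [
    (1, "8.8.8.8", "93.184.216.34"),
    (2, "8.8.8.8", "198.51.100.7"),
    (1, "8.8.8.8", "10.0.0.66"),
]
print(find_duplicate_replies(feed))
```

In real use you’d feed this from a packet sniffer and key on the DNS transaction ID from the UDP payload; the same counting idea applies to duplicate ACKs, keyed on the TCP 4-tuple and sequence number instead.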

Perhaps someone could think of another ingenious way to use that information or think of other clever methods of detection based on something similar for the other classes of MITM (like acting as a proxy, or re-routing traffic, etc…). I’m sure someone somewhere has already thought about and posted about this concept, but I wasn’t able to find anything in a cursory search. Maybe it’s new, maybe not, but I still thought it was interesting, even if limited.

The Chilling Effect

Friday, August 20th, 2010

As I wind down to 33 posts left until my 1000th and last post, I thought I should spend a little time talking more introspectively about how our community has changed over the years.

When I got started in security I had around the 130th hacker website on earth. We were all linked together with the second webring ever made (for those of you who recall webrings), which is how I know. Incidentally, webring was made by a guy in his basement as a college experiment; Bronc Buster got in touch with him, which is why we were the second. It was called the Fringe of the Web. Back then, sharing knowledge was hard to do. Search engines didn’t exist (DMOZ was really it). No one really trusted one another. No one really knew much, because there weren’t many help files or docs being published back then either. I think a lot of people felt there was a strong possibility they’d land themselves in jail if they were too outspoken about security. For you to get any better you had to do the research yourself, because there weren’t many people around to help (at least in my case there weren’t). That was especially true for me, because what I was interested in wasn’t being a good sys-admin or network guy, and all the docs were about operating system security, firewalls, and memory corruption. People were pretty unhelpful, with a lot of RTFM even though the manuals hadn’t been written yet. Installing Debian on my Gateway 2000 with my crapola Mitsumi CD-ROM, for which there were no drivers yet written, was my burden alone to figure out. Instead I was interested in this whole newfangled web thing - which almost no one knew anything about. Defacements were the norm; cybercrime was a myth reserved for wild-eyed paranoids and movies. Let’s call this the dark ages of computer security.

Later the industry dramatically expanded, and instead of just north of a hundred sites talking about security, suddenly you saw security-related articles and blogs in the mainstream press. There were tens of thousands of sites talking about it. There was more new code, and more ideas being passed around, than ever before. No one really feared jail time anymore - which had been the only major consequence of publishing code that anyone could come up with. Enter script kiddies and sites devoted to helping people learn about computer security. Cybercrime was just taking off, and everyone realized that this was turning into a business. Companies started acquiring security companies, we got cool titles like CISO and CSO, and we even got our own certifications. We finally had use cases and anecdotes for everything we had been talking about all these years. Linux started being sold on commercial desktops. It was the heyday of computer security. Let’s call this the enlightenment.

In the dark ages of computer security no one released code because they feared jail. In the enlightenment everyone released vulns because they wanted to make a name for themselves and prove their skill. So where does that leave us today? Let’s take an example of a hypothetical young web application and browser security guy (think me but just starting out) with no background or history in the industry. We’ll call him “Todd.”

Let’s say Todd releases a browser vuln that is useful against a good chunk of browsers, but it’s an architectural flaw - one that won’t be fixed for many years to come, because fixing it would break other things. It’s not a desktop-compromise type of issue; it just allows attackers to harm most websites in some obscure way (think the next version of CSRF or XSS or Clickjacking or whatever). Todd, not knowing what to do or who to talk to, releases the vuln to make a name for himself and to help close down the hole, because he thinks that’s the right thing to do. Here are some possibilities:

  • The Vendor is pissed at Todd for releasing the vuln and not telling them first - especially since there’s no fix. You evil vulnerability pimp you!
  • The press asks the simple question, “Why did you release this when you knew there was no fix?” to which Todd has no good answer except that he thought he was doing the right thing by letting people know - and then the press misquotes him.
  • The blackhat community is pissed because they have been using something similar (or not) but either way they know this cool trick has a limited lifespan now thanks to Todd. More importantly they’ll try to hack Todd for releasing it. There will be much fist shaking and cursing of Todd’s name the day the vuln gets closed too.
  • The elite crowd are annoyed because they don’t think Todd should have gotten any publicity. The elite kernel level bug is way sexier (and it may very well be) and takes more skill (quite possible as well), but Todd knows nothing about the politics of the industry - he’s just interested in his stuff. They may try to hack and drop Todd’s docs to shut him up. There’s only so much limelight to go around, after all. Incidentally, I don’t think most guys who work on these types of vulns are like this, but it only takes a few to deter someone new like Todd.
  • There’s a slim chance someone might offer him a 9-5 job - as long as the vendor isn’t one of their clients.

Now let’s take the flip side - what if he wants to sell it:

  • The vendor won’t pay for an architectural bug - only full machine compromises please!
  • The blackhats won’t pay for it, because it doesn’t give them a shell.

So where does that leave Todd? It’s not in his best interest to release the vuln, because of the externalities of negative pressure, and no one is buying either. How does Todd make a name for himself? More importantly, how does he survive? Why on earth would Todd give up his vuln for free? He knows he could do some major damage with it, but the elite aren’t impressed so he doesn’t even get clout. Perhaps there’s a slim chance the vendor might hire him in gratitude? That’s a long shot and a waste of a great find for the chance at a 9-5 in the boiler room. Instead why wouldn’t Todd say screw it entirely and either stop doing the research and find something else to do or become bad and make some real cash? The chilling effect is in full swing. We are quite squarely headed towards another information security dark age. Sure there are a lot of good documents (if dated) on the web still. The bulk of advisories are from vendors these days, so you’ll still be up on yesterday’s news and patch management will be your life. Private conversations will always continue, but it won’t ever be like the enlightenment again unless something changes. I spoke with two large vendors about this and they acknowledged their part in it and that indeed they offered no good solution for someone like Todd who hadn’t already established himself - except the vague hope of some consulting arrangement.

I spoke with one guy who buys vulns, and I asked him out of curiosity who his buyers were. I was expecting him to say some large software retailers, but he said, “No, no, not at all. Most of my buyers are consulting companies.” I was confused. It turns out that there is a slew of consulting companies that will come up empty on a client pen-test but can’t show the client that they found nothing - so they’ll whip out a ready-made 0day, impress the client, and then go on the speaking circuit about their amazing find. Call me naive, but it never even occurred to me that this industry could be that messed up. If you see someone speaking at a conference about some memory corruption flaw but they can’t seem to explain their own vuln the way you’d expect them to - you may have found one of these consultants.

I think this is important because, as my tenure in the blogging world comes to a close, I feel like there are a lot of very talented people who will never get to see their day in the sun - and, as an unfortunate consequence of this vulnerability market, some talentless people will. I know several people who have completely packed up and decided to get out of the industry entirely because of how things are shaping up. I fear that the way things are headed, it will be harder and harder for someone to rise to the top without retribution from their peers. There is a whole new generation of people lining up to replace guys like me, joining a very corrupt and preservationist industry. They may not have thick skin, and they may not survive what is in store for them. I’ve talked to over a dozen security folks who tell me the same story: they worry more about the security community’s reaction to anything they say publicly than about actual bad guys committing crime. Is it too late to fix, or is it even worth fixing? Or would you argue that this is the best it’s ever been? I’d be curious to hear what people think.

Hill-Billies: A Case Study

Monday, August 16th, 2010

34 posts until the end… Oh, and happy Monday. It’s time for a little story.

Once upon a time there were some hill-billies living in the deep South. They had virtually nothing. They made their moonshine and lived the most meager of lifestyles. They were in deep poverty. They made do with their hooch and stories. They worked hard - 8 hours per day at the local sweatshop - but they were happy enough. Then one day an advocate for a minimum-wage increase saw what the hill-billies were living in and how they were living their lives. It made the advocate angry, and they went to fight the local sweatshop to increase the workers’ wages. The advocate wanted to make sweeping changes, and would use the hill-billies as a case study - proof of how much a little extra money can improve someone’s standard of living - to further the cause.

Eventually, after intense scrutiny, the sweatshop realized that they had indeed been paying too little for any decent standard of living and decided to give all their minimum-wage workers a rate increase, which included our friends the hill-billies. So now you’re thinking to yourself: the hill-billies got a home loan, or used the money to pay for school, or did something else productive, right? No… what happened was that the hill-billies had always been happy with what they had, and the extra money simply allowed them to work less and make the same amount. They continued to make their moonshine and lived happily within their means…

The moral of the story is that about a year ago I reached an inflection point in my career of 15 years in security. I realized that with every major innovation the security community comes up with, the general public and vendors alike figure out a way to abuse that innovation or work around it to do what they originally wanted to do again (think firewalls and tunneling over port 80). It feels like we’ve been battling to protect people, but the people don’t want to be protected if it means changing. They’re happy with the status quo. Of course, there’s always fear of the unknown, and fear of insecurity is a key driver of spending (think anti-virus). One thing’s for sure though, you can’t change the nature of the hill-billies, so why are we trying? Our only path to success is empowering people to do what they want, without getting in the way. The words “No” and “Can’t” have to leave our vocabulary when it comes to what consumers and developers and companies want to do. Now, the trick is: how do we build security that no one notices is there?

Removing Entropy From PHP Session IDs

Thursday, August 12th, 2010

35 posts remaining…

Samy is awesome. If you missed his preso at Blackhat and DefCon, you missed out. You should try to get the DVD just to hear him - it’s hilarious. I’m not just saying that because he was using me as a fake case study or anything; it really was hilarious. Anyway, we got to talking, and it occurred to me that it wasn’t super easy to automate his PHP session ID attack, because it requires some social engineering to get the IP address of the user you want to hijack. Well, after some thought I think I came up with a way around that in some cases.

There are a ton of sites these days that sit behind load balancers. There are a few ways a load balancer can be installed - completely transparent, or acting more like a proxy. The proxy setup is the more common one, but it has one pretty huge negative side effect: every client’s IP address arrives at the server as a single address - the internal IP of the load balancer. Normally that’s not a huge deal, because the load balancer does the logging, or it sets some custom HTTP header that is properly logged. But PHP doesn’t know about any of that - it’s dumb. It’ll take whatever value it sees as the client IP address and feed it into the session ID algorithm. So now, instead of having to guess the entire IP space of the Internet, you only have to guess RFC1918 space - and realistically, probably a much smaller slice of that in most cases.
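As a back-of-the-envelope check on how much that shrinks the search space (a quick sketch counting only the three RFC1918 blocks; in practice a given load balancer lives in one known subnet, narrowing it much further):

```python
# Compare the attacker's IP guessing space: the whole IPv4 Internet
# versus the RFC1918 private ranges a load balancer's internal
# address must come from.
import ipaddress
import math

full_ipv4 = 2 ** 32
rfc1918 = sum(net.num_addresses for net in (
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
))

print(f"{full_ipv4:>12} candidates (~{math.log2(full_ipv4):.1f} bits)")
print(f"{rfc1918:>12} candidates (~{math.log2(rfc1918):.1f} bits)")
```

That’s roughly 17.9 million candidates instead of 4.3 billion - about 8 bits of entropy gone from the IP component before you even guess which subnet the balancer actually sits in.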

Although that setup is pretty common, there is still one drawback. For Samy’s exploit to work you need to know when someone logged in (down to the second, preferably) to remove enough entropy to make it worthwhile to attack. So this still isn’t easily turned into an automated exploit, but we’re slowly but surely getting there.

Some Possible Insights into Geo-Economics of Security

Wednesday, July 21st, 2010

38 more posts left…

I first started thinking about this when I talked to a friend from Vietnam a year or so ago regarding his CISSP. Once upon a time it was nearly impossible to find someone in Vietnam with a CISSP. At first I thought he was making some sort of joke about the usefulness of the certificate, but for some things in Vietnam it really is a hot commodity. It turns out that the cost of living there makes a CISSP almost totally not worth it: the certification is expensive even in the United States (where I live), but relative to wages in Vietnam it costs weeks or even a month’s worth of work. Therefore the rate at which the certificate is awarded is lower - not because of skill, know-how, or anything else. It’s purely economics. Slowly that has changed and more people in Vietnam have it than before, but from what I was told it’s still not equal, as a percentage, to the USA.

That got me thinking about other issues that are relatively the same. For instance, SSL/TLS certificates. Buying a certificate to allow for transport security is a good idea if you’re worried about man-in-the-middle attacks. Yes, that’s true even despite what I’m going to tell you in my Blackhat presentation, where Josh Sokol and I will be discussing 24 different issues of varying severity with plugins and browsers in general. But when you’re in a country where the cost of running your website is a significant investment compared to the United States, suddenly the fees associated with the risks are totally lopsided. So this may be why you see a lower adoption rate of certificates in certain regions. More importantly, there really is no long-term reason the security industry can’t create a free certificate authority (over DNSSEC, for instance) that provides all the same security or more, without the costs - therefore making it a more equal playing field.

Lastly, I started thinking about bug bounties, and how they work almost in the opposite direction. Unlike defense, where the cost of playing is high, hacking can be much more lucrative depending on your geo-economic situation. For instance, a $3000 bug bounty for something that takes two weeks of work equates to a $78k-a-year job if you can be consistent. In the United States, for a skilled researcher, that’s barely worth the time. But in a country where the average income is closer to $10k a year, something like this might strongly incentivize researchers to focus on attack versus defense. Anyway, I thought it was an interesting concept that may play out entirely differently in reality, but it was a fun thought exercise.
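The annualization arithmetic above works out as follows (a trivial sketch; the $3000 bounty and two-week figures are the hypothetical ones from the paragraph, not real market rates):

```python
# Annualize a bug bounty: if you can land one bounty every
# `weeks_per_bug` weeks, what does that pay per year?
def annualized(bounty, weeks_per_bug, weeks_per_year=52):
    return bounty * (weeks_per_year / weeks_per_bug)

income = annualized(3000, 2)   # $3000 per bug, two weeks each
print(f"${income:,.0f}/year")  # $78,000/year
```

Against a $10k average annual income that’s nearly eight years of wages; against a US researcher’s salary it’s barely competitive - which is the whole geo-economic asymmetry in one line.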

Fierce 2.0 To Be Released

Thursday, June 10th, 2010

A few years back I wrote a tool to do DNS enumeration. The point of it was that it’s incredibly difficult to do an accurate penetration test against a target when you don’t know what to attack. The only way to know is to find all the machines associated with that domain, customer, or whatever. After a weekend or so of coding I came up with a functional, albeit crappy, Perl program that did just that. A few people took note, a lot of people (rightfully) called me out for my crappy programming, and ultimately it sat nearly stagnant for a few years. That is, until I met Jabra.

Jabra (who works for Rapid7) is a badass Perl developer, at least compared to yours truly. He completely re-wrote Fierce, taking in my wish-list and a whole new set of features he wanted, like XML support to quickly integrate with nmap, and all kinds of other stuff. Hopefully sometime next week we’ll have a released version. In the meantime, please go check out the beta of Fierce 2.0. Feedback is welcome!
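For readers who haven’t used Fierce, the core idea is simple brute-force hostname resolution. Here’s a minimal sketch of that technique in Python - the domain and wordlist are illustrative stand-ins, not Fierce’s actual lists, and the real tool does far more (zone transfer attempts, reverse sweeps of nearby IP space, etc.):

```python
# Brute-force subdomain enumeration: the basic technique Fierce automates.
import socket

def enumerate_subdomains(domain, wordlist):
    """Try to resolve candidate hostnames; return the ones that exist."""
    found = {}
    for name in wordlist:
        host = f"{name}.{domain}"
        try:
            # gaierror means the name doesn't resolve; anything else is a hit.
            found[host] = socket.gethostbyname(host)
        except socket.gaierror:
            pass
    return found

if __name__ == "__main__":
    hits = enumerate_subdomains("example.com", ["www", "mail", "dev", "vpn"])
    for host, ip in sorted(hits.items()):
        print(host, ip)
```

Even a crude loop like this turns up a surprising amount of attack surface, which is why the accuracy of the wordlist and the follow-on IP-range scanning matter so much in the real tool.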

Windows Help Centre Vuln

Thursday, June 10th, 2010

Updated: clarified some points of contention.

Early this morning Google’s Tavis Ormandy published a vulnerability in the hcp protocol handler. It allows the attacker to run arbitrary commands as the user. In practice it created a lot of alerts and warnings for me - but the XP install I was using is somewhat locked down, so I’m not sure how practical this attack would be over any other attack that causes an alert, as the article mentions. Later, his report says it works around the alerts (I couldn’t reproduce that, but that was his intention). Either way, this is some pretty amazing research. However, there are some odd things about this that really struck me the wrong way.

Google has been the loudest proponent of responsible disclosure in the past. But if you look at the dates in his post, he says he reported it to Microsoft on the 5th of June (a Saturday), who responded the same day. He sent the advisory out early this morning, the 10th of June - meaning Google gave Microsoft less than 5 days to respond to his demand to have it fixed in 60 days. Even Mozilla backed down from a 10-day turnaround, and they’re only running a single software suite. And it’s not like Tavis was acting on his own - he credits lcamtuf, who works at Google, for his help. So apparently it’s okay for Google’s employees to go full disclosure, but not for other researchers. The hypocrisy is amazing.

See, here’s the big problem. Either you are all about full disclosure (which is happening less and less these days), or you use it only when you know the company won’t react otherwise or does all kinds of other hinky things behind your back (the same reason I advocate full disclosure against Google), or you use responsible disclosure. Google says it adheres to responsible disclosure, but at the same time they agree to a 60-day patch cycle for exploit code that Google’s researchers themselves created! From Google’s own website:

This process of notifying a vendor before publicly releasing information is an industry standard best practice known as responsible disclosure. Responsible disclosure is important to the ecology of the Internet. It allows companies like Google to better protect our users by fixing vulnerabilities and resolving security concerns before they are brought to the attention of the bad guys. We strongly encourage anyone who is interested in researching and reporting security issues to observe the simple courtesies and protocols of responsible disclosure. Our Security team follows the same procedure when we discover and report security vulnerabilities to other companies.

… except when you don’t. Then Tavis puts a patch up on a domain that, no offense to Tavis, sounds sketchier than a lot of malware sites out there (http://lock.cmpxchg8b.com). There is evidence that it doesn’t even work in some cases, but it does appear to work against the one PoC Tavis put up in the test I ran. I don’t know, the whole thing just rubbed me the wrong way. But at least now no one has to pretend to do responsible disclosure with Google just because it’s the right thing to do - they don’t use it themselves. Even when MS finds a vuln in Google, they do so responsibly. I don’t mean to say anything bad about Tavis, because he’s probably a good guy with a lot of skill. But let’s stop pretending Google’s team is chivalrous, shall we? Let’s see what Google does when one of their own breaks their stated policies, whether the researcher is working on their own time or not.