web application security lab

Apocalyptic Vulnerability Percentages - FUD 101

I’ve spent a long time in the trenches and recently I’ve been getting more and more jaded - if that’s even possible. I’m sure at least once a week someone in the office hears me utter the nearly completely useless comment, “everything’s broken anyway, who cares?” Now I think it’s time I actually explain myself. In the last decade and a half that I’ve been interested in webappsec I’ve had the opportunity to talk to nearly every self-proclaimed expert in the industry, and more and more I’ve been able to get them to say or admit that “everything is broken.” I think what they mean is that if an attacker takes any system and applies enough resources against it, they will get into it, break it, take it offline, or do whatever else it is they want to do.

I’ve talked to a number of people regarding the percentages of sites they are able to break into or find exploits in. A few years ago we were all collectively hovering around 70-80% (Jer has some good stats on this) - but we were only talking about that in the context of certain classes of webappsec bugs. Could the number be higher? And I don’t mean higher by a few percentage points - I mean approaching 100%? Let’s assume for a moment that there are one or more 0day remote vulns in each of the major web servers out there that we haven’t uncovered - they turn up fairly regularly, so let’s just take it on faith that there is at least one left. Then let’s assume a large number of the remaining sites host insecure applications on top of them (we’re finding that number to be well into the 90% range for anything at all dynamic). Then let’s assume a large percentage of the very small remainder have insecure network configurations (we find that number alone to be around 70%). Then let’s assume the server providers or administration paths are vulnerable to physical wiretapping, or to direct exploitation of the underlying DSL modems/routers of the people who administer the site. Then let’s talk about DNS, or router/firewall exploits, ASN.1 and so on. Then let’s talk about man-in-the-middle exploits, browser exploits, mail exploits, instant messaging exploits, exploits against mobile phones and so on and so on… And let’s not forget social engineering. None of which are covered by that original 80% that I think we were all talking about a few years ago.
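
If you want to see why stacking layers pushes the number toward 100%, here’s a quick back-of-the-envelope sketch in Python - the per-layer percentages are made up purely for illustration, and it crudely assumes the layers fail independently:

    # Back-of-the-envelope: chance that at least one layer is exploitable.
    # The per-layer odds below are illustrative assumptions, not measurements,
    # and the layers are (crudely) treated as independent.
    layer_odds = {
        "insecure web application": 0.90,
        "insecure network configuration": 0.70,
        "unpatched web server 0day": 0.10,
        "admin-side exposure (DNS, modems, social engineering...)": 0.30,
    }

    p_every_layer_holds = 1.0
    for layer, p in layer_odds.items():
        p_every_layer_holds *= (1.0 - p)  # chance this particular layer holds

    print("Chance something breaks: %.1f%%" % (100 * (1 - p_every_layer_holds)))
    # With these made-up numbers the result is already about 98%.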

Remember, before, we were at 80% and that was bad enough. In fact, you may all remember the Joel Snyder comment that there is no way anyone could exploit 70% of sites. I think he and others like him felt that 70% was apocalyptic and that Acunetix was simply spreading marketing FUD. But what if the number was really worse? And I mean a lot worse. What would people say? What would people think? Would they stop consuming? No - which is why I don’t think talking about it is FUD, or at least not particularly effective at getting consumers to understand reality. But more importantly, who cares? If it’s all broken anyway, why do we keep releasing patches for things that reside on top of a critically broken infrastructure, while far more new products, features and services appear on a daily basis - each with their own holes?

Consumers will keep consuming, companies will keep patching, hackers will keep hacking - nothing will change because of this post or any great realization of how broken things really are. Does that mean I’m throwing up my hands and giving up? Of course not, it’s my livelihood. But it does mean that I’m not that interested in new exploits, as they are just another way to exploit something. That may be interesting to an outsider who isn’t properly initiated, but I think if you spend enough time talking to experts you too may come to the same realization I did. And the point of that realization is not to spread an apocalyptic view of the Internet, given that I know consumerism will win over any security flaws.

Many of the CISOs I talk to mention esoteric bugs as their top concern, and I have to stop them and explain how unlikely it is that they’ll be hit by that specific kind of exploit - and how incredibly likely it is they’ll be hit by something mundane that’s been out there for years. It’s less sexy to talk about, but we still haven’t found good solutions to problems we’ve known about for 10+ years. As a simple example - why are we still using IPv4, DNS, telnet, FTP and HTTP when we have IPv6, DNSSEC, SSH, SCP and HTTPS? Again - I don’t want to sell FUD, I actually just want to stop talking about percentages. The truth is, if you have something interactive connected to the Internet, it’s probably exploitable in some way, and really, that’s not that terrible a thought considering it’s pretty much always been that way. If you want my advice, take a cue from the military and air gap anything you don’t want broken into. And with that downer, I hope you’re having a good weekend.

19 Responses to “Apocalyptic Vulnerability Percentages - FUD 101”

  1. Jason Dean Says:

    You bring up something that I have been wondering about for a while. You mentioned, why are we using HTTP?

    My question is, is there any reason not to use SSL on an entire website? Are there performance concerns? Is there any reason that I should not forward my http:// request to https:// if I have a certificate?
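
    Something like this is what I mean by forwarding, just as a rough Python sketch - it assumes the real site is already being served over HTTPS on port 443:

        # Rough sketch: answer every plain-HTTP request with a redirect to HTTPS.
        # Assumes the real site is already listening on 443 with a certificate.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class RedirectToHTTPS(BaseHTTPRequestHandler):
            def do_GET(self):
                # fall back to a placeholder host if no Host header was sent
                host = self.headers.get("Host", "example.com").split(":")[0]
                self.send_response(301)  # permanent redirect
                self.send_header("Location", "https://" + host + self.path)
                self.end_headers()

        HTTPServer(("", 80), RedirectToHTTPS).serve_forever()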

  2. mp Says:

    @Jason
    Encrypting something incurs a performance hit.

    Browser compatibility isn’t really an issue anymore.

    Excellent post by the way. I’ve tried to explain the “everything is broken” concept to clients before, with little success.

  3. anon Says:

    performance concerns and/or hardware costs prevent this

  4. ChrisP Says:

    Sure, HTTPS imposes a performance penalty, particularly during the initial connection setup (the handshake), which uses public-key cryptography. After that it’s symmetric-key encryption, which is much cheaper computationally. However, there are many powerful hardware-based SSL proxies available on the market that can help offload the (en|de)cryption tasks from the server.
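
    Just to illustrate the split (a rough Python sketch against a placeholder host): the expensive public-key work happens once during the handshake, and the bulk traffic afterwards is protected by a much cheaper symmetric cipher, which you can see in what the connection negotiates:

        # Rough sketch: open one TLS connection and print what was negotiated.
        # "example.com" is just a placeholder - substitute any HTTPS site.
        import socket, ssl

        ctx = ssl.create_default_context()
        with socket.create_connection(("example.com", 443)) as raw_sock:
            with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls:
                name, proto, bits = tls.cipher()
                # The public-key work is confined to the handshake; 'name' is the
                # symmetric cipher protecting the bulk data from here on.
                print(proto, name, bits)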

    But how is HTTPS helping in any way? If you have an XSS hole, HTTPS isn’t going to help. The same goes for SQL injection. Your site displays verbose error messages with code snippets when it “crashes”? HTTPS doesn’t help there either. Got a CSRF? HTTPS is helpless again. The only benefit it provides is to make your data immune to man-in-the-middle tampering.

    If you consider (per the WhiteHat stats) that information leakage and XSS account for 94% of the threat pie, how is HTTPS relevant?

  5. rvdh Says:

    IPv6 isn’t that secure either, btw - at least not the insecure implementations in operating systems.

    It’s about time developers started following what standards we have (the RFCs), following experts in their respective fields, and using general common sense. I think that percentage will decrease when developers start doing that.

    Ultimately, yes, nothing is safe. A compromise, in my eyes, is only the difference between the attacker’s concentration and amount of time, versus how much fire the material your security is built on can withstand.

  6. RSnake Says:

    @ChrisP - if you read the rest of my post, you can see that I covered those concerns. Those comments were _only_ examples of how many things have been solved forever, but still aren’t completely implemented globally.

  7. mckt Says:

    @ChrisP: HTTPS may not do much against XSS, SQLi, etc, but it prevents most random sniffing, session stealing, and MITM attacks.

    I know I’m not the only one who fires up Kismet every time he joins an open network.

    @RSnake: This was basically the point of my post last week downplaying the importance of the Clickjacking exploits - everything is already broken… so what? But it’s not all a downer - the positive thing is that once we realize this, we can start building systems with the implicit assumption that they are going to fail.

    I think the important thing isn’t necessarily protecting the data - it’s knowing when it got leaked and what happened to it. Of course, we should put the necessary effort into securing an app, but very few web apps that I’ve seen even have an audit log to track logins, admin actions, etc., and in those that do, it never gets reviewed. The leaks that nobody finds out about scare me a lot more than the highly publicized ones.
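
    Even a bare-bones, append-only log would be a start - something as simple as this sketch (the event names, fields and file path are purely illustrative, not from any real app):

        # Bare-bones sketch of an append-only audit trail for logins/admin actions.
        import json, time

        def audit(event, user, **details):
            record = {"ts": time.time(), "event": event, "user": user, **details}
            with open("audit.log", "a") as log:
                log.write(json.dumps(record) + "\n")

        # e.g. audit("login_failed", "alice", source_ip="203.0.113.7")
        #      audit("admin_action", "bob", action="password reset for user 1234")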

  8. Zach Says:

    Of course, even with an air gap, a really determined attacker could always break into your house…

  9. RSnake Says:

    @Zach - You’re proving my point exactly.

  10. Jason Says:

    HTTPS can also be more difficult to analyze for exploitation attempts if you aren’t able to sniff it when it is decrypted.

    It’s more about risk management and risk acceptance than “everything is broken.”

    It really depends on what you mean by “broken” and whether the cost of preventing an exploit is justified by the value of the asset being protected.

  11. MAdhaTTer-240 Says:

    Has this not always been the case? I may not be as old as you ;), but even back when I first started getting into security some ~8 years ago, the first thing I learned was that anything can be broken when someone cares enough to make it happen. I expect that will never change.

    It has always been about raising the bar. I will rest happy even if some uber-code expert who holds a grudge against me for some unknown reason decides to scour the source code of some daemon I run, finding and exploiting a buffer overflow. After all, that is what chroot is for ;)

    You know why I will rest happy? Because it took some of the best to take something of mine down.

    Now, having your VPN concentrator and Cisco routers’ HTTP(S)/SSH ports open to the Internet… as some supermarket stores *cough* do… does not reflect on me. However, it does pose a great business opportunity ;)

    Sometimes a supermarket needs to be breached twice to really feel the burn…

    I suspect that things will have to get very bad before they get very good. Who knows, maybe those Internet-based information wars will kick it off…

  12. http://www.eradicatespyware.net Says:

    The above article discusses good technical material regarding DNS, router and firewall security, and of course browser security is dealt with nicely too. Today a large share of the DNS servers in the world are prone to malware or cyber attacks and get infected, and even browsers are prone to the “latest clickjacking”.

    regards..

  13. LX Says:

    Well, I don’t agree about “everything is broken”. Nowadays a vulnerable program compiled with /GS and SafeSEH would be secure enough to avoid command execution. That means buffer overflows are dying (well, almost dead), and they were the big kahuna. Then would come the time of heap overflows, then that of web bugs (more and more filtering and blocking options, etc.). Little by little, every class of vulnerability is being closed. Where are the new ones? Nowhere. The problem is that hackers, researchers, etc. cannot find new classes of attacks/vulnerabilities - new methods of hacking. The latest was XSS. Everyone is searching the same paths…

  14. Peanut Says:

    Look at the world though, is anything really safe there?
    Have banks never been robbed?
    Have high-security safes never been cracked?

    The world isn’t safe, so a mish-mash of networks we call the Internet, running many different pieces of software with all the anonymity the Internet provides, is hardly going to be safe either.

  15. LonerVamp Says:

    “…consumerism will win over any security flaws.”

    That’s true! Not only will consumerism win, but so too will business requirements. If the business wants something done, they’ll get it done, security issues or not!

    For us geeks, the concept that everything is broken is not hard to grasp. In security, that is a healthy, basic assumption! But business rarely accepts that sort of stance. It is hard enough to explain risks and probabilities and some score on how secure you are or not.

    Nice post! Yes, it sounds doom and gloom, but that’s our therapy, you know? As security-minded geeks, this is what stresses us and this is how we release it. We all have to regularly revisit the zen well of inevitable insecurity!

  16. Jay Says:

    Security is like a toaster. (stick with me here)
    Seriously, think about it. Will a toaster ever break? Yup, it sure will. What makes a good toaster? That it can take a beating long enough to make it worth the price a person pays for it.
    Companies test toasters and put them through stress tests to determine their quality and to find any glaring defects. If defects are found the costs to fix are weighed against the impact to sales.
    Anyone who codes or sets up security to avoid the *possibility* of a breach is most likely wasting time, money and effort. Focus on the most probable threats and beef up around them: improve reaction time and time-to-recover, and reduce the impact. Trying to avoid a breach 100% of the time is useless.

  17. Mark Says:

    Great post! Brings up the practical side of security that consultants/vendors/auditors don’t ever talk about.

    You ask us to assume that there is, and always will be, close to a 100% chance of breaking widget X, which I agree with, but you also state that you’re not interested in new exploits, which I disagree with.

    In a consumerist and capitalist culture where competitive advantage, perception and reputation are everything, ensuring that you spend *just* enough effort to protect against the majority of exploits is the key. And this is what we should be advising people of - not the delusion of 100% security.

    Metaphorically speaking, if the bear is chasing all of us, I only really need to be better than one other person.

  18. Anon Says:

    This realisation is something I came to about 3 years ago. As much as I love security, it’s probably why I have second thoughts about my profession all the time.

  19. NM Says:

    Great post, and as an intermediate user concerned with security - not even as an expert like yourself - I long ago realized the futility of constant security maintenance… I run a firewall, keep my OS patched, limit open ports, don’t open any mail that my aunt sends with “Fw: fwd: fwd:” in the subject line, and try to be vigilant about what programs I install/run… but despite my security vigilance, I no longer worry about maintenance. I don’t, for instance, run anti-virus, as my anti-malware effectively subsumes it. And I don’t stick to paid-for software, as open source is almost always just as safe, if not more so - especially given certain companies’ attitudes, “Patch Tuesday” being perhaps the biggest joke ever conceived.

    Eventually, any system will fall prey to some attack: if one exploit does not work, try another, be it remote, native, application- or kernel-layer… The trick is to minimize risk, not eliminate it. And in that prevention lies reality, as well as your continued job security.

    Interesting blog.