I’ve spent a long time in the trenches, and recently I’ve been getting more and more jaded - if that’s even possible. At least once a week someone in the office hears me utter the nearly useless comment, “everything’s broken anyway, who cares?” Now I think it’s time I actually explained myself. In the decade and a half that I’ve been interested in webappsec I’ve had the opportunity to talk to nearly every self-proclaimed expert in the industry, and more and more I’ve been able to get them to say or admit that “everything is broken.” I think what they mean is that if an attacker takes any system and applies enough resources against it, they will get into it, break it, take it offline, or do whatever else it is they want to do.
I’ve talked to a number of people about the percentage of sites they are able to break into or find exploits in. A few years ago we were all collectively hovering around 70-80% (Jer has some good stats on this) - but we were only talking about that in the context of certain classes of webappsec bugs. Could the number be higher? And I don’t mean higher by a few percentage points - I mean approaching 100%? Let’s assume for a moment that there are one or more 0day remote vulns in each of the major web servers out there that we haven’t uncovered - they happen fairly regularly, so let’s just take it on faith that there is at least one left. Then let’s assume a large number of the remaining sites host insecure applications on top of them (we’re finding that number to be well into the 90% range for anything at all dynamic). Then let’s assume a large percentage of the very small remainder have insecure network configurations (we find that number alone to be around 70%). Then let’s assume the hosting providers or administration paths are vulnerable to physical wiretapping, or to direct exploitation of the underlying DSL modems/routers of the people who administer the site. Then let’s talk about DNS, router/firewall exploits, ASN.1, and so on. Then let’s talk about man-in-the-middle attacks, browser exploits, mail exploits, instant messaging exploits, exploits against mobile phones, and so on and so on… And let’s not forget social engineering. None of these are covered by that original 80% that I think we were all talking about a few years ago.
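To make the compounding concrete: if you treat each layer as an independent chance of compromise (a simplifying assumption on my part - these layers are not truly independent, and the figures below are just the rough ones quoted above, not measured data), the combined odds land very close to 100%:

```python
# Rough, illustrative numbers from the discussion above - not measured data.
# Assumes each layer is an independent chance of compromise (a simplification).
layers = {
    "webappsec bug": 0.80,       # the old 70-80% figure
    "insecure app stack": 0.90,  # "well into the 90% range"
    "network misconfig": 0.70,   # "around 70%"
}

# A site survives only if it survives every single layer.
survival = 1.0
for name, p in layers.items():
    survival *= (1.0 - p)

combined = 1.0 - survival
print(f"Chance at least one layer is exploitable: {combined:.1%}")  # 99.4%
```

And that is before counting DNS, routers, browsers, mail, phones, or people - each additional layer only pushes the number closer to 100%.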
Remember, before, we were at 80%, and that was bad enough. In fact, you may all remember the Joel Snyder comment that there is no way anyone could exploit 70% of sites. I think he and others like him felt that 70% was apocalyptic and that Acunetix was simply spreading marketing FUD. But what if the number were really worse? And I mean a lot worse. What would people say? What would people think? Would they stop consuming? No - which is why I don’t think talking about it is FUD, or at least not a particularly effective way of getting consumers to understand reality. But more importantly, who cares? If it’s all broken anyway, why do we keep releasing patches for things that sit on top of a critically broken infrastructure, while far more new products, features and services appear daily - each with their own holes?
Consumers will keep consuming, companies will keep patching, hackers will keep hacking - nothing will change because of this post or any great realization of how broken things really are. Does that mean I’m throwing up my hands and giving up? Of course not, it’s my livelihood. But it does mean that I’m not that interested in new exploits; they are just another way to exploit something. That may be interesting to an outsider who isn’t properly initiated, but I think if you spend enough time talking to experts you too may come to the same realization I did. And I say that not to spread an apocalyptic view of the Internet - I know consumerism will win out over any security flaws.
Many of the CISOs I talk to mention esoteric bugs as their top concern, and I have to stop them and explain how unlikely it is that they’ll be hit by that specific kind of exploit - and how incredibly likely it is that they’ll be hit by something mundane that’s been out there for years. It’s less sexy to talk about, but we still haven’t found good solutions to problems we’ve known about for 10+ years. As a simple example - why are we still using IPv4, DNS, telnet, FTP and HTTP when we have IPv6, DNSSEC, SSH, SCP and HTTPS? Again - I don’t want to sell FUD; I actually just want to stop talking about percentages. The truth is, if you have something interactive connected to the Internet, it’s probably exploitable in some way - and really, that’s not such a terrible thought, considering it’s pretty much always been that way. If you want my advice, take a cue from the military and air gap anything you don’t want broken into. And with that downer, I hope you’re having a good weekend.