
Symbiotic Vs Parasitic Computing

Whelp, I just concluded my rather long trip to Boston for the first annual SourceBoston conference. I actually had a great time. There were a few bugs in the conference here and there, but that’s not unexpected for any first-time conference (totally forgivable). I expect it to grow quite a bit in the future given the quality of the talks I heard there. The talks were almost without exception technical or business oriented enough that I had a difficult time picking and choosing which ones I wanted to go to. So much so that this was the very first time I’ve ended up buying a talk after the conference was over. I’m glad I did too. The telephone defenses talk by James Atkinson was excellent - long (2.25 hours) but excellent.

The keynotes were all great, but the one that really stood out in my mind was the talk by Dan Geer. It was a fast-paced speech with a lot of specifics about the parallels between genetics/biology and viral/worm propagation. Later that evening a bunch of the more technical/business people were asked to go to a dinner with Polaris Ventures. There I got a chance to speak with Dan a little more and gently chastised him for not mentioning Samy, which I think intrigued him.

After the dinner, I began thinking about one concept that was mentioned during Dan’s talk and was echoed a number of times during the dinner and then again the next day. The concept was that on some networks we are seeing a shift from parasitic malicious software to symbiotic malicious software. That is, some of the best maintained machines are run by attackers: they are up to date, patched, the logs aren’t writing off the end of the disk, etc… The bad guys take a great deal of pride and care in their machines, trying to keep other hackers off the boxes. They end up doing quite a bit of defense to ensure they keep the ability to maintain and control the box.

Okay, here’s where it gets really weird. Now let’s take that to its next logical, bizarre conclusion - often machines are healthier with an attacker on them than without. In fact, bad guys really do their best to make sure they aren’t saturating the connection or making the machine too slow. It’s in their best interest to have their control over the machine persist. It’s a strange malicious harmony, where the host is fed and nourished by the aggressor organism. But that only really makes sense when an attacker has access to the system and can control the inner workings of the computer itself. Are there any parallels between this and web application security?

I’ve thought about this quite a bit over the last few days and I can’t come up with a single real world instance where a malicious attack that doesn’t involve some level of OS compromise has been healthy for the host system in any way. Once you begin talking about remote file includes and command injection it’s the same thing as a typical worm or virus scenario, where the attacker may take a great deal of time and energy to protect the system, leaving only small back doors for themselves to recover control at will. But the compromises that don’t allow for any form of system/OS level control have only damaging effects - XSS worms, CSRF, SQL injection, etc… None of them has any positive effect on the host system.

But each of those classes of vulnerabilities also shares one other thing: with the exception of persistent XSS/HTML injection or changes to the database, they have very little longevity compared to OS compromises. Adding an account to a database clearly can persist indefinitely, but more often than not a bad guy is less interested in adding to a database than in extracting information out of it, meaning longevity is an afterthought. Persistent XSS that performs overtly malicious actions is typically uncovered quickly because of the real-time effects it has on the normal user base.

With that in mind, it would appear that the less persistent the attack, the worse it is for the host system in terms of any tangible side benefits the attacker can bring to the table. There is little to no symbiosis in these forms of web application attacks. Now clearly you’d say it’s better to suffer an XSS attack than a full computer compromise, and I’m not naive enough to think bad guys really care about the system; rather, their interest lies in maintaining control over their assets, which is an issue of economic loss aversion. But I definitely think there are some interesting side effects of malicious attacks here that are worth thinking through a little more.

Further, one rather interesting argument I’ve heard for protecting machines is increased OS longevity, which saves enterprises money by avoiding the productivity loss incurred when machine re-installs are required. This really becomes a problem when rampant spyware slows a machine down to the point where it is almost unusable, or even unstable because of the poor quality of the spyware code, forcing the owner to complain to their IT department. In that case it’s parasitic software. So perhaps what we are really interested in is stopping parasitic software and encouraging symbiotic software to lower IT costs. As long as the machine contains nothing sensitive, it could end up driving down support costs - assuming there were no other negative effects, like having your IP space black-holed for sending out too much spam. It sounds crazy, I know.

Jeremiah and I were talking about this with Dan from Polaris yesterday, though. If you could somehow get someone to use XSS as a way to farm out compute time for processor intensive operations, you could potentially sell that to people who have high levels of computational need (a rough sketch of what that might look like is below). I like that idea less than offering users a free download that borrows their processor or a slice of bandwidth when the machine is not in use (symbiotic), in trade for hardening the machine against purely malicious (parasitic) attackers. We’ve seen things like this in the past - SETI@home and the RC5 distributed cracking effort - but I consider both of those to be parasitic, offering little to nothing to the host machine. Interesting thought, anyway…
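
Purely as an illustrative sketch, here is roughly what a browser-side “work unit” might look like - whether it arrives via an injected script or an opt-in download, the computation itself is the same. The coordinator URL, the work-unit format, and the function names below are all invented for illustration; treat this as a sketch of the general shape, not a real protocol.

    // Hypothetical browser-side "work unit" client (TypeScript). Everything here -
    // the endpoint, the JSON shapes, the task itself - is made up for illustration.

    interface WorkUnit {
      id: string;
      rangeStart: number; // inclusive
      rangeEnd: number;   // exclusive
      target: number;     // composite number to search for factors
    }

    interface WorkResult {
      id: string;
      factors: number[];
    }

    const COORDINATOR = "https://coordinator.example/work"; // placeholder endpoint

    // CPU-bound task: trial division over the assigned range.
    function findFactorsInRange(unit: WorkUnit): number[] {
      const factors: number[] = [];
      for (let d = Math.max(2, unit.rangeStart); d < unit.rangeEnd; d++) {
        if (unit.target % d === 0) {
          factors.push(d);
        }
      }
      return factors;
    }

    async function runOneUnit(): Promise<void> {
      // Fetch a slice of work from the coordinator.
      const unit: WorkUnit = await (await fetch(COORDINATOR)).json();

      // Burn the visitor's spare cycles on it.
      const result: WorkResult = { id: unit.id, factors: findFactorsInRange(unit) };

      // Report the answer back.
      await fetch(COORDINATOR, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(result),
      });
    }

    runOneUnit().catch((err) => console.error("work unit failed:", err));

The only real difference between the parasitic (XSS-injected) and symbiotic (opt-in) versions of something like this is consent and throttling - a well-behaved client would only grab work while the machine is idle - while the payload itself is nearly identical.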

11 Responses to “Symbiotic Vs Parasitic Computing”

  1. DoctorDan Says:

    Very interesting concept. It looks completely crazy on paper, but I can definitely see what you’re saying. Something to think about…

    I find it very funny that another Dan worked on worm propagation with a bio-esque view. The logistic model of tracking worms (which I wrote a paper on) lines up closely with r- and K-selected species and carrying capacity (see the numeric sketch after the comments below). Actually, the ideal carrying capacity graph in any bio textbook follows the same model as the one I wrote about (http://docs.google.com/View?docID=dfsfg7gj_16hgjbb9dn). Where can I read more about what Dan Geer talked about? How interesting, to draw parallels between the natural-world sciences and computer sciences.

    -Dan

  2. Mike Says:

    Wouldn’t that be the same as outsourcing parts of your IT department? If your IT department isn’t doing its job, then you should get a new one.

    That said, if anyone wants me to admin their machine remotely and in return allow me to have unlimited access, I’d be willing to do it.

  3. Tim Says:

    DoctorDan: The transcript of his speech is here:
    http://geer.tinho.net/geer.sourceboston.txt

    It was a great speech. The part I found most interesting is that he argued _against_ OS longevity — that we need to be more prepared to give up on infected machines and “nuke them from orbit”. This would mean having our settings and data backed up and ready for a new install.

  4. malkav Says:

    Well, I can’t say I agree, but it may be a pedantic consideration.

    In biology, symbiosis is profitable to both organisms and harmful to neither (parasitism *may* be profitable to both, but is harmful to the host).

    But this definition completely isolates the symbionts from their environment. Alone, a single symbiont (a host computer with a viral caretaker) may not be a great threat to its environment, because its capacity, even for a large server, is quite negligible compared to the total capacity of its environment.

    Now, the symbiont is effectively functioning more efficiently than its uncontaminated counterparts. So of course local administrators (be it an end user or a full-scale sysop team) are *less* likely to look for a problem on a box that is apparently working better (assuming there is no monitoring able to detect the presence of the symbiont).

    This makes for a remarkably silent infection, which would spread very quickly. And assuming the rogue software is well built and well maintained, the detection threshold would be quite high, permitting the symbionts to follow their agenda for quite a long time.

    But precisely, they *are* following their own agenda, be it sending spam, DDoSing random sites, conducting phishing or whatever. Above a certain population threshold, this activity will be harmful - maybe not to the individual hosts themselves, but to their environment - because the additional rogue activity consumes an ever larger share of finite resources (be it computational power, network bandwidth…).

    I think a good analogy would be a microorganism reinforcing its host, making it healthier than ever, but accelerating its metabolism and forcing it to consume more resources. As long as you have a single “supercow” in the herd, it will not be a big problem (and it will be the pride of the peasant). But once your whole herd is made of supercows, consuming every single patch of grass, what will the peasant do?

  5. Awesome AnDrEw Says:

    I agree with DoctorDan. This is an interesting concept to ponder, but as you point out neither XSS nor compromised operating systems are “healthy”, and the only reason such boxes are well maintained after being taken over is that the computer becomes an asset to the attacker.

  6. Awesome AnDrEw Says:

    @Tim
    But don’t most companies with even a mediocre IT team understand this concept already (O.S. longevity)? I spoke with a technician at a large engineering firm, and was told that they don’t dedicate too much time to infected systems, and if it takes them longer than 20 minutes to clean the O.S. of any malware they simply wipe the drive and install an image they’ve created of their optimized default settings.

  7. tester Says:

    I’m leery of taking the bio-virus/computer virus analogy too far. Analogy is not reality. But I know there are many similarities. I am not a virus expert, BTW.

    So where is the boundary? Where do biological virus behavior and computer virus behavior diverge, and why?

  8. Jake Reynolds Says:

    Another interesting bio-analogy is endosymbiosis. That is, parasites with interests parallel to their hosts’ will tend to become indistinguishable from them, e.g. the endosymbiotic theory of the origin of mitochondria, chloroplasts, and possibly some other eukaryotic cell organelles.

    For organisms, this generally means that both the host and the parasite are propagated via the host’s gametes (sex cells). For software, this means that the software fulfills user requests in a timely manner, commensurate with its original intent. From there the software will be used more and developed further (mutated). So long-term exploitative interests (OS compromises) tend to align with the original intent of the host processes.

    However, if the parasite is propagated differently than the host, you get much nastier results. For instance, the lancet fluke’s eggs are excreted in the excrement of livestock. Snails eat the eggs and the larvae develop inside of them. The snails excrete the larvae in mucous balls, which are eaten by ants. The larvae move into the ants’ heads and reprogram the ants’ brains to repeatedly and conspicuously climb blades of grass and attach themselves. Livestock eat the ants, and the lancet fluke larvae live out their lives inside the intestines of the livestock. So the snail catches a cold and the ant dies, because the lancet fluke’s interests deviate so far from those of the intermediary hosts’ genes. I see plenty of correlations here with short-term exploitative interests such as attacks aimed at exploiting web applications. Web application attacks just need the ant long enough to make it get eaten. OS compromises need to keep the host alive and healthy enough to reach longer-term goals.

  9. Jabra Says:

    Here is the text of the talk for those who didn’t attend.

    http://geer.tinho.net/geer.sourceboston.txt

  10. donwalrus Says:

    To your earlier points in this post, I couldn’t agree more. I have spent the last couple of days assisting a client with removing literally DOZENS of bots, trojans and rootkits from their network, due to an incompetent vendor installing a new firewall for them with a “permit any” inbound ruleset…

    These systems (although compromised and part of a pirated-software/porn sharing network) were some of the most difficult to access servers I have encountered in months, in terms of exploiting vulns and gaining root shells.

    It was also extremely difficult to run certain kernel-level processes, due in part to the protective measures of the various bots and kits…. Of particular interest (in the Winders world) is that once exploited, several of these bots disabled the default admin shares, such as c$ and admin$, which can actually be a plus for security management…. It’s amazing how much more vulnerable non-compromised hosts can be sometimes :0

  11. Max Says:

    This is not a new thing; it was common in hacks during the 90s, where compromises were aimed at getting access to a scarce resource (bandwidth, internet connectivity), only for the victims to find that the perpetrators had patched the vulns they exploited and even fixed misconfigurations in the system.

    An obvious issue with this scheme is that you don’t really know what is driving the people owning your boxes.
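
As a footnote to DoctorDan’s comment above: below is a minimal numeric sketch of the logistic (“carrying capacity”) model of worm propagation he mentions. The function name and all parameter values are invented purely for illustration; it simply steps the standard logistic growth equation, dI/dt = r * I * (1 - I/K), forward in time.

    // Hypothetical sketch (TypeScript) of logistic worm spread. I = infected hosts,
    // r = infection rate, K = susceptible population (carrying capacity).
    // All numbers below are made up for illustration.

    function simulateLogisticSpread(
      initialInfected: number,
      rate: number,     // new infections per infected host per time step
      capacity: number, // total susceptible population (carrying capacity)
      steps: number
    ): number[] {
      const infected: number[] = [initialInfected];
      for (let t = 1; t <= steps; t++) {
        const current = infected[t - 1];
        // Simple Euler step of dI/dt = r * I * (1 - I / K), with dt = 1.
        const next = current + rate * current * (1 - current / capacity);
        infected.push(Math.min(next, capacity));
      }
      return infected;
    }

    // Example: 10 seed hosts, r = 0.8, 100000 susceptible hosts, 30 time steps.
    // The output traces the familiar S-curve: a slow start, an explosive middle,
    // and a plateau as the worm approaches the carrying capacity of its pool.
    console.log(simulateLogisticSpread(10, 0.8, 100000, 30).map(Math.round));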