web application security lab

Cross Site Scripting Warhol Worm

Several years ago I read a paper called How to 0wn the Internet in Your Spare Time. Besides being the single best security paper I’ve ever read to come out of a university, it opened the door to a new classification of viral propagation in the security community. The basic premise is this: traditional worms travel very inefficiently. They scan a series of hosts near their own netblock, or start at a single point in the IP space and scan in one direction. When they find a vulnerable host they infect it, and the new copy starts scanning from the same place all over again. As I said, super inefficient. The name “Warhol worm” comes from what Andy Warhol was famous for - “15 minutes of fame”: a worm that could propagate globally in 15 minutes.
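The paper’s fix for that inefficiency is worth sketching. In permutation scanning, every copy walks a shared pseudo-random permutation of the address space from a random starting point, so copies don’t redo each other’s work. The paper suggests encrypting a counter with a block cipher; the sketch below substitutes an odd multiplier mod 2^32 (my own stand-in, since multiplication by any odd constant is a bijection on 32-bit values) purely for illustration - it generates addresses, nothing more:

```python
# Sketch of permutation scanning, the key idea behind Warhol-style worms:
# instead of every copy rescanning from the same starting point, each one
# walks a shared pseudo-random permutation of the 32-bit address space.
# An odd multiplier mod 2^32 stands in for the paper's block cipher here;
# the constant is an arbitrary illustrative choice.

ODD_MULT = 0x9E3779B1  # odd, therefore invertible mod 2^32

def index_to_addr(i: int) -> str:
    """Map a scan index to a dotted-quad IPv4 string via the permutation."""
    x = (i * ODD_MULT) & 0xFFFFFFFF
    return ".".join(str((x >> s) & 0xFF) for s in (24, 16, 8, 0))

# Two copies starting at different indices never duplicate each other's
# work until one wraps all the way around the 2^32-entry permutation.
host_a = [index_to_addr(i) for i in range(0, 5)]
host_b = [index_to_addr(i) for i in range(1000, 1005)]
assert not set(host_a) & set(host_b)
```

The bijection is the whole trick: sequential scanners waste most of their probes on already-covered space, while permutation walkers partition it implicitly with no coordination traffic.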

Now in spite of the great premise of the paper above, it still lacks some reality (I say this after talking with some viral genetics researchers). Two things make a true 15-minute worm infeasible. The first is that it requires users to have their computers on. Typically that means a follow-the-sun model: the fastest a worm can travel is slightly less than the time it takes for every computer on the planet to turn on and be infected (approximately 24 hours). The other problem is network traffic. If every machine in the world is probing for new victims, you can take down huge sections of the network, so you need mitigating factors: make sure only high-bandwidth hosts scan large chunks of the network, and keep infections relatively geographically close to their origin until the next time zone is awake. The first real-world example of a Warhol worm (or flash worm) was SQL Slammer, which used a pseudo-random number generator for propagation.
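Slammer’s generator was much cruder than permutation scanning: each copy just iterated a linear congruential generator and fired a packet at whatever address fell out. A minimal sketch, using the LCG constants widely reported in Slammer analyses (the Microsoft C runtime rand() constants - treat them as illustrative, not a spec):

```python
# Sketch of Slammer-style target selection: iterate a linear congruential
# generator and interpret each 32-bit output as an IPv4 address. The
# multiplier/increment are the values reported in public Slammer analyses.

def lcg_targets(seed: int, n: int):
    """Yield n pseudo-random dotted-quad addresses from an LCG stream."""
    x = seed & 0xFFFFFFFF
    for _ in range(n):
        x = (x * 214013 + 2531011) & 0xFFFFFFFF
        yield ".".join(str((x >> s) & 0xFF) for s in (24, 16, 8, 0))

# Unlike a shared permutation walk, independently seeded copies will
# eventually collide on the same targets: fast, but wasteful and noisy.
sample = list(lcg_targets(42, 3))
```

That wastefulness is exactly why Slammer saturated networks: the scanning itself, not any payload, did the damage - which is the bandwidth problem described above.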

So assuming you could figure these issues out (they aren’t that difficult - I’ll leave them as an academic exercise), how does this affect cross site scripting (XSS)? Let’s look at the MySpace Samy worm for a second. It affected 1MM users in a fairly non-diverse location (mostly users in the United States). 1MM users is a LOT of infected machines, but still not enough. Let’s take it one step further. Let’s pretend for a second that there are users who have accounts on multiple websites similar to MySpace (it stands to reason that a user on MySpace probably has other accounts elsewhere as well). Finding vulnerabilities in multiple platforms should be relatively easy (it has been historically, anyway).

Now let’s say that instead of just attacking MySpace, the worm also attacks another similar social networking site with another significant user base. Suddenly you have an XSS worm that can jump from platform to platform. Take it one step further, and say you find vulnerabilities in social networking platforms located in every time zone around the world. Tie them together and you now have a social networking XSS worm that can leap from platform to platform and infect huge chunks of the global population. Now, take it still one step further and say that we can embed exploits for known open source applications like PHP-Nuke, etc… Scanning the local IP space, or using a search engine with keywords that match a likely candidate for exploit, then connecting the browser to it and attempting the exploit, could make a worm that could theoretically attack nearly every computer on the internet that was used by an internet-facing user.

Instead of affecting 1MM users it could be 1 billion users, and it wouldn’t need much genetic diversity to do that, because it would only have to survive for one day. The ramifications of a worm like that propagating across the internet could be disastrous. The payload could be something as easy as a DDoS, or the largest phishing platform mankind has ever seen, or even something as stupid as flooding the global network for a day (anyone need a vacation day?). Critical infrastructure could not handle additional billions of requests a day (and I doubt the search engines themselves could handle the billions of additional searches being performed), which could easily flood off tons of networks, particularly the smaller ones, even with no payload at all. The cost to businesses could be in the billions.

It might not be 15 minutes of fame, but 24 hours of infamy is probably just as scary. I’m really trying to hold back on my fear-mongering, but this isn’t fiction - it just hasn’t been built (yet).

5 Responses to “Cross Site Scripting Warhol Worm”

  1. RSnake Says:

    Interesting and relevant link:

  2. web application security lab - Archive » JavaScript Malware Talk at Blackhat is a Success Says:

[…] Think about our cross site scripting worm again for a second. If I could get even a small percentage of the SAMY worm to expose their machine to the world (say 10%) we are still talking about 100,000 new machines that are completely exposed to the world. If you take a bigger worm, like a cross site scripting warhol worm, the potential for global compromise is tremendous. It would be virtually free rein. This could be the largest attack vector the world has ever seen, not just to run some JavaScript on a machine, but actually hack millions of users’ home networks. […]

  3. web application security lab - Archive » DefCon Wrapup Says:

[…] The next day I ended up meeting Andrew van der Stock and Dinis Cruz. Dinis and I ended up talking for the better part of the day about genetic algorithms, how an XSS warhol worm would propagate, and how command and control would work. Extremely interesting conversation. I’ll probably write something about this in the not too distant future. We also discussed ways to do better XSS fuzzing against browsers, and the future of web application firewalls. All super interesting and needs further research. I only saw a few talks, because I ended up talking to all the webappsec folks most of the day. […]

  4. web application security lab - Archive » DoSing Search Engines Says:

[…] When Dinis Cruz and I were talking about how viral propagation would work for an XSS Warhol worm, one of the things we discussed is a centralized command and control element. One of the main problems with an XSS worm is that it needs to propagate itself blindly or it needs a central point to control all of the infected machines. The single point of control is the tough part. Eventually I think we settled on stealth over virulence as a concept, but let’s talk about that some other time. […]

  5. Reliable Browser Detection « maluc’s scripting lab Says:

[…] Now, for determining what percentage of readers use each browser to read this blog, it’s not really a big deal if 1 in 50 is miscounted. But when the time comes for the next XSS Warhol Worm, if it needs different browsers to run a different script, a couple bad detections early on can drastically slow its propagation speed. Reliable browser detection is worth incorporating for these cases. Ideally, finding something that is not spoofable ahead of time will suffice. Just about every detection method can be fooled using a GreaseMonkey script or an intelligent proxy to return false data to the worm, but only after they know what the worm’s source is. Since the goal is just to have it execute once, it doesn’t matter if they can trick subsequent executions. […]