web application security lab

Archive for the 'SEO/SEM' Category

CSRF Isn’t A Big Deal - Duh!

Wednesday, April 14th, 2010

Did you hear the news? CSRF isn't a big deal. I just got the memo too! A few posts pointed me to an article arguing that CSRF isn't that big of a deal. Fear not, I am here to lay the smack down on this foolishness. To be fair, I have no idea who this guy is, and maybe he's great at other forms of hacking - web applications just don't happen to be his strong point. Let's dissect the argument, just to be clear:

Even with some of the best commercial Web vulnerability scanners, it’s very rare that I find cross-site request forgery (CSRF). That doesn’t mean it’s not there. Given the complexity of CSRF, it’s actually pretty difficult to find.

Huh? It's difficult to find with a scanner, so therefore it's difficult to find, period? Noooo… almost every single form on the internet is vulnerable to it unless it's using a nonce. Just because scanners have a tough time dealing with it doesn't mean it's hard for a human to find. If you put down your scanner and do a manual pentest once in a while you'll find that nearly every site is vulnerable to it in multiple places (sites built on .NET with encrypted ViewStates are about the only ones that don't regularly have this problem out of the box).
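For the record, the standard defense is exactly that nonce: tie a random token to the user's session, embed it in every form, and reject any state-changing request that doesn't echo it back. Here's a minimal, framework-agnostic sketch - the function names and session shape are my own invention, not any particular framework's API:

```python
import hmac
import secrets

# Issue a per-session anti-CSRF token and verify it on every
# state-changing request. A sketch only; a real app would wire this
# into its session and form-rendering layers.

def issue_token(session):
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token  # embed as <input type="hidden" name="csrf_token" value="...">

def verify_token(session, submitted):
    expected = session.get("csrf_token")
    if not expected or not submitted:
        return False
    # Constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(expected, submitted)
```

An attacker's forged form can't include the token because they can't read the victim's session or page, so the forged request fails the check.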

The good news is it’s even more difficult to exploit CSRF which essentially takes advantage of the trust a Web application has for a user.

What the?! Difficult to exploit? If writing HTML and/or JavaScript is difficult, sure. However, if you have even the slightest idea of how to create a form and a one-liner of JavaScript to submit it - or, even worse, a single image tag in a lot of cases - it's not difficult. It's not even mildly challenging. The only hard part is getting the user to click on the page with the payload, but even that should be kitten play in almost all cases through web-boards, spear phishing and the like. Getting people to click on links is insanely easy. Maybe I'm not getting the difficult part. Also, that is a terrible way to think about CSRF - it's not always about trust, it's about getting another user to commit an action on your behalf. Trust is only involved in some instances of CSRF; there are many, many examples that have nothing to do with user credentials.
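To illustrate just how little is involved, here's a sketch that generates that form-plus-one-liner payload page. The target URL and field names are entirely hypothetical - the point is that the whole "exploit" is a few lines of templated HTML:

```python
# Generate a proof-of-concept CSRF page: a hidden form that submits
# itself the moment the victim's browser renders it. The action URL
# and parameter names below are made up for illustration.

def csrf_page(action, fields):
    inputs = "".join(
        '<input type="hidden" name="{0}" value="{1}">'.format(k, v)
        for k, v in fields.items()
    )
    form = '<form id="f" method="POST" action="{0}">{1}</form>'.format(
        action, inputs
    )
    script = "<script>document.getElementById('f').submit()</script>"
    return "<html><body>" + form + script + "</body></html>"

page = csrf_page(
    "https://victim.example/admin/add_user",
    {"user": "attacker", "role": "admin"},
)
```

For GET-based actions it's even shorter: a single `<img src="...">` tag does the job, no script required.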

So, based on what I’m seeing in my work I don’t think CSRF is as big of a deal - or perhaps I should say as top of a priority.

No, it's not top priority compared to something like SQL injection or command injection. But yes, it's very much a big deal. Last week I did an assessment where one of the CSRF attacks would allow me to create a new admin user in the system. A huge percentage of the fraud on the Internet (TOS fraud, not actual hacking) is related to CSRF abuse (click fraud, affiliate fraud, etc…). We're talking about hundreds of millions of dollars lost to a single class of exploit, and that's only counting those two variants. Like lots of exploits, it totally depends on the problem at hand. Sorry, folks, CSRF is not getting downgraded because a piece of software can't find the issue for you.

.EDU Hacks And Ambulance Chasing

Monday, January 25th, 2010

I struggled a lot with this over the last few weeks as I thought about it more and more. I've known for a very long time that the SEO guys were hacking .edu websites to increase their pagerank for keywords. By getting .edu sites (which rank higher than .com, for instance, because the domains are old and highly connected) to link to a site with the right keywords, Google is tricked into thinking the site is of higher value. Yes, Google's algorithm really is that simple to game, which is why there is a lot of garbage in their index now. It just took a while for the bad guys to accumulate a large enough mass of hacked sites.

So I started messing around with search strings that would help me identify likely hacked sites, and poof - within a few minutes I had dozens upon dozens of high value compromises for keywords like viagra, cialis and phentermine.

There are millions of variants of these keyword phrases and their ilk across far greater masses of domains, but this should give you an idea of what's possible. Some of them are truly, amazingly bad. So I took it upon myself to start emailing a few that weren't on this list but that were just as bad. You may or may not be surprised that I got almost no responses whatsoever. In fact, the only response I did get accused me of spamming and/or ambulance chasing. Ugh! Talk about a way to make a guy want to quit being a good citizen.

But this brings up an interesting problem. Who exactly are the Internet cops? Some would argue that StopBadware, which is heavily sponsored by Google, is the equivalent. But it clearly sucks, given that all of these were found within Google's own index. What is the right way to alert a company that they've been compromised? Is it even worth bothering? Is my own site going to be viewed as a spam site with links like those above? What an ugly problem!

DNS Rebinding for Scraping and Spamming

Wednesday, November 18th, 2009

Okay, last post about DNS Rebinding and then I'll (probably) shut up about it for a while. If you haven't already, please read posts one and two for context. As I was thinking about the best possible uses for DNS Rebinding, I landed on something that is extremely practical for botnets, email scrapers, blog spammers and so on. One of the largest problems for most attackers/spammers is that they need to scrape the search engines for targets. The only way to do that is to send a massive amount of traffic at them, and if they use a small set of machines they also make themselves easy to block or subvert. Google typically tries to stop robots from scraping by showing a CAPTCHA. Wouldn't it be easier and better if the attacker/spammer could use other people's IP addresses? That's the promise of DNS Rebinding, now isn't it - unauthenticated cross domain read access from other people's computers.

David Ross had a good post about how another practical defense against DNS Rebinding is SSL/TLS, but since Google has opted not to secure their search engine, it becomes possible to use DNS Rebinding for its next logical purpose. Google hasn't even fixed their other SSL/TLS woes, so there's pretty much no chance they'll secure the search engine any time soon. So DNS Rebinding gives the attacker IP diversity: an attacker can use it to get other people to rip tons of information from Google without Google being able to block the real attacker. Since sites like Google neither respect the host header nor use SSL/TLS, an attacker can scrape information from these sites all they want - all the while using other people's browsers. Now think comment spamming, polling fraud, brute force, and on and on… All of these become extremely easy and practical by burning other people's IP addresses instead of the attacker's/spammer's. Yes, DNS Rebinding is nasty, and unless the browser companies do something, or every attacked web server on earth starts respecting the host header and/or using SSL/TLS, it's a problem that's here to stay.
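The host-header defense is trivial to implement server-side: reject any request whose Host value isn't a hostname you actually serve. After a rebind, the victim's browser still sends the attacker's domain in the Host header, so the check fails. A sketch, with placeholder hostnames:

```python
# Reject requests whose Host header doesn't match a hostname we
# actually serve. After a DNS rebind, the browser still sends the
# attacker's hostname in Host, so this check defeats the read-back.
# The allowed hostnames below are placeholders.

ALLOWED_HOSTS = {"www.example.com", "example.com"}

def host_is_valid(host_header):
    if not host_header:
        return False
    hostname = host_header.split(":", 1)[0].lower()  # strip any :port
    return hostname in ALLOWED_HOSTS
```

The catch, as the post says, is that every server would have to do this - and sites serving many virtual hosts off one IP often can't just hardcode a short list.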

I know a lot of people think this is a complicated technique, but it's really not that hard. It just requires some JavaScript (similar to BeEF or XSS Shell), a place to log whatever the user saw when the attacker forced them to perform the action, a hacked up DNS server (like the simple DNS Rebinding server sample), a domain, a firewall that is somehow linked to the attacker/spammer application, and some Internet traffic to abuse. None of these things are out of reach for a decently skilled attacker. Anyway, I doubt it's getting fixed anytime soon, which means DNS Rebinding essentially allows nearly free rein for attackers and spammers for the foreseeable future - and no one appears to be doing anything about it.
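The "hacked up DNS server" is the only exotic-sounding piece, and even that boils down to a resolver that answers with a near-zero TTL and flips its answer: the first lookup from a victim gets the attacker's server (which hands out the JavaScript), later lookups get the target's IP, so the browser's same-origin check is fooled. A toy model of just that answer logic, with made-up IPs:

```python
# Toy model of a rebinding resolver's answer logic. First lookup from a
# client gets the attacker's IP (serving the payload JavaScript); every
# later lookup gets the target's IP. TTL is always 0 so the browser
# re-resolves instead of caching. All IPs are made up (RFC 5737 ranges).

ATTACKER_IP = "203.0.113.10"
TARGET_IP = "198.51.100.20"

class RebindingResolver:
    def __init__(self):
        self.seen = set()  # client IPs we've already answered once

    def answer(self, client_ip):
        """Return (A record, TTL) for a lookup from client_ip."""
        if client_ip not in self.seen:
            self.seen.add(client_ip)
            return (ATTACKER_IP, 0)
        return (TARGET_IP, 0)
```

A real rebinding server obviously has to speak actual DNS on the wire; this just shows how simple the decision it makes per query really is.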

Half a Million HTTP Headers

Friday, September 11th, 2009

Over the last few months I've accumulated half a million HTTP headers. The collection obviously contains a lot of garbage, and of course there are lots of hackers in there, but it's still fun to toy with in aggregate. Because it's totally unstructured, it's some of the worst organized content I've ever seen, but it's still fun to mess with.

If anyone's interested, I threw it up on my other blog. Half a million HTTP headers might sound like a lot, but really, a lot of it is duplicated data. For instance, search engine spiders or RSS readers are going to look the same every time. But still, I thought it was interesting just to look at.
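That duplication is easy to see in aggregate: tally one header across the corpus and the spiders and RSS readers float straight to the top. A sketch over a few raw header blobs - the sample data here is mine, not from the actual corpus:

```python
from collections import Counter

# Count User-Agent values across raw HTTP header blobs, to show how
# much of a big header corpus is the same few spiders and readers over
# and over. The sample blobs below are made up for illustration.

def user_agent_counts(header_blobs):
    counts = Counter()
    for blob in header_blobs:
        for line in blob.splitlines():
            if line.lower().startswith("user-agent:"):
                counts[line.split(":", 1)[1].strip()] += 1
    return counts

sample = [
    "GET / HTTP/1.1\nHost: example.com\nUser-Agent: Googlebot/2.1",
    "GET /feed HTTP/1.1\nHost: example.com\nUser-Agent: FeedReader/1.0",
    "GET / HTTP/1.1\nHost: example.com\nUser-Agent: Googlebot/2.1",
]
```

Run against a real half-million-header dump, `counts.most_common(10)` would make the duplication obvious at a glance.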

Searchable SWFs

Tuesday, July 1st, 2008

I was forwarded a link today from BusinessWire about how Google and Yahoo are now going to be armed with the information necessary to look at and extract information out of SWF files. Ho-boy, here we go. The link was sent to me with the "bad juju" caveat, and I'm pretty sure I agree.

The problem is, like anything, if the search engines start pulling down rich applications that actually interact with the web application, untold issues could arise. For instance, Flash applications have quite a few rich features in them, and some of those could be dangerous if they interact with back end applications. Also, if the word "test" appears in a Flash movie, does that mean it should get indexed? What if it's in a frame that's not visible, or off the side of the page, or whatever? What if it takes ten minutes or dozens of sub-menus to find that particular line of text? Are people really going to sit for that?

Do people really want to load a Flash movie when they query for things? I know I sure don't! I'm already annoyed when I get linked to PDF files or .docx files. I think this just takes searching to a new level where people don't actually want to go. Instead of crawling deeper and refining their search, the search engines are moving to new mediums to stave off the people (like myself) who have argued that Flash isn't a good medium for accessibility, usability and SEO. SEO is going to be off the table soon enough, leaving accessibility and usability.

But seriously, what’s next? Are the search engines going to decompile Java applets looking for text? As a side note, this should, at least in the short term, lead to a new round of Flash hacking, once it goes live. I’ll give a tee-shirt to the first person who writes a Google dork for internal Flash text that leads to exploitation.

Friday, June 13th, 2008

If the title of this post sounds awfully spammy, that's because it is. Someone sent me a couple of links today, both of which are tied together into one system that allows someone to purchase a robot and the human CAPTCHA breaking necessary to create accounts on some of the largest social networking sites out there.

These include MySpace, Hi5, Facebook, YouTube, Gmail, and on and on… This reminds me a lot of XRumer, which is designed for the same purpose, but more for message boards and the like. Making hundreds of accounts for spamming is getting more commonplace and accessible. Just plunk down your stolen PayPal or Google Checkout IDs and you're off to the races! CAPTCHAs aren't working, folks - we're just creating another micro-industry.

Buy Diggs and Votes on StumbleUpon

Thursday, January 3rd, 2008

There's an interesting site called Subvert and Profit where the owner claims to sell diggs and votes on StumbleUpon for traffic generation. Selling at $1 per vote/digg, the goal is to monetize that traffic through various marketing campaigns or traffic arbitraging. It's a pretty interesting business model, and at worst it's against the ToS of the various companies - it's probably not illegal in any way. Blackhat SEM at its finest. It's really not much different than buying paid links on websites, if you think about it.

Some of the testimonials on the Subvert and Profit blog are pretty telling, such as, "the mind-boggling barrage of traffic which comes next, is nothing less than euphoric". I can definitely agree that, in our experience, the volume of traffic from Digg and StumbleUpon, as well as Reddit, dwarfs slashdotting. Traffic arbitrage is here to stay, as long as the margins stay there. Pretty interesting!

Google Text Ad Subversion

Thursday, December 20th, 2007

There's an interesting article over at ZDNet explaining that Google's text ads are getting subverted by trojans on people's machines to get them to click on other people's ads. It wasn't clear what those ads were, exactly, but there you have it. I see this kind of thing as a clear path for future monetization - similar to how bad guys are adding extra form fields into forms via malware to gain more information about your identity. Very clever, and easy to do.

This is different from when Google's ads were spreading malware, but it has the same basic purpose. Getting code on people's machines is the best way to get control of them and ultimately make money off of them via spam, clicks, or whatever else the bad guys come up with.

Another Fun SEO Blackhat Spam Tactic

Wednesday, September 19th, 2007

Searching through spam can be fun and annoying all at the same time. I found this beauty in my WordPress moderation queue and thought it was worth a mention. Here's a spam URL:

If you think about it, it's a fairly ingenious tactic, using multiple sites to help your SEO. First, they get me to link to a site (typically theirs, but in this case it's CNN, which is a trusted domain). Then CNN spits out the search results (which would include their site if Google hadn't already nuked it out of their index). The search engines follow those results and give them link value. Very clever. No idea if it works or not, but it's clever.

Facebook Says You Should Not Expect Privacy

Wednesday, September 12th, 2007

If there are any people left who think social networking is a safe place to enter your information, this is a pretty telling story. Times Online has an interesting article on the latest move by Facebook regarding information that previously was inaccessible to search engines. Guess what? They're going to make it publicly accessible. It's like people never learn (remind anyone of the AOL search query fun?). Okay - to be fair, they said it's only going to make "basic details, including names and photographs" available.

I'm not sure I agree that it makes ID theft easier, as Keith Reed of Trend Micro said in the article, but it may make recon easier. I'm not saying that's not bad - it very well may be - but it's not as bad as it could have been. Still, it's a slippery slope. This quote from Chris Kelly, Facebook's chief privacy officer, caught my eye:

He suggested that internet-users could no longer expect to remain anonymous online, but could control only the amount of information about them that is available on the web.

I'm not going to disagree with the reality of the situation, but is it really okay for a social network to take this stance? Shouldn't they instead be saying something like, "While we cannot guarantee your privacy, we will do everything in our power to ensure our consumers have the highest level of privacy we can provide"? Granted, in the end it's all about the dollar signs. They need to find a better way to monetize their traffic, and a lot of that means they need more users, they need to use the data they have better, and they need the search engines to start sending them more traffic.