web application security lab

Archive for the 'SEO/SEM' Category

Adblockplus Workaround

Monday, February 12th, 2007

I’ll probably regret this post at some point, and I have to caveat this by saying I love Adblock Plus (it’s a dream). However, it is also flawed. Whenever you do a straight string compare you risk missing something. It just so happens that the string comparison used to block ad hostnames misses one obvious way the client can request the JavaScript from the page - using the IP address instead of the hostname. But that alone isn’t magic. Anyone can swap out an IP address… and by itself that won’t work, because of the way Overture’s ads are built. The hostname is used not only for the initial JS lookup, but also for the subsequent iframe that contains the ads themselves.

Okay, easy enough… first we take the JavaScript and look for any variables that are set by the Overture JavaScript. We find one, then we check whether it has been set. If it has, the ad is already there. If it hasn’t, the ad is not there, and you can write your own work-around. We check in this order to make sure we don’t end up with two ads on the page (and we’d rather use the DNS lookup if we can, since that has built-in IP failover).
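A minimal sketch of that check, under some loud assumptions: the marker variable `ov_ad_loaded` and the fallback address are both made up for illustration (the real variable name would come from reading Overture’s actual JavaScript, and 203.0.113.10 is a documentation-only IP):

```javascript
// Hypothetical sketch of the fallback logic described above.
// `ov_ad_loaded` stands in for whatever global the ad network's
// script really sets; 203.0.113.10 is a documentation-only IP.
function needsFallback(win) {
  // If the DNS-based script loaded, its marker variable is defined.
  return typeof win.ov_ad_loaded === 'undefined';
}

function loadAd(win, write) {
  if (needsFallback(win)) {
    // The hostname was blocked; request the same script by IP instead.
    write('<script src="http://203.0.113.10/js/ads.js"><\/script>');
    return 'fallback';
  }
  return 'already-loaded'; // don't end up with two ads on the page
}
```

In a real page, `write` would be `document.write` and `win` would be `window`; checking the marker first is what prevents the double-ad case.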

Here’s the demo. This could be very valuable to anyone in the SEO/SEM crowd who is plagued by users who turn off ads. Hint, hint: whitelist this domain so I don’t have to mess with you guys. ;)

Malicious SERP Arbitrage Lessons

Friday, January 26th, 2007

I spent the better part of my free time today putting together a rather sophisticated search engine result page arbitrage tool. No, I won’t release this one - partly because it sucks, partly because it requires that I allow other companies to run JavaScript on my domain, partly because it requires redirection, and partly because it’s easy enough that anyone with sufficient skill could do it themselves anyway. The point is I did it as a demonstration for a potential client, and there are some lessons learned. This is a pretty nasty tool for the blackhat SEO/SEM types.

If you don’t know what I’m talking about, it was an old trick I talked about revisiting, which is making the back button on browsers change functionality (popups or redirection to other sites). Only my version actually mimics the search engine the user came from.

1) The arbitrager must understand that the user is coming from a search engine. There is more than one search engine. So they must code for each one that they want to steal traffic from. This alone can be a bit of a nightmare.

2) The traffic arbitrager must detect which links the user has already been to, so as to make sure that if they click on those links again they end up on the page they meant to go to. This makes it far less likely that the user will notice the trick. It’s harder than it sounds, because each site has a different style for a:visited links. And by the way, case matters if there are any letters in the color (e.g., 4e4e4e is a different string than 4E4E4E).

3) The site must take into account the crazy JavaScript and style sheets that the search engines add to each of their SERPs. Those can really mess things up (and definitely change the layout slightly). I defaulted to the no-JS view, which looks close enough and is less likely to cause errors.

4) The first time the user visits, you need to redirect them to the page they actually meant to go to. If the cookie (state) showing they have already been there exists, then when they hit back they see the fake SERP.
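The numbered points above can be sketched roughly like this. To be clear, this is my guess at the structure, not code from the actual tool; the function names, the engine list, and the `serp_state` cookie are all hypothetical:

```javascript
// Point 1: identify which engine sent the visitor, so the right
// fake SERP template can be served (engine list is illustrative).
function engineFromReferrer(ref) {
  if (/google\./.test(ref)) return 'google';
  if (/yahoo\./.test(ref)) return 'yahoo';
  if (/msn\.|live\./.test(ref)) return 'msn';
  return null;
}

// Point 2: hex colors compare case-sensitively in a naive string
// match, so normalize before comparing against the engine's
// a:visited color.
function sameColor(a, b) {
  return a.replace('#', '').toLowerCase() === b.replace('#', '').toLowerCase();
}

// Point 4: first visit -> redirect to the real result and set a
// state cookie; when the user hits back, the cookie exists and
// they get the fake SERP instead.
function pageForVisit(cookieHeader) {
  const seen = /(^|;\s*)serp_state=1/.test(cookieHeader || '');
  return seen ? 'fake-serp' : 'redirect-to-real-result';
}
```

On the server side, the redirect branch would also emit a `Set-Cookie: serp_state=1` header so the next hit from the same browser takes the fake-SERP branch.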

Ultimately I think this is a pretty powerful tool for malicious arbitragers. Pretty nasty, actually.

Alexa Fallacy - As if Anyone Thought Otherwise

Thursday, January 18th, 2007

Okay, no more theories, no more guesswork: I finally have proof that Alexa data does not jibe with actual, real internet traffic patterns. At minimum it doesn’t match what they claim it matches - it does prove other interesting facts, but I’ll get to that in a minute. My Alexa rating is pretty high, and the site does tend to get quite a bit of traffic (somewhere around 11k-14k unique users a day). Most of my traffic comes from the security industry, but I do have quite a few SEO readers - especially the blackhat SEO crowd. That comes from a long-standing bridge between web application security and SEO, and I happen to be one of the few security people who talks about both. For whatever reason, a lot of webmasters who aren’t particularly interested in security read my blog as a result of that bridge.

One thing that most SEO people (and indeed webmasters in general) have in common is that they all seem to have the Alexa toolbar (or its Firefox equivalent) installed in their browser. It could easily be seen as spyware, because it reports on where you are visiting, but in exchange it gives you the relative traffic rank of the page. This can easily help you assess whether a site is new or old, whether it has a real following of people or not, etc… It can also tell you whether the domain gets traffic on other cnames (if that’s interesting).

But there’s been a long-standing theory that Alexa data does not actually indicate true ranking. Finally I was able to prove it, at least to myself (maybe other people proved it to themselves before now, but I hadn’t seen any hard and fast stats until today). So here’s the graph of my Alexa ranking over the last several months:


You’ll notice the two biggest spikes on July 30th and January 16th (just a few days ago). So you would naturally assume those are huge spikes in traffic, right? Well let’s look at the significant events of those two days in particular. On July 30th 2006, the site was Slashdotted. To most people that’s probably a pretty significant event and you’d expect to see a huge jump in traffic. Let’s look at what it really did:


You can plainly see very little traffic change for the 30th. Sure, it was up a great deal compared to the average weekend, but it was really not much of a spike, and nothing like the traffic levels you’d expect for the 4,000th biggest website on the planet, right? Could it be that SEOs also tend to read Slashdot? Okay, that’s really just a theory, so let’s put it aside for now. Now let’s look at the events of just a few days ago (the 16th).


You can see that I did get a fair amount of traffic, but it was nowhere near my highest day this month, and I certainly didn’t jump by more than a few percent of my normal traffic load (the 17th proved to be a much higher traffic day, and the 8th was the day the firewall died on us). So where did all the Alexa traffic come from that made next to no increase in the number of users we got that day? Well, as you will remember, just a few days ago a self-proclaimed whitehat SEO said he intended to hack a bunch of sites. This site was named in that list (despite the fact that I really do not consider it to be an SEO website). It was linked to from his site, and many other SEOs picked up the list as well. In essence, every SEO who would possibly click on a link to this site did click - all in one day. Thus you see only a minor increase in traffic on the 16th, but a huge spike in Alexa ranking.

Quod erat demonstrandum.

It is interesting to note how many SEOs use the Alexa toolbar. I bet that database would give away a lot of SEO secrets if it were ever compromised. Spyware never sounded so good.

Someone Wants to Hack All Big SEO Sites.

Monday, January 15th, 2007

Someone named Fuckingpirate started a brand new blog today, stating that he intended to hack a lot of the biggest SEO sites out there. Funny that I am somehow considered one of the biggest SEO sites, since I rarely post about SEO (yes, this is the second SEO post today, but it’s the first in months).

Edit: Site is already down… so much for that game!

Edit: Site has moved to blogspot and there is a copy of the old site here. Additionally wolf-howl has been compromised.

Search Status SEO Detector

Monday, January 15th, 2007

I know I haven’t done many SEO posts lately, but one of my co-workers (who specializes in SEO) is leaving, and I thought I’d pay him a little tribute. I spent a few minutes throwing another tool up in my vulnerability lab that detects the SEO Firefox plugin Search Status. In particular, it detects whether the user is attempting to find rel=nofollow attributes on links. That’s important for SEO folks, because it tells them which pages will give them higher PageRank if they put a link on the page.

Search Status has a function to highlight nofollow links, which is highly useful for improving PageRank (knowing where and where not to post if you are interested in backlinks on web boards, for instance). This tool helps ferret out those pesky SEO experts, so you can either deliver them different content or rewrite the links so the rel=nofollow attributes aren’t visible to them. A cute trick, based off of Jeremiah’s CSS hack (which I also put back up online in the lab).
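One way such a detector might work (a sketch of the general idea, not the lab tool itself) is to render one link with rel=nofollow and one without, then compare their computed styles; if Search Status is highlighting nofollow links, the two will differ. The property list here is an assumption:

```javascript
// Compare style snapshots of a nofollow link and a plain link.
// In a browser these objects would come from getComputedStyle();
// plain objects stand in for them here.
function pluginDetected(nofollowStyle, normalStyle) {
  const props = ['color', 'backgroundColor', 'textDecoration'];
  // Any difference suggests something restyled the nofollow link.
  return props.some(function (p) {
    return nofollowStyle[p] !== normalStyle[p];
  });
}
```

In the real page you would inject the two `<a>` elements, give the extension a moment to restyle them, then report the result back to the server to decide which content to serve.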

Robots.txt Just Isn’t Working For Me

Wednesday, December 13th, 2006

Dear Search Engines,

I’ve worked for huge companies for many years. Each has its own unique issues, but one issue they all have in common is you. You crawl our sites and expect us to know better and to be able to react in real time. You expect us to know what we don’t want crawled, and you expect us to be able to conjure up a robots.txt file, put rel=nofollow on links, add meta noindexes, or whatever else, to satisfy that need.
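For reference, the opt-out mechanisms in question look like this (the paths and values are illustrative):

```
# robots.txt - must live at the site root, and only takes effect
# whenever the engine next re-fetches it
User-agent: *
Disallow: /private/

<!-- per-page alternative, which engineering has to bake into the HTML -->
<meta name="robots" content="noindex, nofollow">

<!-- per-link alternative -->
<a href="/some-page" rel="nofollow">a link</a>
```

Every one of these requires shipping a change to the site itself, which is exactly where the release-cycle pain comes in.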

Another issue that all the companies I’ve ever worked for share is a slow time to develop and release anything to the website. If I know there is an issue: a) I have to explain it, b) get buyoff from engineering/business units/execs, c) get engineering to build the fix, d) merge the code, e) QA it, f) wait until the next build/release - and then, poof, just like magic, we may or may not have fixed the issue, depending on whether QA and engineering did their jobs right. If not, the cycle continues.

Here’s a crazy thought: why don’t you let us upload our robots.txt to you directly? Make us put some crazy hash in a file somewhere to prove we are who we say we are, but let us tell you immediately what not to index, what not to follow, and what otherwise wastes our bandwidth. I ran into a situation today where any reasonable person could have immediately told you how to fix the issue, yet it may take weeks or months to fix the problem, instead of one guy spending a few minutes uploading one file telling you not to do XYZ. You already allow us to do things like upload site-maps; why not let us tell you what we DON’T want you indexing? I know, where do I come up with all this crazy talk?

Tell you what, search engines, I’m going to let you think on it while I grind my development resources to a halt trying to keep you off of certain areas of my company’s site. Let the patent wars begin.



Professional Search Engine Optimization With PHP Book

Tuesday, November 14th, 2006

I guess I get to add another book to my X-mas list for myself. Jaime Sirovich wrote me about a book he has been authoring for quite a while now. I knew he was writing it, but apparently they have been moving along at a good clip and are already ready for pre-orders.

The book is Professional Search Engine Optimization with PHP, and it is designed to teach the fundamentals of SEO. Jaime thought I would be interested in the chapter on blackhat SEO, since that applies some basic web application security concepts, including HTML injection (as opposed to JavaScript injection, which traditionally isn’t read by search engine spiders, since no one searches on JavaScript). Pretty interesting, with some good code snippets. If you’re interested in SEO, I’d add it to your list of upcoming book purchases.

Google Indexes XSS

Thursday, September 28th, 2006

Today Ghozt on the XSS forums found a rather interesting link while searching Google: proof that Google will in fact index XSS. The link Ghozt found was actually not a working XSS exploit, but that’s irrelevant - had it worked, Google would have indexed it and shown a working exploit all the same. This is the first time I’ve seen 100% proof that Google will index cross site scripting attacks. Cool!


We all thought it was probably true, but until now I hadn’t seen any verifiable proof. Sure enough, this was indexed from a blog post by Nitesh Dhanjani, here and here. So perhaps there is some ranking associated with the potential importance of such a link, and therefore Google will only index an XSS if it is coming from a trusted host (raising the importance of persistent XSS on trusted domains - like .edu TLDs, as Jamie was talking about). Either way, it’s pretty exciting to see a theory turn into proof.

WordPress SEO CSRF

Tuesday, September 26th, 2006

Well, it’s with a bit of a saddened heart that, in the first few minutes of checking through the WordPress code for CSRF, I found my first vulnerability. I sat on it for a week or so until I had time to thoroughly test it, and sure enough, WordPress is vulnerable to SEO-related CSRF. …and then the splogging community rejoiced. This isn’t such a bad issue from an XSS perspective, because WordPress does a pretty good job of protecting its users from injection of malicious JavaScript, but it definitely does allow any content you desire to get indexed.

So here’s the scenario. I’m a webmaster with a blog (I actually am one, so it’s not a stretch) who happens to get lots of blog comment spam. Like a good webmaster I mark it as spam instead of deleting it, because I don’t want them to be able to submit the exact same content again. The spammers themselves can actually see the unique identifier of the spam post by looking at the page. They see something like "comment-3182". They file that away for later, and on with the spamming they go. They can spam several thousand times, just for good measure. When they’re ready to unleash the spam, here’s the next trick.

When the spammer is good and ready, they set up a realistic-sounding post linking to me. It will show up in Technorati, or, if they do a trackback, it will actually show up as a comment as well. Stupidly, I’ll go and click on the link and end up at the spammer’s website. (Ensuring that you only show the attack to the right person can be done through IP delivery, if you know their IP address, or by checking referring URLs.) Embedded in the website is an invisible iframe with content that looks something like this:

<form name="f" method="post" action=""> <!-- action: the blog's comment-moderation URL -->
<input type="hidden" value="update" name="action">
<input type="hidden" value="approve" name="comment[3182]">
<input type="hidden" value="approve" name="comment[3183]">
<!-- any other spam comments -->
<input type="hidden" value="Moderate Comments &raquo;" name="submit">
<input name="s" type="submit" value="submit">
</form>
<script>document.f.submit();</script>

While this isn’t particularly interesting from a web application security perspective, it is interesting when you consider what it can be used for by the blackhat search engine optimization community. One of the things they are most interested in is persistent XSS - or rather, links that will stay on the page indefinitely. While WordPress does add rel=nofollow to each link, that doesn’t stop the actual text from being indexed. Splogging is not particularly effective, due in large part to the search engines respecting the rel=nofollow tag, as well as to comment moderation (which we can now get around after the fact). Mitigating factors include deleting the content for good, and building better session/flow management.

From a web application security perspective, things like changing the administrator password are vulnerable only if a referring URL can be spoofed (which is possible using the Flash header spoofing trick that Amit came up with). Because the state is handled by spoofable items (referring URLs) and CSRF-susceptible tokens (cookies), WordPress is ultimately vulnerable to quite a bit more than just comment spam. (The same is true of submitting new posts, which, by the way, actually could enable XSS attacks, as full HTML is allowed there.) So turn off Flash if you’re a blog administrator, until something gets fixed.

XSS Vulnerability in Democracy Wordpress Plugin

Saturday, September 23rd, 2006

Aaron Brazell just published an interesting post talking about a cross site scripting vulnerability in the democracy plugin for Wordpress. Almost immediately after posting Democracy published a fix to the vulnerability. This is a pretty interesting flaw that I think needs a little more discussion.

Firstly, I think it’s important because Wordpress is not to blame for this issue; no, it’s an independent plugin author who is to blame. It’s very similar to the problems in Firefox that we’ve been exploring over the last few weeks - again, not Firefox’s issues, but the extensions themselves. It’s a little concerning that there is less and less visibility into overall system security as more addons are built on top of platforms. There’s next to no way to tell whether you have hurt your security posture without a thorough code review of not just the plugin but how it interacts with other plugins, etc… A daunting task.

The other interesting part is that it isn’t reflected XSS, but rather persistent. That also makes it pretty useful for search engine optimization (SEO). Splogging isn’t particularly useful anymore (having spoken with one splogger about it), but XSS is in (if it’s persistent). In this case it isn’t exactly cross-domain, but by including a simple <A HREF=>keyword</A> they suddenly get persistence on the front page of whatever domain they are interested in, giving them a high-PageRank link to their domain from the main page. That’s one of the worst XSSs I’ve seen, for that very reason - even if it’s not used as XSS but rather just straight HTML injection.

For SEO it is far more useful to have stored links on the front page than on sub-pages. And while the author has released an upgrade, existing installs will most likely be slow to update, so there is still a large existing install base for blackhat SEO types to use this flaw to their advantage. The blackhat hacker types, meanwhile, can do cookie theft and log into the administrator account. Either way, it’s nasty.

Thanks to Aaron for helping me test this issue.