web application security lab

Archive for the 'spam' Category

Social Networking Corporate Security Compromise

Tuesday, August 15th, 2006

At one point or another I think I’ve been a part of almost every social networking site I’m even aware of. I really hate them, let me just tell you. Loathe is a better word. Loathe. Anyway. Here I am on LinkedIn loathing life, but one of my previous co-workers and I were making a game out of who could get the most contacts. Don’t ask me why, I really don’t know. At first I was playing fair, and then at the point that he started pulling ahead I resorted to adding my email address to my title so that people could add me at will. That’s not super interesting. But then it occurred to me as I started getting requests from my co-workers: this is extremely game-able.

Personally, I’m not going to go messing around on LinkedIn, because most of the people I am networked to happen to actually know me and know it was me who was messing with them (and it’s not really my style anyway) but it’s a very real problem. You can send personalized requests to millions of users (spam).

“Yes, RSnake, but how?” Well, at one point I used to work for a company that was bought by a company and that company was bought by another company and that company was bought by another company. So it’s very difficult to figure out who you worked with because people left at various stages of the four companies, so you have to add yourself as having worked at all four companies to find everyone. But wait, why can’t I add… ANY company? I can!

So let’s say I want to make chummy chummy with a bunch of Google folks? It’s just a matter of saying I worked there at some point and adding enough people before they start adding me back. Free access to work email addresses of every major company! And the best part is I don’t have to say I continued to work there; I can then delete the fact that I pretended to work there and move on to the next company. Ouch.

This is clearly not LinkedIn’s idea behind this function. They don’t make money when you spam their users, and if you do, people will start abandoning the site right and left (meaning that would be one less site for me to visit every few weeks when I get one more piece of mail from someone adding me or asking me to get in contact with someone else - wouldn’t that be terrible?). So how would you detect something like this if you are architecting your own website? It’s a feature that leaks too much information about its users, one that allows you to get in contact with them much more easily than you normally could.

I’m not aware of a web application scanner on earth that would find something so strange, but indeed, if you want to start spamming someone directly, or issuing targeted viruses/worms to mega companies, this is a perfect conduit for finding people in these huge companies and targeting them directly. Remember our JavaScript scanner? “Hey, Joe, check out the new company I just went to; I’d appreciate any feedback you could give since I know this is your area of expertise.” Even if they don’t know you, 9 out of 10 times they’ll click, and you’re in.

Social networking can lead to corporate security compromise. In the information age, social networking feels like one of the largest holes in online security.

1 in 10 Users Have Had Their Identities Stolen

Monday, August 14th, 2006

There’s an interesting article that was published a few days ago in the BBC business section on identity theft. It struck me as amusing that they focused on offline causes of identity theft in the same breath that they were talking about online fraud. In my mind they are really night and day.

Then I was gone over the weekend and there was something on the news about Al Qaeda hacking into non-profit organizations and routing charitable donations to their accounts to fund activities. Now whether that is all hype or not, it’s a scary thought. If you think you are donating to the Red Cross it’s pretty inconceivable that you are funding international terrorism. But when I started thinking about it, it made a lot of sense.

As a piece of anecdotal evidence, I fit their offline demographic as a tad nomadic. I’ve moved several dozen times in the last ten years and in every case I ended up getting mail from people who had lived there prior to me. Sometimes it’s something as stupid as a magazine, but other times it’s social security information, tax records or otherwise super sensitive healthcare information. Scary! Not that I would ever do anything with that information, but it’s conceivable that someone would.

The marriage of offline and online fraud is an interesting proposition. I was talking to a Pakistani phisher at one point who was telling me how he actually walked down to the local ATM to withdraw money from the fake credit cards he had made from user information. In fact, he was convinced that the physical security of the ATM was the biggest flaw in the whole phishing scheme. I probably wouldn’t agree with that, but it’s an interesting point.

Because the physical infrastructure isn’t there, the ATMs in remote countries cannot make real-time decisions based on the information presented to them at the terminals. Therefore all the information they have must be dealt with at the time of the transaction (or shortly thereafter, as bandwidth and time permit). Of course batch settlement at the end of the day is a requirement, and in some cases a dedicated phone line is available, but certainly not in all cases.

The physical reality of security is an overlooked portion of the web application. Granted, international terrorism is a leap, but it is the physical manifestation of an online security flaw. When the Department of Homeland Security starts saying “Patch up to stop terrorism” I’ll be amazed, but it’s not that inconceivable. Especially if you consider how many machines are compromised and used for hosting phishing sites, or used as bot armies for spam, which propagates identity theft. The Secret Service is the arm that monitors and goes after 419 Nigerian spam, so the executive branch already realizes that identity theft is one of the greatest threats to national security; and if web application security flaws encourage identity theft, the government should have a particular interest in seeing those flaws patched. Quod erat demonstrandum.

SES SEO News

Wednesday, August 9th, 2006

So I have an insider at SES who has been reporting back some interesting things that came up during the conference. Of particular note were some of the spider topics, which are particularly relevant to some of the search engine spider mapping that I’ve been doing (I haven’t talked about those projects on this blog so most of you won’t know what I’m talking about, but bear with me). For search engine optimization (SEO) this has a lot of relevance, especially for the blackhats.

So one of the points of particular interest was that the search engines are now considering adding some sort of certificate to their spiders so you will know which crawler is real and which one is fake. People fake browsers and bots often to see what competitors are doing (no, I don’t do any of that on my sites, so don’t waste your time).

But this is relevant for being able to detect which bots are real and which ones are fake. That could have a major impact on fingerprinting valid robots, compared to current techniques, which involve reverse DNS on IPs to see if they match the host domain, or User-Agent detection (neither of which I’ve ever felt was particularly great at catching everything with no false positives). It’ll be interesting to see which companies do what. I think it’s a ways off before we see this implemented in any practical way, but it sure will make detecting spam robots more reliable.
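To make the contrast concrete, here is a rough sketch of the reverse-DNS technique mentioned above (the function name and the injectable resolvers are my own, for illustration): reverse-resolve the IP to a hostname, check that it falls under the crawler’s domain, then forward-resolve to confirm the PTR record isn’t spoofed.

```python
import socket

def verify_bot(ip, allowed_suffixes=("googlebot.com",),
               reverse=lambda ip: socket.gethostbyaddr(ip)[0],
               forward=lambda host: socket.gethostbyname_ex(host)[2]):
    """Double reverse-DNS check for a crawler claiming to be, say,
    Googlebot.  The resolvers are injectable so the logic can be
    exercised (or swapped out) without hitting live DNS."""
    try:
        hostname = reverse(ip)          # IP -> claimed hostname
    except OSError:
        return False                    # no PTR record: treat as fake
    if not hostname.endswith(allowed_suffixes):
        return False                    # PTR points somewhere unexpected
    try:
        return ip in forward(hostname)  # forward-confirm the hostname
    except OSError:
        return False
```

Even the double lookup only proves the IP is what its PTR claims; it says nothing about a bot spidering from an unlisted network with a spoofed User-Agent, which is exactly why this never catches everything.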

Another interesting thing that came up was that one way users hack into websites is by looking at robots.txt files to see if there is any information there that might point the hacker to a more useful location to attack. A concept of using IP delivery came up, where you deliver a robots.txt file only to robots from IP addresses that you want stopped. It feels like a chicken-and-egg sort of thing, where you have to know they are a robot before you can tell the robot that you don’t want the robot to do stuff. It also feels pretty exploitable, depending on how it is delivered. For Google, you can use Google’s translation service, or better yet, here is a Google cache of Microsoft’s robots.txt file. Nice try.
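As a rough sketch of what that IP delivery might look like (the netblock and helper name here are made up for illustration), the server picks which robots.txt body to return based on the requesting IP, so only recognized crawler addresses ever see the revealing disallow rules:

```python
from ipaddress import ip_address, ip_network

# Hypothetical crawler netblocks you want kept out of sensitive paths.
BLOCKED_CRAWLER_NETS = [ip_network("10.1.0.0/16")]  # placeholder range

RESTRICTIVE = "User-agent: *\nDisallow: /admin/\nDisallow: /backup/\n"
BLAND = "User-agent: *\nDisallow:\n"  # reveals nothing interesting

def robots_txt_for(client_ip):
    """Serve the revealing robots.txt only to recognized crawler IPs;
    everyone else gets a bland file that leaks no paths."""
    addr = ip_address(client_ip)
    if any(addr in net for net in BLOCKED_CRAWLER_NETS):
        return RESTRICTIVE
    return BLAND
```

The chicken-and-egg problem is visible right in the sketch: the whole scheme rests on that IP list being complete, and a cached or translated copy of the file bypasses it entirely.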

Then there was mention of a way to do IP delivery to the spider and give the user a “nocache,noindex” version, so users won’t see what the robots see in the meta descriptions and can’t rank as high, even if they steal every word on the page. Again, exploitable, and obviously so via cache. So then Google apparently said it doesn’t penalize people for having “noindex” and “nocache” on their pages. It just happens that both super good guys and super bad guys use them. So it might hurt you in terms of heuristics, but it sure won’t kill you. Sounds like music to spammers’ ears.

AOL Sponsors Spam Domains

Wednesday, August 9th, 2006

Well, as if AOL/Google couldn’t shoot themselves in the foot enough this week, AOL/Google announces their intentions to open a free email/domain gateway. Tsk Tsk. What on earth are they thinking? Free? Email? Domain? Are you kidding me? You might as well fly a banner over the sendmail conference asking people to start using you as a spam/SEO gateway.

They announce their intentions to build this out in September. So what I’m more interested in than anything is what they intend to do to secure this horror they are building. CAPTCHA? Sure. Identity/background checks? Maybe, but phishing can provide plenty of those. Wow, just wow. Obviously their intention is to compete with Yahoo, which is introducing $1.99 domains and has had free webmail forever. They also have a free phone service to compete with Skype and Yahoo’s VOIP technology.

Something just freaks me out about AOL doing this. AOL has not historically been good at security. I asked a current AOL employee at one time what their fraud loss rates were, and he told me (not under NDA) that the number of compromised users is “in excess of a percentage point.” I asked him how much in excess, and he responded, “No comment.” Let’s just say it’s 1%, even though he said flat out that it was above that. If they have 20MM users, that’s 200,000 compromised accounts, with probably compromised identities, and therefore phone numbers, addresses, and who knows what else. Do they really think that this is a good idea? Those same people are the ones who have the most to gain by phishing. And how do you propagate phishing? Email! It’s all a vicious cycle.

Not to mention the possibilities for SEO spam. Free domains? One of the biggest problems for SEO is getting cheap domains. Well there you have it folks. There’s nothing cheaper than free. Feel free… spam all you like. I can’t wait to see how their ToS reads, and see how they intend to protect against that spam. I wonder what their hosting will look like too. Maybe something similar to pages.google.com (I’m also not sure why they are offering competitive services to a major shareholder’s applications). It all seems very odd and poorly thought out.

Oh well, at least the browser companies are starting to act more intelligently.

Google Spam Redirects

Monday, August 7th, 2006

I’ve been gone for a few days and one of the very first things I find in my inbox is an email that apparently wants me to click on a link. That link goes to Google. That link is a redirector. That link is obfuscated with URL encoding. Who knows what’s on that link! I’ve learned to distrust Google links, so I’m smart enough not to simply click on it without doing some investigation first. Let’s look at the message, shall we?


I changed the unique string at the end, but otherwise this URL is intact and working. What is this? Well, by golly, it’s cialis/viagra spam! What have we learned? Google links are not to be trusted. Why would you allow your infrastructure to support spam redirection in emails? Should I start adding www.google.com to my anti-spam engines? Maybe to my content filters? I hate to say it, but I think I called this one.

IP to Virtualhost Lookup

Thursday, August 3rd, 2006

Okay, I’m just totally in love with this post by Jaimie Sirovich over at SEO Egghead.  He exposed a function that I’ve been wanting for a good long while: some way to do IP to virtual host lookup.  His solution, similar to my Cname lookup, is to use a search engine.  I wasn’t aware of this flag in MSN, but apparently if you query it for IP addresses it will do exactly that.  Well, nearly exactly that.  It also points out any domains that are 301 redirecting or meta refreshing to your site.  Strange!  But whatever the case, it gets you 90% of the way there with not many false positives.  Those false positives can be identified and removed by simply doing another lookup on those domains and seeing if they match the IP.  Pretty trick!
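That false-positive cleanup step can be sketched in a few lines (the function name and injectable resolver are my own, for illustration): take the candidate domains the search engine handed back and keep only the ones that actually resolve to the target IP.

```python
import socket

def filter_vhosts(candidates, target_ip,
                  resolve=lambda d: socket.gethostbyname(d)):
    """Drop 301-redirect / meta-refresh false positives by checking
    that each candidate domain really resolves to target_ip.  The
    resolver is injectable so the logic can be tested offline."""
    confirmed = []
    for domain in candidates:
        try:
            if resolve(domain) == target_ip:
                confirmed.append(domain)
        except OSError:
            pass  # dead or unresolvable domain: not a vhost here
    return confirmed
```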

In this way you can accurately identify SEO spammers and virtual hosts.  This is particularly useful for penetration testing because often the hardened host is the main one, while the softer hosts that reside on the same machine can be compromised, thereby giving you access to the same web application (and probably even the same Apache process).  Very scary stuff for anyone doing lots of hosting on single IPs.  Thanks for the post, Jaimie!

Finding Cnames Via Google

Wednesday, August 2nd, 2006

By way of SEOEgghead I ran across Matt Cutts’ Google videos.  I’m really surprised I didn’t see this before, so thanks, Jaimie!  At first I thought it would be a lot of beating around the bush about the best ways to make your site rank using better HTML or some other nonsense, but instead he beat around the bush on a number of other issues.  I’m actually really glad I watched the video talking about a guy who set up thousands of domains all linking to the same JavaScript.  Talk about a blackhat SEO newbie mistake!  But Matt Cutts also mentioned a lot of domains on a single IP address.

Wouldn’t it be great to have a mapping of virtually the entire internet, where you could see every hostname -> IP address pairing?  Granted, it would have false positives like virtual hosting services, as he says, but come on!  Talk about predictive!  Sure, a few dozen domains on one IP may be legitimate, especially for hosting providers, but if I have hundreds of domains that look even vaguely shady, that’s a huge indicator.  Even if they aren’t on the same IP, being within the same class C network could still be highly predictive.  IP addresses have come back to haunt us!  Everything has to be routable, and if Google has to know where you are to index you, and they have any interest in detecting spamming, of course they’ll do a mapping like this.

I had always wanted to build something like this myself, but to build a spider like that would take more horsepower than I’ve got in my rack at home by far, and a database with some serious space.  We’re talking about millions of hostnames to IP addresses.  It gets harder because that has to stay up to date.  Six month old data is practically worthless when you are talking about spamming domains which may only stay up for a week or less in some cases.

Then I suddenly remembered a conversation I had a few weeks back with one of my readers, who shall remain nameless for the time being.  He asked me a simple question: “How do you find all the cnames on a host?”  Cname (or subdomain) spam has its ups and downs in the SEO world, seemingly depending on the day of the week and which search engine you’re talking about, but it’s a pain to correlate it all together, no matter how you slice it.  It’s also useful for auditing websites for vulnerabilities, since cnames almost always reside on the same host, or at minimum use the same backend.  I thought for a few seconds and I came up with a solution.  Use the search engine itself!  Let’s say I want to find all the cnames on Google.  Let’s start with a simple query:

site:google.com -www

That gives us a list of links back, none of which contain “www”.  So now I see things like sketchup.google.com and finance.google.com and eval.google.com.  So let’s make a note of those and query again:

site:google.com -www -eval -sketchup -finance

And then you take what is left from that (which may include things like subdirectories, which you can remove as well) and remove them:

site:google.com -www -eval -sketchup -finance -google.com/answers -google.com/trends -browsersync -desktop -toolbar -earth -picasa -toolbarqueries

And so on…  Until there is nothing left to search.  In this way, you can get all of the cnames of a server with relatively few queries.  Of course, Google is a huge site with lots of cnames, so this technique is pretty tedious with them, but with smaller sites you can go through this pretty quickly.  This still won’t help you do an IP address to domain name lookup, like what Google has access to, but it does help you do your own investigation of cname based spam.  This technique came in handy for finding some of the other domains on one spammer’s site, which you may remember from one of my previous posts.
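The manual loop above is easy to automate. Here is a sketch (the `search` callback is a stand-in for however you scrape the engine’s results; it takes a query string and returns the subdomain labels found on that results page):

```python
def enumerate_cnames(domain, search, max_queries=50):
    """Keep issuing `site:domain` queries, excluding every label seen
    so far, until a query turns up nothing new."""
    found = {"www"}  # start by excluding www, as in the queries above
    for _ in range(max_queries):
        query = "site:%s %s" % (
            domain, " ".join("-" + label for label in sorted(found)))
        new = set(search(query)) - found
        if not new:
            break  # nothing left to exclude: we have them all
        found |= new
    return found - {"www"}
```

In practice you would also filter out subdirectory results (like the google.com/answers exclusions above) before feeding labels back into the query.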

Finding cnames can help isolate spammers, but wouldn’t it be nice if we could somehow get access to all the IP address to hostname maps?  There’s got to be a way somehow.  Hmmm…  I’ll have to think about that one.

Writing Steganographic Messages in Spam

Tuesday, August 1st, 2006

I ran across this security link a few days ago and I thought it would be worth sharing with my readers. It’s a way to steganographically encode text as spam. This is actually one of the more ingenious ways that I’ve seen to encode messages. I mean, we all get insane amounts of spam, so what better way to send information than by spam? One of the key ways to tell that there is a covert channel is by looking for anomalous traffic, but spam is so common, and comes from so many places, that it is very difficult to detect.
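To illustrate the principle (this is a toy of my own, not spammimic’s actual grammar): each bit of the secret picks one of two interchangeable phrases, so the message hides in which wording the “spam” happens to use.

```python
# Each pair is two interchangeable spam phrases; using the first
# encodes a 0 bit, using the second encodes a 1 bit.
PAIRS = [("Dear Friend", "Dear Colleague"),
         ("amazing offer", "incredible deal"),
         ("act now", "wait no longer"),
         ("100% free", "at no cost to you")]

def encode(bits):
    """Turn a short bit list into a plausible-looking spam snippet."""
    assert len(bits) <= len(PAIRS)
    return ". ".join(PAIRS[i][b] for i, b in enumerate(bits)) + "."

def decode(text):
    """Recover the bits by seeing which phrase of each pair was used."""
    phrases = text.rstrip(".").split(". ")
    return [0 if phrase == PAIRS[i][0] else 1
            for i, phrase in enumerate(phrases)]
```

A real system uses a full mimic grammar rather than fixed pairs, but the operational problems discussed below apply either way.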

That said, there are a few obvious problems with this. The first is that you either have to set up an agreement with the other party to know which messages are spam, or that party has to run all of their mail through the steganographic filter. Using a resource on the web is the same thing as sending it plaintext (unless you use an SSL connection). But then you are trusting that the website itself isn’t under a federal wiretap or something else. Also, you have to worry about the spam that you are expecting actually making it through your spam filters. So unless you have an account that simply sits out in the middle of the DMZ with no protection, there is a high likelihood of losing the spam entirely.

You also cannot use the spammimic tool as an API (as far as I can tell), meaning you have to send all your traffic over HTTP/HTTPS to that website, which sets off huge alarms for anyone who is eavesdropping. And last but not least, as with any steganographic system, once you tell people that it exists, it is almost completely useless. That’s the problem with steganography: you can never tell anyone about the best ways to hide data, or it’s a broken system.

Still, interesting idea though!

Popup Blocking

Sunday, July 30th, 2006

I ran across an interesting link to a page that tests your browser for popups. It requires that you run Java. I was actually a little bummed that not a single popup went through on my browser. But then again, I run QuickJava (similar in principle to Noscript, but better for my needs) so it’s not that big of a surprise.

But it got me thinking.  This really only tests conventional popups.  It certainly doesn’t test for things like my Most Evil Popup Ever(TM). Frankly, I wouldn’t expect it to, because it isn’t a normal popup, but as technology evolves, I think less and less conventional means for delivery is going to take over.  I feel like there are probably other forms of these types of popups that could work better, but I haven’t put any time into thinking through it, so it’s probably best to leave it at this.  Anyway, cute link if you aren’t sure how vulnerable you are to popup annoyances.

Selling Exploits for Cash

Thursday, July 20th, 2006

id just sent me a link to Dark Reading talking about the controversial prospect of selling exploit code for cash.  It has been something I’ve talked about in the past, and actually I was alerted to it by OptikLenz as well.  The website is called Zero Day Initiative (it has been live for about a year now).  The black market is buying “weaponized” exploits that require little to no skill for up to 2-5 times the highest asking prices of these websites.

Call me crazy, but this is a huge marketplace now.  Considering that phishing is a billion dollar industry, who cares if they have to spend $50k for a remote Windows exploit to help them host phishing sites?  Or $10k for a new spamming technique?  It’s a small price to pay when the ultimate gain could be tremendous for the assailant.

And do you think 3Com or TippingPoint are doing this for the good of humanity?  No, they are reselling it via their contracts with their customers to make more money off of the exploit code.  The economics of hacking are beginning to move into the free market economy and away from the socialist free-for-all of the last decade.