web application security lab

Archive for the 'Phishing' Category

Content Restrictions - A Call For Input

Saturday, August 11th, 2007

In talking with the browser companies, there seems to be more and more interest in content restrictions. For those of you who don’t know anything about it, let me quickly give you the overview. Three or four years ago I was trying to find a way for my company to put malicious user-generated input into a sandbox but still allow it to show up on the site. The obvious answer was to use an iframe to isolate it. That, unfortunately, has all sorts of user experience issues: you can’t tell how big the iframe needs to be, so you often end up with double scroll-bars, which messes up printing and causes links inside Flash movies to change only the iframe instead of the whole page. Yah, it’s ugly. So I started looking for alternatives.

The first idea was a re-sizable iframe. There are security implications with that - it may allow the parent to learn the state of the child - so it was thrown out by the Mozilla team. There may be tricky ways to bring it back up, but some of the other usability problems are still there, so it’s not really ideal anyway. The best alternative is to create something that tells the browser, “If you trust me, trust me to tell you not to trust me.” This is based off the short-lived Netscape model that said if a site is trustworthy, you lower the protections of the browser as much as possible. Content restrictions was born. I submitted the concept to Rafael Ebron, who handed it off to Gerv. It went to the WHATWG, and that’s where it’s stayed for the last three years or so.

The Netscape model doesn’t work if the site you trust has lots of dynamic content, so extending it with content restrictions makes a lot of sense for a few reasons. The first is that it puts the onus on the websites to protect themselves. The other is that it doesn’t hurt usability anywhere else, because it’s an opt-in situation. Pretty ideal, actually. While I was talking with Mozilla last week, they asked me to put together a list of the top things I’d like to see in content restrictions. They are eager to get started on it, but can’t promise the world. They’d like to hear the top two ideas and then work from there.

How you instantiate content restrictions is still up for debate - whether it be a new header pointing to an XML policy file, something inline in meta tags, or a new HTML tag. I’m a little indifferent, except that I think it should be accessible both to people writing dynamic pages and to people who simply upload static HTML by whatever means (FTP, mail, etc…). So it should probably be a hybrid of a few of those, but that’s a different discussion.
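
To make that concrete, here is a minimal sketch of what the opt-in might look like. The header name, meta tag, and directives below are entirely hypothetical - nothing has been standardized - they just show the two delivery styles side by side:

    # Hypothetical response header for dynamic pages, pointing at a policy file
    Content-Restrictions: policy-ref="/restrictions.xml"

    <!-- Hypothetical inline equivalent for static, uploaded HTML -->
    <meta http-equiv="Content-Restrictions"
          content="no-script; no-offsite-embeds; no-auto-redirect">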

So there are two use-cases. The first is a site that simply wants to remove anything potentially malicious - which might mean blocking JavaScript, for instance, while still allowing things like objects. The other is a site that wants to allow dynamic content, but doesn’t want anything embedded off-host to get injected into a page, or any automatic redirection of any sort.

One thing is certain - there are many sites that don’t want user-supplied content to be able to reach outside the area set aside for it. The beauty of an iframe is that CSS only affects what’s in the iframe, JavaScript can’t overwrite things outside of it, and it doesn’t have access to the parent’s cookies, etc. The first thing I can think of that would be highly valuable to lots of sites is the ability to create a resizable pseudo-iframe that restricts content to a portion of the page - covering styles (absolute positioning) as well as JavaScript access to the rest of the page.
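
As a rough illustration of that pseudo-iframe idea (the tag name and attribute are made up purely for the sake of the example):

    <!-- Hypothetical sandboxed region: content inside can't position itself over
         the rest of the page, touch the parent DOM, or read the parent's cookies,
         but the box still sizes itself to its content like normal markup -->
    <sandbox resize="auto">
      ...untrusted user-generated HTML goes here...
    </sandbox>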

Other possibilities include disallowing the creation of a new DOM between two points on the page (no iframes, frames, or the like). Another is disallowing any automatic redirection that is not user initiated - a common problem, because malicious users redirect victims to other domains.

Another potentially valuable tool for content restrictions would be the ability to limit what sort of functionality is allowed between two sets of tags. The first example would be refusing to render any HTML tags that aren’t on an allowed list. The second would be limiting event handlers to a pre-defined set (or removing them entirely). I’ve seen a number of situations where this would have been handy as a last resort.
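
Extending the same hypothetical element from the sketch above, that whitelisting might look something like this (again, the attribute names are invented for illustration):

    <!-- Hypothetical: only the listed tags render, and no event handlers at all -->
    <sandbox allow-tags="p,b,i,a,img,br" allow-handlers="none">
      ...untrusted comment HTML goes here...
    </sandbox>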

Another thing I have been toying with quite a bit lately is XMLHttpRequest. One thing that has always surprised me is that it allows more than its name implies: if you request something that isn’t XML, it still gives you access to the page. It could be up to the page’s discretion whether XHR has access to anything other than XML. That would limit injected XHR to session riding, rather than being able to read nonces or perform the other unsavory functions used in worms.
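
A quick illustration of the current behavior: responseText is populated regardless of the content type, so any script running in the page can read same-origin HTML, nonces and all. The URL below is just a placeholder, and the restricted behavior described in the comments is the proposal, not anything browsers do today:

    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/account/settings", true);   // an HTML page, not XML
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // Works today: the raw HTML is readable even though it isn't XML,
        // so an injected script (as worms have done) can scrape a CSRF nonce out of it.
        var html = xhr.responseText;
        // Under the proposed restriction, a non-XML response would come back
        // empty or fail outright, leaving only blind session riding.
      }
    };
    xhr.send(null);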

So I’d like to get people’s feedback. Those are some of my ideas, but I’m hoping people will have even better ideas as well. Once I get the top two ideas, I’ll submit those, and we’ll rank order the next several ideas and submit them as supplemental ideas for a later day.

Firefox 3.0 Address Bar Change Proposal

Sunday, June 10th, 2007

A few days ago Sylvan von Stuppe posted about a proposed change to Firefox 3.0 that alters the way the address bar works. I hadn’t heard of this proposal, but it’s an interesting one. Basically, they grey out the parts of the URL that aren’t the domain. Sylvan correctly pointed out that although that’s good for showing users that they are connecting to a site other than the one they meant to go to, it says nothing about the content on the page. XSS is still an obvious way around this, since malicious content can be injected onto valid pages. According to Zeno, MITRE is about to disclose that XSS is the attacker’s vulnerability of choice.

I should say that I do think this idea is a fairly good one, but there is at least one other problem with it. Almost all websites can be reached by IP address (except virtual hosts, which require a Host: header). Just because a URL contains an IP doesn’t mean it’s bad - I can’t tell you how annoying Thunderbird’s anti-phishing filter is, treating every URL with an IP in it as a phishing attempt. That’s just not a good way to decide whether something is malicious. I would also like to see the consumer research that says people will actually use this and not be fooled by it. I’m always a little wary of “look for the ____” security, given how poorly “look for the lock” education has proven to work for SSL.

Cross Domain Basic Auth Phishing Tactics

Friday, June 8th, 2007

I’ve talked about this problem before - using basic authentication to phish users across domains. But it might be good to do a quick refresher for those of you who don’t know what I’m talking about. A bad guy can include a reference to an image hosted on a domain he controls, where the image is protected by basic authentication (via an Apache module, or by the application itself). When that reference loads from a page on the site you want to phish credentials from, the browser pops up a basic authentication dialog over it. The only problem, from the attacker’s point of view, is that the dialog shows the hostname of the server requesting the credentials.
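
To make the mechanics concrete, here’s a minimal sketch - phisher.example, the paths, and the realm text are all placeholders, and the Apache directives are just the standard way of password-protecting a directory:

    # .htaccess on the attacker's server, protecting /img/
    AuthType Basic
    AuthName "Your SiteName session has expired - please log in again"
    AuthUserFile /dev/null
    Require valid-user

    <!-- injected into, or linked from, a page on the victim site -->
    <img src="http://phisher.example/img/logo.gif" alt="">

Loading the image triggers a 401 from phisher.example, and the browser throws its basic auth dialog up over whatever page the victim is actually looking at. Anyway, Alex found a few potential workarounds to the hostname-in-the-dialog problem; here are his notes: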

I’ve found some nice bugs in Opera and IE (7.0) which could trick a user into thinking that he/she is on the right server, ’cause the server’s hostname looks like what they expect it to. Opera truncates the server’s hostname after the 34th character and adds three dots “…” at the end. This could easily be overlooked. I’ve reported that to the Opera vendors and they don’t have a solution. Well, sounds very funny. They could display the whole string like other browsers do, but they don’t want to change the layout of the dialogue … They were not very happy with the other suggestions I had for them (an explicit warning message, etc.). So there will be no change in the future, I think. Due to the missing status bar (the default setting) you can’t see where it probably came from => “Waiting for phishers.com …” (And if you go to enable it, there will be no output on the bar. *G*)

Don’t forget, that there’s no link you must click on. An embedded image is good enough.

(Use Opera for testing: http://testing.bitsploit.de/test.html )

The second bug, which leads to phishing, is in MSIE 7. If you use IDN domain names like microsoft.de with a Cyrillic little o instead of a Latin one, you won’t see the real hostname in the HTTP auth dialogue (www.xn--blabla.de). Only the status bar shows the real hostname while the dialogue is up. That’s bad, but Ronald van den Heetkamp told me that this shouldn’t be a big problem. (I don’t know how, ’cause IE7 ignores things like status=no, and Firefox, for example, gives no access to rewrite the status bar string by default.)

I’ve informed MS, but they haven’t responded so far.

The IDN thing is interesting because, if you were in the field a few years back, this will sound familiar - people setting up fake websites that looked in every way like the target website, except that one letter was Cyrillic. That mostly affected Firefox and Netscape (because Netscape used the Gecko rendering engine), but now it looks as if IE might also run into problems. Not that I think a ton of people fall for this sort of thing, but even if it’s only vaguely useful, it’s still something we should consider a workable attack vector.
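
For anyone who wants to see how different the homograph really is under the hood, modern JavaScript will show you the punycode form a browser actually connects to (the URL API here is today’s syntax, used purely for illustration; the Cyrillic ‘о’ is written as an escape so it doesn’t get silently normalized):

    // A Latin-only hostname passes through unchanged...
    new URL("http://microsoft.de/").hostname;        // "microsoft.de"

    // ...but the same name with a Cyrillic 'o' (U+043E) becomes an xn-- domain,
    // which is what the auth dialog and status bar ought to be showing.
    new URL("http://micros\u043Eft.de/").hostname;   // "xn--..." punycode form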

APWG and OpenDNS

Saturday, May 26th, 2007

After reading a comment by David Ulevitch on a post by Dragos Lungu, I was pretty interested to read a new press release from OpenDNS on how they are “partnering” with the Anti-Phishing Working Group (APWG). I actually laughed when I read it, for a few reasons. Firstly, if you read Dave Jevans’ comment, he says, “We are pleased to welcome PhishTank.com as a member of the APWG.” To me that sounds less like a partner and more like a client. I couldn’t find anything on APWG’s website to confirm a partnership in any capacity. It sounds like OpenDNS is simply going to consume data from APWG.

Secondly, this affirms what I was trying to get across in the comments on my post about PhishTank’s competitive relationship with APWG. Although David Ulevitch never answered the questions I posed to him in the comments, this pretty much sums up what I was saying. Unless these players start working together, they are only causing more churn in the industry, as more companies have to deal with more anti-phishing aggregators. That in turn means that companies trying to protect themselves or their consumers have to build more APIs, sign more contracts, or whatever, just to get a global picture of where the phishing sites are. So ultimately this sounds like a good thing, although I’m skeptical of how much of a partnership it really is, given Dave Jevans’ comments. It sounds more like OpenDNS is a simple consumer/submitter, just like the other APWG members - but the press release may also just be poorly written.

.bank TLD

Tuesday, May 22nd, 2007

I suppose I should probably weigh in with my feelings on the .bank TLD proposal. I held my tongue hoping that someone would come out and explain what they thought it would solve, and I’m glad I did. Mikko from F-Secure finally published a writeup on why it should go through to ICANN. It was actually a pretty well-thought-out argument. I’m not going to summarize the post - go read it and come back, I’ll wait.

Now that you’ve read it, here are my thoughts. Yes, .bank will solve some heuristics problems. No, it won’t solve all of them. External marketing departments, regional divisions, loan offices, etc… that are all owned by the parent bank will not be able to afford their own .bank domains and will not be protected. Piggybacking off the parent’s domain is an equally bad idea because of XSS phishing attacks. And if the banks allowed external organizations to piggyback, how would that solve the problem of extended validation of the site? Anyone have a guess as to how much money external marketing companies spend on server security? Anyway, it does solve a few issues for heuristics, but it also creates a lot more. (Does this sound at all like why companies were told to buy EV certs? Has that worked for them? Why are we doing this twice?)

Banks have put a lot of time and energy into building their online presences. They can’t switch over to a new TLD on a dime. Sure, they will, because they are told it’s the right thing to do, but it’s certainly not an overnight process. How much money are they going to spend buying the domains, re-tooling their websites, re-branding them, and re-educating their own staff and their customers?

.bank does not apply to some of the most heavily phished sites out there, like Amazon, eBay, PayPal, AOL, MySpace, and a host of credit unions. I see where they are going with this, but it’s a slippery slope. Just because you get phished a lot doesn’t earn you the right to a .bank domain (that’s the exclusive domain of banks, of course). It might earn you the right to a .dontphishme TLD, but every site on earth that does electronic transactions is going to want that.

Probably my biggest problem with this is that these companies each spend a ton of money on education and promoting their brands. Switching TLDs would work against all those dollars spent, and ultimately wouldn’t prevent blind redirects, XSS phishing, or just plain old URL obfuscation. Yes, it would make detection slightly easier, but by how much? An order of magnitude? I highly doubt it. And even if it did, is our problem not being able to detect phishing sites well, or not being able to take them down quickly enough? I think it’s the latter, and I don’t think a .bank TLD or any derivative of it is going to solve that issue.

While I applaud the creativity, I really don’t think it does enough to warrant going through. But I have no doubt that where there’s a will there’s a way, and it will go through despite my opinions. I know people mean well with these types of proposals, but I think there’s a lot more going on here than just detection. Yes, detection does need to be improved, but there are tons of ways around detection, and phishers have not yet had to resort to them (minus a few experiments).

To me that means we are a long way from having to worry about the detection portion of the attack, and if people want to put a dent in phishing, they should instead focus on building better extradition treaties and tougher international cybercrime laws with all countries. Currently it can take days or weeks to get phishing sites taken down, because there is no political pressure to do so in certain areas. People would be much better served by solving the take-down issue than by creating a new TLD that excludes more phished domains than it protects.

Phishing Through Google (Yet Again)

Sunday, May 20th, 2007

This isn’t new, but a few different people sent me a link showing how Google is yet again being used for phishing. Don’t trust those Google links! I hate to say I told you so, but when Google fixed that one single redirect hole and left the dozens of others in place, I warned that this might happen.

When you leave one redirect hole in place, it doesn’t matter that you closed another one - it’s a mild annoyance to a phisher at best. So this will continue to be a problem until they are all fixed. People will continue to click on those links, and anti-phishing software will continue to be unable to blacklist them, because Google doesn’t like being blacklisted. Google is plenty happy to warn people away from other sites that may contain malware, though (sense some hypocrisy there?).
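
Fixing a redirector isn’t complicated - the target just has to be validated against hosts you actually intend to send people to, rather than passed straight through from the query string. A rough sketch of that check in JavaScript (the host names are placeholders, and the URL parsing uses today’s API purely for illustration):

    // Return true only if the redirect target stays on an explicitly trusted host.
    function isSafeRedirect(target, allowedHosts) {
      try {
        var url = new URL(target, "http://www.example.com/");
        return (url.protocol === "http:" || url.protocol === "https:") &&
               allowedHosts.indexOf(url.hostname) !== -1;
      } catch (e) {
        return false;   // anything unparsable gets rejected outright
      }
    }

    isSafeRedirect("http://www.example.com/search", ["www.example.com"]); // true
    isSafeRedirect("http://phisher.example/login",  ["www.example.com"]); // false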

I’m hoping their executive management wakes up and smells the coffee. It’s something I’ve been saying for over a year now, and we are no closer to having it solved. Worse yet, it’s screwing over the consumers!

Phishing Social Networking Sites

Tuesday, May 8th, 2007

Okay, I had a lot of fun with this post. No new news here, but I was able to talk to someone who was willing to sit down and write out some thoughts from a phisher’s perspective. The phisher goes by the name “lithium” and agreed to answer a number of questions that have been on my mind for a while now. Huge thanks to him, as I think a lot of this is valuable information for the community at large. These are his words - unmodified:

How would you describe yourself? Age? Did you go to school? Interests?

Determined is the best word to describe myself. I’m 18 years young. Yes, I went to school. I left after high school. My interests are mma (mixed martial arts); fitness and last but not least..The internet!

How did you get your start in phishing? How did you get interested in it?

The typical scam mail that my parents kept recieving in their inbox. They were very poorly done! Yet in general they worked. So, I knew automatically I could come up with more efficient methods and have a far greater outcome.

How long have you been phishing?

I’ve been pishing since I turned 14. So thats, Nearly 5 years.

Do you have any idea how many people’s identities you’ve stolen so far?

Way over 20 million. Social networking worms really hit it off for me! I have so many hundreds of thousands of accounts to many websites I haven’t even got a chance to look through.

Did you need to forge any particular relationships with other people/groups to get started?

No, When I started I went solo. Alot of groups came to me asking if I wanted in, I declined.

What types of sites make the best phishing sites?

Social networking sites, Any site that involves teenagers ranging from 14 years old upwards.

What are the steps you take to set up a phishing site?

I try find a domain name that would best suite the current target. Try find a few similarities which would make my site more realistic. Then, Register it! I then find a reliable anonymouse host. (Offshore are the most reliable) Although, I do tend to use compromised hosting accounts.

Secondly, I view the page source. Then I alter the source code to post the forms information to my pishing site.

Thirdly, I create a php file which will POST the current forms information to a text file on my server. I use the same php file with every site, Just minor alterations are needed since it’s mearly a few lines of php code.

How many people do you typically phish per site you post?

That all depends on the size of the website (the ammount of users) Usually, I pish 30k a day.

How do you monetize the identities and how much does that net you?

Social networking sites, Make me $500 to 1k through CPA deals. 5 times out of 10 the person uses the same password for their email account. Now depending what is inside their email inbox determines how much more profit I make. If an email account has one of the following paypal/egold/rapidshare/ebay accounts even the email account itself, I sell those to scammers. All in all, I make 3k to 4k a day. I only pish 3-4 days a week. Depends on how much time I invest, The more time I invest the greater the outcome.

Are there any costs associated with phishing?

Yes there are costs. A dedicated server, VPN, Network encryption software and time.

What sort of hardware/software do you need to do this? Anything special (phishing kits, etc…)? What kind of internet connection do you use?

For MOST social networking sites, I use a program called MyChanger. You can find it on this website - www.myownchanger.com - This makes pishing so much faster on social networking sites. Everything is automated! messaging/bulletins/comments/profile modifications it’s great. Other than that, I get ALOT of custom programs built to suite my needs from freelance developers. My internet connection isn’t anything fancy, A stanard 1mb adsl line.

How do you keep yourself safe from being caught?

I use VPN’s, Dedicated servers, Proxies and my network traffic is
encrypted. All payments are made through egold.

Are there any anti-phishing deterrents (tools or technology) that make life as a phisher harder?

Oh sure, There are many things that make pishing harder. But since Internet Explorer 7 and firefox 2 have implemented an antiphishing protection, Those two cause the most irritation.

Do you foresee any changes to the phishing industry that are worthy of note?

No.

Anything else you’d like to share/last words?

Lazy web developers are the reason I’m still around pishing.

Pretty telling about the current state of affairs, I’d say. The first interesting point I took from this was that the IE7 and FF2 anti-phishing filters are actually a somewhat okay deterrent. From the looks of it they haven’t made much of a dent - they’ve only changed the tactics that phishers use - and I suppose we could have guessed that, since there is no lack of phishing emails in our inboxes. Still, I found it somewhat surprising that they were having any effect at all. I actually predicted that they would before IE7.0 launched, but I lost a bit of hope afterwards. Interesting nonetheless.

The second is that the stolen password works in more than one place 50% of the time - we already knew that, but it’s interesting to hear from a phisher’s perspective how that’s actually used to monetize the attack. A huge thanks to lithium, who allowed me to post all of his words. Does it make you re-think that MySpace profile you set up?

Style Injection Phishing

Saturday, May 5th, 2007

This is certainly not new, but I happened across an interesting link to a bunch of phishing sites built into MySpace. Instead of relying on JavaScript injection or email like a normal phishing site, these MySpace phishing pages rely only on injecting a form, styled to overlay the page itself. The URL to find these is a simple Google dork.
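
The overlay trick works because MySpace lets profiles carry user-supplied styling, and absolutely positioned markup can sit right on top of the legitimate page. A very rough sketch of the kind of check a site could run against user styles before rendering them - real CSS filtering is far harder than a regex, so treat this as illustrative only:

    // Flag user-supplied style that tries to position itself over the page.
    function looksLikeOverlay(css) {
      return /position\s*:\s*(absolute|fixed)/i.test(css) ||
             /z-index\s*:\s*\d{3,}/i.test(css);
    }

    looksLikeOverlay("position:absolute; top:0; left:0; z-index:9999"); // true
    looksLikeOverlay("color:#fff; background:#000");                    // false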

At the time of writing there were 56 phishing pages on MySpace. Obviously not huge as a percentage, but it’s scary that there are any at all. It’s unclear what they intend to do with these URLs; however, I spent a few minutes mapping out the domains used by the phishers:

  • 5 x hur.be
  • 4 x willgle.com
  • 2 x r3voluti0n.com
  • 1 x m3rm.org
  • 1 x spaceadder.info
  • 1 x coolton.dajoob.com
  • 1 x www.profilespider.com
  • 1 x www.itfailz.net
  • 1 x artexstudios.com
  • 1 x members.lycos.co.uk
  • 1 x login-myspace.logindotspace.com
  • 1 x www.googleidols.com

So only 20 were working/alive when I checked. I was able to find one example of the PHP script used (almost all of them were written in PHP); this one was simply wildly mis-configured. A number of them appeared to be old and had been hobbled by MySpace, which changed the URL to a “..” - that broke the script, but the pages were still messed up (as if MySpace pages aren’t already messed up enough to begin with). Pretty ugly.

Vidoop

Wednesday, April 18th, 2007

In an interesting email that was sent to me, I was asked to take a peek at a new software tool, not yet released to the public, called Vidoop (there is an interesting article on it here). While I was unable to actually take a look at the software, I’ve got a pretty good idea of how it works from the Wired article. After downloading a software certificate that allows you to use their software, you basically say, “I like animals,” and it shows you pictures of horses and cats and dogs mixed in with a bunch of non-animal photos. You choose the correct photos (a la the KittenAuth CAPTCHA) and you are granted access.

So here are the major problems I see with this. Firstly, it’s probably not accessible (meaning there aren’t alt tags on the images), because if there were, it would take only a few guesses to get in, since a computer could build databases of “like” things. So basically, as with KittenAuth, the blind are screwed (which we have talked about a dozen times, and I really don’t want to start another conversation on it, I’m just sayin’). Secondly, it’s non-portable, because you have to have the software installed on the computer you want to use. That means you can only use it from one computer (forget going over to a friend’s house and logging in), and if that one computer gets hosed, you need to find an alternate path for getting the software reinstalled (which is often the least secure part of these systems). This type of design is even less portable than tokens, and for a consumer, tokens are nearly unusable already.

Something else that makes me uncomfortable from a security perspective is the single sign-on concept. I’ve always thought single sign-on was a great usability improvement but often terrible for security. Like the old adage - you’re only as strong as your weakest link - the same is often true with single sign-on: you are at the mercy of the weakest security model. If any one site is insecure, you can (in many of the single sign-on setups I have seen) end up compromising all the other trusted sites. Perhaps Vidoop has a great way to solve that issue that revolutionizes the way authentication works and never opens itself up to attack under any scenario. Without looking at it, there’s no way for me to know.

Lastly, because Vidoop uses a relatively small set of photo categories to choose from, there are only a few general choices to brute force (otherwise you’d run into overlap and false positives). If I know the target is male, chances are he isn’t going to pick the fuzzy animals. If I know the target is a 13-year-old girl, chances are she isn’t going to pick photos of computers or sports cars, and so on. Anyway, you see the problems with this. Unlike passwords, which are user-specific (and still guessable), this is highly predictable. Does it stop phishing, keystroke logging, cure cancer, or do any other magical things? I can’t say without looking at it. Will I be using it for large-scale, mission-critical, secure production installs? Doubtful.
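
Some back-of-the-envelope numbers make the brute-force point clearer. I haven’t seen the product, so the category count below is purely an assumption for illustration: if there were, say, 20 broad image categories and a user secretly picked 3 of them, an attacker would only need to cover C(20,3) category combinations, which is tiny next to even a modest password space.

    // Combinations C(n, k): how many distinct category sets an attacker must try
    function choose(n, k) {
      var result = 1;
      for (var i = 1; i <= k; i++) {
        result = result * (n - i + 1) / i;
      }
      return result;
    }

    choose(20, 3);     // 1140 possible category sets (assumed numbers)
    Math.pow(62, 8);   // ~2.2e14 eight-character alphanumeric passwords, for contrast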

McGruff Identity Theft

Sunday, April 8th, 2007

I guess this has been around for a while, but I just recently started seeing it on TV: the McGruff the Crime Dog campaign is now targeting identity theft. This probably wouldn’t be a big deal, except that the way the commercial is worded, it sounds like what they are showing is how identity theft works. What they show in the commercial is someone taking a camera-phone picture of a credit card. Sure, that would disclose the credit card number, the name, and the expiration date, but not a lot more.

Firstly, the amount of crime that camera-phone skimming accounts for has got to be a fraction of a percent compared to people swiping numbers out of trash cans at gas stations and restaurants, or to online identity theft. Secondly, the information you get from only the front of the card is enough for just certain types of credit card transactions - especially since it’s missing the CVV2 number. Lastly, explaining identity theft this way misses a rather huge issue, which is phishing and database hacking.

While I think it’s interesting to teach kids to spot one form of identity theft that they have no chance of being able to stop, it’s unfortunate that there are no commercials teaching them ways to protect their identity online. COPPA laws are interesting, but they only apply if you are a scrupulous company; unfortunately, phishers and hackers don’t particularly care about people’s age. I dunno - it seems like explaining identity theft this way may be doing more harm than good, misguiding people’s understanding of the real issues.