In talking with the browser companies there seems to be more and more interest in content restrictions. For those of you who don’t know anything about it, let me quickly give you the overview. Three or four years ago I was trying to find a way for my company to put malicious user-generated input into a sandbox but still allow it to show up on the site. The obvious answer was to use an iframe to isolate it. That, unfortunately, has all sorts of user experience issues. The first is that you cannot tell how big the iframe needs to be, so you often end up with double scroll-bars, which messes up printing and causes links inside Flash movies to change only the iframe instead of the whole page. Yah, it’s ugly. So I started looking for alternatives.
The first was the concept of a re-sizable iframe. There are security implications with that, since it may allow the parent page to learn the state of the child, so it was thrown out by the Mozilla team. There may be tricky ways to work around that, but some of the other usability problems would still be there, so it’s not really ideal anyway. The best alternative is to create something that tells the browser, “If you trust me, trust me to tell you not to trust me.” This is based on the short-lived Netscape model, which said that if a site is trustworthy you lower the protections of the browser as much as possible. Content restrictions was born. I submitted the concept to Rafael Ebron, who handed it off to Gerv. It went to the WHATWG, and that’s where it’s stayed for the last three years or so.
The Netscape model doesn’t work if the site you trust has lots of dynamic content. So extending it with content restrictions makes a lot of sense for a few reasons. The first is that it puts the onus on the websites to protect themselves. The other is that it doesn’t hurt usability at all, because it’s an opt-in situation. Pretty ideal, actually. While I was talking with Mozilla last week they asked me to put together a list of the top things I’d like to see in content restrictions. They are eager to get started on it, but can’t promise the world. They’d like to hear the top two ideas and then work from there.
How you instantiate content restrictions is still up for debate - whether it be a new header pointing to an XML file, inline meta tags, or a new HTML tag. I’m a little indifferent, except that I think it should be accessible both to people writing dynamic pages and to people who simply include HTML placed there by whatever means (FTP, mail, etc…). So it should probably be a hybrid of a few of those, but that’s a different discussion.
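To make the header-based option concrete, here is a minimal sketch of how a server might advertise such a policy. The header name `X-Content-Restrictions` and its key=value syntax are entirely my invention for illustration, not anything the browser vendors have agreed on:

```python
# Sketch of a server emitting a hypothetical content-restrictions policy.
# The header name and the key=value syntax are invented for illustration.

def build_policy_header(policy: dict) -> tuple:
    """Serialize a policy dict into a (header-name, header-value) pair."""
    value = "; ".join(f"{k}={v}" for k, v in sorted(policy.items()))
    return ("X-Content-Restrictions", value)

header = build_policy_header({
    "allow-new-dom": "no",    # no iframes/frames inside the restricted region
    "auto-redirect": "no",    # block redirection that is not user initiated
    "event-handlers": "none", # strip all on* handlers
})
print(header)
```

A real design would also have to answer how the policy is scoped (whole page vs. a region between two markers), which is exactly the part still being debated.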
Other possibilities include disallowing the creation of a new DOM (no iframes, frames, or the like between two designated points on the page). Another is blocking any redirection that is not user-initiated. That’s a common problem because malicious users perform redirection to other domains.
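As a rough sketch of how a browser might enforce those two restrictions, here is some hypothetical pseudocode in Python. The policy keys and the action names are made up for illustration, assuming the browser consults the policy before honoring a DOM-creating tag or a script-initiated redirect:

```python
# Hedged sketch: a browser consulting a (hypothetical) policy before
# honoring a DOM-creating element or a redirect in a restricted region.

DOM_CREATING_TAGS = {"iframe", "frame", "frameset", "object"}

def allowed(policy: dict, action: str, detail: str = "") -> bool:
    if action == "create-element":
        if detail.lower() in DOM_CREATING_TAGS:
            return policy.get("allow-new-dom", True)
        return True  # ordinary tags are unaffected by this restriction
    if action == "redirect":
        # 'detail' marks whether the user initiated it (e.g. clicked a link)
        return detail == "user-initiated" or policy.get("allow-auto-redirect", True)
    return True

policy = {"allow-new-dom": False, "allow-auto-redirect": False}
print(allowed(policy, "create-element", "iframe"))    # blocked
print(allowed(policy, "redirect", "script"))          # blocked: not user initiated
print(allowed(policy, "redirect", "user-initiated"))  # allowed
```

The point is just that both restrictions reduce to cheap checks at the moment the browser is about to act, which is why they seem like low-hanging fruit.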
A possibly valuable tool for content restrictions would be the ability to limit what sort of functionality is allowed between two sets of tags. The first example would be turning off any HTML tags that aren’t explicitly allowed to be rendered. The second would be limiting event handlers to a pre-defined set (or removing them entirely). I’ve seen a number of situations where this would have been handy as a last resort.
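To show what that filtering might look like, here is a toy sketch. The allowlist is an arbitrary example, not a proposal, and a real implementation would live in the browser’s HTML parser; the regexes here keep it short and are not safe against a determined attacker:

```python
import re

# Toy sketch of tag allowlisting plus event-handler stripping.
# ALLOWED_TAGS is an example set, not a proposed standard.
ALLOWED_TAGS = {"b", "i", "p", "a", "br"}

TAG_RE = re.compile(r"</?([a-zA-Z][a-zA-Z0-9]*)[^>]*>")
HANDLER_RE = re.compile(r"\s+on[a-zA-Z]+\s*=\s*(\"[^\"]*\"|'[^']*'|\S+)", re.I)

def restrict(html: str) -> str:
    # Drop any tag outside the allowlist (its text content remains).
    html = TAG_RE.sub(
        lambda m: m.group(0) if m.group(1).lower() in ALLOWED_TAGS else "",
        html,
    )
    # Strip event-handler attributes from whatever tags remain.
    return HANDLER_RE.sub("", html)

print(restrict('<p onclick="evil()">hi <script>x()</script><b>there</b></p>'))
```

Note that dropping the `<script>` tags leaves their inner text behind, which is fine once the tags themselves can no longer execute anything; a browser-side version could do better because it works on the parsed tree rather than the raw markup.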
Another thing I have been toying with quite a bit lately is the concept of XMLHttpRequest. One thing that has always surprised me is that it allows more than its name implies. That is, if you request something that isn’t XML, it still gives access to the page. It could be up to the page’s discretion whether XHR has access to anything other than XML. That would limit XHR to session riding, rather than allowing it to read nonces or perform the other unsavory functions used in worms.
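A minimal sketch of that idea, assuming the page has opted in to an XML-only policy: the request still goes out (so session riding remains possible), but the response body is withheld from script unless the server declared an XML MIME type. The function names and the opt-in flag are my own shorthand:

```python
# Hedged sketch: gating XHR response access on the declared MIME type.
# "xml_only" stands in for a hypothetical page-level opt-in.

XML_TYPES = {"text/xml", "application/xml"}

def is_xml_type(content_type: str) -> bool:
    # "application/rss+xml" and friends count too, per the +xml convention.
    mime = content_type.split(";")[0].strip().lower()
    return mime in XML_TYPES or mime.endswith("+xml")

def response_text(content_type, body, xml_only=True):
    """Return the body only if the policy permits script to read it."""
    if xml_only and not is_xml_type(content_type):
        return None  # request happened, but the body stays hidden from script
    return body

print(response_text("text/html; charset=utf-8", "<html>secret nonce</html>"))
print(response_text("application/xml", "<doc/>"))
```

The interesting property is that the attacker can still cause requests (so CSRF defenses are still needed), but can no longer read nonces or other page content back out, which is the step most XSS worms depend on.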
So I’d like to get people’s feedback. Those are some of my ideas, but I’m hoping people will have even better ones. Once I get the top two ideas, I’ll submit those, and we’ll rank-order the next several and submit them as supplemental ideas for a later day.