web application security lab

Breaking out of HTML constructs for cross site scripting

Today I spent about 30 seconds looking at Dean Brettle’s NeatHtml page, which is designed to sanitize HTML to remove XSS. At the moment I think it’s broken, so it’s probably not much of a valid test, but the live demo is designed to show what is and is not possible. Of course, the second or third thing I tried was to end the textarea that the text was being displayed in and pop open an alert box. Voila!

But it got me thinking: there are a number of HTML constructs that don’t allow the HTML inside them to be rendered as HTML, but only as plain text. Common ones that I’ve seen in the wild are textarea, title, comment, and noscript. To the casual XSS penetration tester, these can be easily glossed over, unless you view source and see the context in which the HTML encapsulates the information.

It’s actually very easy to break out of these, assuming HTML is allowed. Just because the title is created dynamically and encapsulated by title tags does not mean it should be considered safe. Honestly, I don’t think webmasters are even thinking about this issue, or if they are, they are unaware of how it actually works.
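For example, if user input lands inside a textarea without encoding, a closing tag breaks right out of it. Entity-encoding the angle brackets (Python’s html.escape here, just as one illustration) defuses the breakout:

```python
import html

# User-supplied text destined for the inside of a <textarea>.
payload = "</textarea><script>alert(1)</script>"

# Rendered verbatim, the closing tag ends the textarea early and the
# script executes. Entity-encoding turns it into inert text.
safe = html.escape(payload)
print(safe)
# &lt;/textarea&gt;&lt;script&gt;alert(1)&lt;/script&gt;
```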

This, to me, points to a bigger issue with quality assurance testing in web application security. In a previous job I’d always be super frustrated to find that the application worked great unless you entered a quote or a tab or some other random char, and then all of a sudden you’d end up with an application with serious security issues. I think fuzzing is part of the answer. I think every application should at one point or another send every single character in the ASCII character set through the application to see whether it has the intended results or not.

The major problem with that is a lot of these problems end up being how the browsers themselves render the content, not how the application serves it up. Don’t believe me? Look at the XSS Cheat Sheet and see how many vectors affect both Firefox and Internet Explorer. Very few overlap, actually. In fact, when I am testing vulnerabilities I have to test each vector no less than 5 times: once for IE, once for Netscape in IE mode and once in Gecko mode (because the way it handles URL translation is different from both IE and Firefox), once in Firefox and once in Opera.

What is lacking is a browser that understands all five DOMs and also how they interact with the user. Some of these vectors don’t fire unless there is some interaction with the user. Some require clicking through alert boxes. Some send the browser into infinite loops. All of this makes testing extremely difficult and time consuming. Automation is a real problem in XSS attack detection. A few web security consultancy firms that I’ve talked to make a blanket statement that if any HTML is allowed to be injected, the application is considered vulnerable. I’ve chewed on that one for the last year or so, and I think it’s half right.

Of course there is the scenario where you have a whitelist of usable tags with no allowed parameters (like the <BR> tag, for instance) that couldn’t really cause harm. If you add a style tag it’s all over, of course, but the simple standalone <BR> tag is pretty harmless. Does that mean that allowing an unknown HTML tag like <XSS> makes it vulnerable? Of course not, because that’s not a valid tag. So is it a valid test? I’m still uncertain, but maybe what it does point out is that that particular application needs more testing.
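A sketch of that kind of whitelist using Python’s stdlib parser - a bare <br> with no attributes passes through, and every other tag (including unknown ones like <XSS>) gets entity-encoded rather than rendered:

```python
import html
from html.parser import HTMLParser

# Toy whitelist: only attribute-free <br> survives; everything else is
# entity-encoded. Illustrative policy, not any particular library's rules.
class TagWhitelist(HTMLParser):
    ALLOWED = {"br"}

    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.ALLOWED and not attrs:
            self.out.append(f"<{tag}>")
        else:
            # get_starttag_text() returns the raw tag as typed, attrs and all
            self.out.append(html.escape(self.get_starttag_text()))

    def handle_endtag(self, tag):
        text = f"</{tag}>"
        self.out.append(text if tag in self.ALLOWED else html.escape(text))

    def handle_data(self, data):
        self.out.append(html.escape(data))

    def handle_entityref(self, name):
        self.out.append(f"&{name};")  # keep existing entities as-is

    def handle_charref(self, name):
        self.out.append(f"&#{name};")

def sanitize(markup):
    parser = TagWhitelist()
    parser.feed(markup)
    parser.close()
    return "".join(parser.out)
```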

So rather than taking a binary auditing position on whether HTML injection of any sort makes an application vulnerable, maybe it’s a heuristic flag that marks the application as suspicious. Really, any state change in the application (in the case of SQL injection, for instance), or in how the browser reacts to the returned data (in the case of XSS), is suspect. Anyway, food for thought.


10 Responses to “Breaking out of HTML constructs for cross site scripting”

  1. Dean Brettle Says:

    Just stumbled across your post today. Thanks for taking a look at NeatHtml and finding the hole. Turns out this is not a bug in NeatHtml but is instead a bug in Mono (but not .NET). For details, see:

    I’ve worked around the bug for now, so feel free to try to find more holes in NeatHtml.

    BTW, in case it wasn’t obvious, the HTML you enter will be rendered below the textarea (assuming NeatHtml blesses it). Also, you should know that at the moment NeatHtml doesn’t attempt to prevent HTML “vandalism” (e.g. very large or inappropriately positioned content). It only tries to prevent script injection.

    Thanks again!

  2. RSnake Says:

    Hey, Dean, glad to see you got a workaround for that. Looks like you’re properly HTML encoding angle brackets. I wasn’t picking on you, per se, but rather how these types of vulnerabilities are constructed.

    But about the second point: that is an issue I have spent an insane amount of time thinking about, and I’ve only come up with a few clever ideas to help solve it, none of which I am in love with. Positioning content using styles (which, by the way, should never be allowed, given all the CSS tricks out there) is an easy way to create issues for the website author (positioning over buttons and redirecting the user off host, etc…). But more than that, it’s bad for the brand to end up with a pink website when your colors are clearly blue, etc…

    It’s an interesting problem that I’d love to hear thoughts on if you ever think of anything. Of course, I’ll probably just blow a hole in it, but at least it gives us something to talk about where right now, I don’t have anything left to talk about. CSS wrapping, iframes, JavaScript wrappers, all have issues. Big ones.

  3. Dean Brettle Says:

    I don’t feel picked on at all. I’m quite glad you found the hole actually.

    As for vandalism, I agree it’s a big issue. I think we should probably split off spoofing attacks (i.e. HTML+styles that trick the user into thinking that the content is actually part of the hosting site) and consider them as a separate issue.

    Now that I look more closely at the schema NeatHtml is using, I think it might be sufficient to prevent spoofing attacks. It only allows a subset of style properties to be set, and not all values are allowed. For example, the ‘position’ property is not allowed and the ‘margin’ property can’t have a negative value. I *think* the restrictions should be sufficient to prevent user-provided HTML+styles from getting “outside-the-box” enough for a spoofing attack. Please let me know if you can find a way around it. To make things easier for you, the regexp that inline styles must match can be found at the bottom of the following file:
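    (Not NeatHtml’s actual regexp - just a toy illustration of the kind of restrictions described: a short list of allowed properties, with negative margins rejected. The property list is an assumption for the example.)

```python
import re

# Each allowed property maps to a regex its value must match.
ALLOWED_DECLS = {
    "color": re.compile(r"^#[0-9a-fA-F]{3,6}$"),
    "font-size": re.compile(r"^\d{1,2}(px|pt|em)$"),
    "margin": re.compile(r"^\d+(px|em)?$"),  # no leading '-', so no negatives
}

def style_allowed(style):
    for decl in filter(None, (d.strip() for d in style.split(";"))):
        prop, _, value = decl.partition(":")
        regex = ALLOWED_DECLS.get(prop.strip())
        if regex is None or not regex.match(value.strip()):
            return False  # unknown property (e.g. position) or bad value
    return True
```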

    As for non-spoofing vandalism (e.g. big text, ugly colors, etc), there are a couple of options. First, you could use a more restrictive schema: one that didn’t allow colors to be set, or one that restricted font-size, etc. You could even disallow all inline styles and force users to use only HTML and a restricted set of “class” values to style their content.

    The problem with using these approaches with the current version of NeatHtml is that, right now, NeatHtml only accepts or rejects the HTML. That means that if the HTML includes something that was copied and pasted from some other page (e.g. when reporting errors to a forum) the whole post might be rejected because of some innocent but prohibited style in the page the HTML was copied from. In a future version of NeatHtml, I hope to give NeatHtml the ability to strip out tags, attributes, and maybe even style properties that it doesn’t recognize. That would allow the content to be “accepted with corrections”, if you will. :-)

    Another way to prevent both vandalism and spoofing might be to render the HTML within an HTML element that has been styled in such a way that its size can grow, but only up to some limit, and its content is explicitly clipped at its boundaries. I’m not sure, but I suspect this is doable in browsers with sufficient CSS support. This would allow the attacker to make a mess, but only on the portion of the page that the site admin allowed.
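    In CSS terms, that containment might look something like this (class name and property choices are assumptions for the sketch, not NeatHtml’s actual styling):

```css
/* User-provided HTML is rendered inside this container. */
.user-content {
  position: relative;  /* establishes the containing block for any absolutely positioned children */
  max-width: 40em;
  max-height: 30em;    /* content can grow, but only up to this limit */
  overflow: hidden;    /* anything that tries to escape the box is clipped */
}
```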


  4. RSnake Says:

    I’ll have to spend more time looking at NeatHtml to tell if it is vulnerable to that, but if it follows the rigid syntax of the regex in that URL, it looks pretty rock solid. I’m really glad to hear you aren’t trying to strip out HTML, though. I know it’s a terrible user experience to have lost all your changes (maybe there’s a way to do the validation in JavaScript, so the bad input doesn’t submit to the server, as a way to soften that bad user experience). However, it’s a worse user experience to have your info stolen, so given the alternative… (Although I did notice that I was unable to get even a few normal, non-malicious tags to render because they weren’t in proper XHTML format - a pretty restrictive policy, to be sure.)

    Restricting styles to classes is an interesting method I hadn’t really thought about. Of course, you’d have to be careful you didn’t have a style that they could use against you (based on its position), but yes, I think there is something to that. The major flaw with that is that users basically have to understand which style attributes mean what. That basically requires them to learn your style of HTML (similar to my problem with phpBB codes - it’s just a matter of obfuscation). For this to be really good, it should not hinder user behavior at all, except when what they are doing is invalid. NeatHtml goes a long way toward fixing the issues with user input, but it also represents a pretty big customer-experience hurdle.

    I’ve actually seen your last point in action once before. It works pretty well, actually. Of course, it is highly limiting, as you would expect (and I did find one hole in that solution), but it does a pretty good job. It’s worth experimenting with though, certainly.

  5. Dean Brettle Says:

    I expect NeatHtml to be used primarily with a client-side WYSIWYG XHTML editor like FCKEditor. That helps ensure proper XHTML format, and I think it can also be set up to set class attributes appropriately, so users don’t need to know which style attributes mean what. To prevent attackers from using a class against you, you just wouldn’t include it in the list of values that NeatHtml allows for the class attribute.

    As for stripping the HTML causing loss of desirable user content, I think the best approach is to store the HTML exactly as entered and then run it through NeatHtml immediately prior to display. That way, the original HTML could be available via some other means (e.g. as source in a textarea) for the user or an admin to edit, if necessary. Using NeatHtml immediately prior to display also allows the most recent schema to be used. That ensures that any security fixes or usability improvements apply to both old and new content. If it is too expensive to run NeatHtml every time the content is displayed, caching techniques could be used. For example, the NeatHtml-filtered HTML could be cached in the DB and the cache invalidated when the schema is upgraded.
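    A sketch of that caching scheme - store the raw HTML, filter on display, and key the cache on a schema version so a schema upgrade invalidates old filtered output (filter_html is a stand-in for NeatHtml; all names here are illustrative):

```python
# Bump this whenever the filtering schema changes; old cache entries
# simply stop being found and are regenerated with the new rules.
SCHEMA_VERSION = 3
_cache = {}

def render(post_id, raw_html, filter_html):
    key = (post_id, SCHEMA_VERSION)
    if key not in _cache:
        _cache[key] = filter_html(raw_html)  # only runs on a cache miss
    return _cache[key]
```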

    On the render-in-a-restricted-area idea, was the hole you found fundamental to that idea, or was it just a hole in the implementation you saw?

  6. RSnake Says:

    I have heard some compelling arguments for parsing before display, and I have heard a lot of performance arguments saying you can’t do that on each display. So I’m torn. From a security standpoint, I think a lot of vectors (the more obscure ones anyway) rely on the surrounding text rather than being standalone vectors (especially when it’s malformed HTML). If you were very strict about requiring that what you allow in be correctly formed, that shouldn’t be a problem, but most implementations I’ve seen let those sorts of things pass right on through.

    Your statement about not allowing a class… does that mean that you blacklist certain classes, or that you whitelist the allowed classes? (I’m thinking it would have to be the latter, but I want to make sure I understand.) That would work pretty well. That’s part of the implementation problem I found in the site I was referring to, but that site also allowed absolute positioning to the right and left (not up and down), which still allowed me to overwrite navigation links in the left navigation. I haven’t tested whether you can bound the information on the horizontal axis, but I should think it’s possible.

  7. Dean Brettle Says:

    NeatHtml requires well-formed XML. Well, almost. To accommodate some common non-well-formed HTML idioms, NeatHtml actually “cleans up” the HTML before validating it against the schema. The output includes those changes, so the HTML that is actually displayed is what was validated, even if it isn’t exactly what the user provided. The cleaning up includes:
    <br> -> <br/> where needed
    & -> &amp; where appropriate
    lowercasing of tag names

    At any rate, I think NeatHtml is strict enough that it shouldn’t be vulnerable to vectors which rely on surrounding content.

    As for class attribute values, I was proposing a whitelist approach. The NeatHtml schema would just be modified to list the allowed values.

    FYI, I’ve updated the NeatHtml demo to render the HTML in a div that has been styled to (hopefully) prevent the user-provided content from “escaping”. I’ve only tested it with Firefox 1.5 so far. Let me know what you think!


  8. Dean Brettle Says:

    FYI, I just fixed the div styling to work in Windows IE5+. I’ve also verified that it works in Firefox 1.0, Opera 7.54, Opera 9, Netscape 6.2, Netscape 7.2, Mozilla 1.7.12, Safari 1.2, and Camino 1.0. The only browser I tested that it didn’t work in was Mac IE 5.2.3 (which even MS doesn’t support anymore).

  9. web application security lab - Archive » CSS Security Says:

    […] So it really depends on what you are trying to stop. If you are simply trying to stop XSS, even that can be a nightmare. Firstly, you have to keep the content on the page, so the @import function and link tags must be rejected. Next, you need to sanitize the information by removing erroneous characters and comments, blah blah, before detection. Then you need to search for injection points, like expression and url, and reject those. Also, make sure you have made your page in a character encoding like UTF-8 so you don’t run into UTF-7 or US-ASCII issues. Are you then safe? Honestly, I don’t know. As CSS evolves, the chances of that being the only way to instantiate JavaScript and VBScript are pretty low. Just when I think I know all the tricky ways to get JavaScript on a page I find out one more that blows everything we knew out of the water. But for now it may work. To stop CSRF you have the same rules as above, but now you need to remove any function that can include a remote image. Fortunately, this also happens to work the same way as above. Because if you can include a remote image, you can include a JavaScript directive, so you may actually be okay if you remove the XSS above. No promises though, CSS is not finished adding features. To stop overlays, you need to reject positioning tags. That can be a mess, but I believe it’s possible. Both absolute and relative positioning are risky. There may be ways to wrap the information in tags to reduce the risk of positioning tags. See the comments in this old post where Dean Brettle and I discuss CSS wrapping for some ideas. Lastly, to remove the rest of the branding issues, one trick is to throw the content in an iframe so it does not have access to outside the frame in question. Outside of that, wrapping may work for some things, but definitely not changing the scrollbars or something equally annoying. […]

  10. web application security lab - Archive » BeEF XSS Exploitation Framework Says:

    […] Jeremiah Grossman sent me an interesting link yesterday to BindShell’s tool they released called BeEF. This is an interesting take on a problem I’ve had for ages - how do you test the effectiveness of exploits against multiple browsers? Typically I have to test all browsers against the cross-site scripting vectors one at a time. It’s tedious and error prone. […]