
Microsoft XSS Library is Pretty Good

I’m actually impressed. I hadn’t had an opportunity to look at Microsoft’s Anti-Cross Site Scripting library (version 1.5) until today. It’s pretty cool. To save anyone the time of looking at it: it takes anything outside the normal A-Za-z0-9 ASCII range and changes it into its hex HTML entity equivalent in an HTML context and a \x00-style JavaScript encoding in JavaScript contexts. It looks pretty bulletproof, because they encode everything that could feasibly be used for malicious activities.
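
Roughly, the policy described above can be sketched like this (my own illustrative C# sketch of the behavior, not the library’s actual code; all names here are made up):

    using System;
    using System.Text;

    // Illustrative sketch only: pass A-Za-z0-9 through untouched and encode
    // every other character for the target context, as described above.
    static class WhitelistEncodingSketch
    {
        static bool IsWhitelisted(char c)
        {
            return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9');
        }

        // HTML context: non-whitelisted characters become hex numeric entities.
        public static string HtmlEncode(string input)
        {
            var sb = new StringBuilder();
            foreach (char c in input)
            {
                if (IsWhitelisted(c))
                    sb.Append(c);
                else
                    sb.AppendFormat("&#x{0:X2};", (int)c);
            }
            return sb.ToString();
        }

        // JavaScript string context: non-whitelisted characters become \xHH
        // escapes (or \uHHHH for characters above 0xFF).
        public static string JavaScriptEncode(string input)
        {
            var sb = new StringBuilder();
            foreach (char c in input)
            {
                if (IsWhitelisted(c))
                    sb.Append(c);
                else if (c < 256)
                    sb.AppendFormat("\\x{0:X2}", (int)c);
                else
                    sb.AppendFormat("\\u{0:X4}", (int)c);
            }
            return sb.ToString();
        }

        static void Main()
        {
            Console.WriteLine(HtmlEncode("<script>alert('xss')</script>"));
            Console.WriteLine(JavaScriptEncode("';alert(document.cookie)//"));
        }
    }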

Although I’m pretty impressed by the variety of tests that this succeeds in stopping, I’m still not certain that this is the right place to be fixing this particular issue. To me this has always felt like a browser issue more than an application issue to solve, because there are lots of different ways this type of attack can show up beyond the server-side code a developer produces, including web server vulnerabilities, DOM-based XSS, and so on.

Anyway, it’s still cool. Big thanks to Kyran for letting me have access to test this in a real world environment.

6 Responses to “Microsoft XSS Library is Pretty Good”

  1. ntp Says:

    here’s the blog where you can ask questions to the team inside Microsoft that developed this library:
    http://blogs.msdn.com/ace_team/

    If you are an ASP.NET web developer and still using Server.HTMLEncode() from the .NET Framework, you should consider moving to this library as it does a lot of the input validation correctly and efficiently.

    Use the HtmlEncode() function whenever displaying data in the HTML context.


    Use the UrlEncode() function whenever assigning user-controlled strings to attributes within an HTML tag.
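
    For instance, a minimal sketch (assuming the static AntiXss class in the Microsoft.Security.Application namespace that the 1.5 library ships; check the library’s own docs for the exact names):

        using System;
        using Microsoft.Security.Application;  // the Anti-XSS library (assumed namespace)

        class Demo
        {
            static void Main()
            {
                string userInput = "<img src=x onerror=alert(1)>";

                // HTML context: encode before echoing user data into page markup.
                Console.WriteLine(AntiXss.HtmlEncode(userInput));

                // URL context: encode before building a link out of user data.
                Console.WriteLine("search.aspx?q=" + AntiXss.UrlEncode(userInput));
            }
        }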


  2. Thijs van der Vossen Says:

    Doing this really sucks when you’re handling non-Latin text. If you use this you’ll no longer be able to view source and read Russian or Japanese text, and you’re going to greatly increase bandwidth requirements.

  3. Kyran Says:

    Yeah, it seems to be great. But, as you mentioned, something needs to be done elsewhere. Lazy, sloppy programming was the problem in the first place.

  4. Edward Z. Yang Says:

    Microsoft’s approach is great for lazy programmers who can’t be bothered to specify a character set for pages they output. But for everyone else, the method is inefficient and not friendly to international users.

    For example, if you’re using UTF-8, each of these entities has a clearly defined byte sequence that represents that character: the two are interchangeable. You get strange character encoding bugs when there are bytes lying around that are not valid, but as long as the string is well-formed UTF-8, there’s no ambiguous behavior. The only difference is size and readability!
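
    To see the size point concretely, here’s a small illustrative snippet comparing the two representations of a single Cyrillic character (the character choice is just an example):

        using System;
        using System.Text;

        class EntityVsUtf8
        {
            static void Main()
            {
                // The Cyrillic letter U+043F can reach the browser two ways that
                // render identically on a UTF-8 page: as a numeric entity, or as
                // the raw UTF-8 bytes for the character itself.
                string asEntity = "&#x43F;";                       // 7 bytes on the wire
                byte[] asUtf8 = Encoding.UTF8.GetBytes("\u043F");  // 2 bytes on the wire

                Console.WriteLine("entity form: {0} ({1} bytes)", asEntity, asEntity.Length);
                Console.WriteLine("utf-8 form : {0} ({1} bytes)", BitConverter.ToString(asUtf8), asUtf8.Length);
            }
        }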

    For other character encodings, the function should be optimized to use the direct encodings of the characters supplied by the 8-bit character set, and hex encode everything else.

    But, of course, this only applies if people use an explicit character encoding…

  5. ed Says:

    I don’t get it; this vuln has been on the MS site for fricking ages: http://msdn.microsoft.com/library/default.asp?url=//ha.ckers.org/images/stallowned.jpg

  6. ed Says:

    There’s also this amusing XSS vuln from MS… Pot/Kettle/Black…

    http://s5h.net/u?46