web application security lab

The Virtues of WAF Egress

I’ve been thinking about this for a long time, and I haven’t seen anyone else talk about it, so here goes. As anyone who’s read this site for any length of time knows, I’ve never had much of a soft spot in my heart for web application firewalls, for various reasons: cost, false positives, false negatives, incomplete fixes, and so on. However, recently I’ve been asked to look at more and more people’s technology (as part of our consulting practice) and decide what I do and don’t like. I was talking to a new startup a week or two ago about the new WAF appliance they are building and all its virtues, blah blah. Of course I had to start punching holes in it the second I heard the concept: “What about DOM based XSS?” The CEO replied, “No idea.” Hurray, I win! Or did I?

The problem with DOM based XSS is that it isn’t actually sent to the server (in most cases - and those are the cases I’m interested in for this problem). Generally those payloads live in the anchor portion of the URL - everything after the # - which the browser never sends to the server (as we found during the UXSS in Adobe Reader in Firefox). Try as they might, all the King’s horses and all the King’s WAFs couldn’t write rules to block it inbound; all they could do was modify the MIME types to deliver the payload more securely. There were lessons learned there, but let’s focus on DOM based XSS in JavaScript, rather than in client side technologies, for a moment.
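To make the inbound blind spot concrete, here’s a minimal sketch - the function name and markup are mine, purely illustrative. The payload arrives in the fragment, so the only place it can be neutralized is in the client side code itself, e.g. by assigning it as text rather than markup:

```javascript
// Classic DOM based XSS shape: the payload rides in the URL fragment, which
// the browser never transmits, so no server-side filter ever sees it.
// A vulnerable page might do:  document.write("Hi " + location.hash.slice(1));
// A safe variant assigns the attacker-controlled value as text, not markup:
function renderGreeting(doc, loc) {
  const name = decodeURIComponent(loc.hash.slice(1)); // e.g. "#<img src=x onerror=...>"
  const el = doc.createElement("span");
  el.textContent = "Hi " + name; // textContent is never parsed as HTML
  return el;
}
```

Nothing in that round trip ever hits the wire past the fragment separator, which is exactly why an inbound-only WAF is blind to it.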

Knowing that we can’t stop DOM based XSS at the server level (because the server never sees it), and that it can’t be seen at the network level either, short of auditing the code all we can really do is wait for someone to exploit it and then rewrite the code. But therein lies the problem with all patching: in many cases it’s deathly slow. So why not set up an egress filter on a WAF to programmatically change the JavaScript in transit to be safe? Think about it: if you know you have an insecure function on a page, the fix is either six weeks of filing tickets, fixing the bug, testing, and deploying, or a ten minute change on a WAF. That’s a huge value in time savings.
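As a sketch of what such an egress rewrite might look like - the rule names and patterns here are hypothetical, not any vendor’s actual rule language - a WAF could hold a list of known-bad snippets and their safe replacements and apply them to every outbound response:

```javascript
// Hypothetical egress rule: the security team knows a page ships an insecure
// sink (document.write of location.hash). Until the real fix deploys, the WAF
// rewrites the response body in transit. Pattern and replacement are examples.
const egressRules = [
  {
    // match the known-bad line in the served JavaScript
    pattern: /document\.write\(\s*location\.hash\s*\)/g,
    // swap in a version that strips the dangerous characters first
    replacement: "document.write(location.hash.replace(/[<>\"']/g, ''))"
  }
];

function applyEgressRules(body, rules = egressRules) {
  return rules.reduce((out, rule) => out.replace(rule.pattern, rule.replacement), body);
}
```

The obvious caveat is that text-level rewriting of code is fragile; it only works as a stopgap for a snippet you know verbatim.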

There’s one more advantage to this - if you can set the egress filter to deliver specific content to specific IP addresses, you can actually use the WAF filter to test the changes you are suggesting to your developers ahead of time. Often the security guys know how to make the changes better than anyone, so if they can deploy the change in the WAF and test it themselves before it rolls out to anyone else, they’ve reduced the risk of a global deployment of code that failed to fix the problem, or worse yet, code that completely breaks the functionality of the site in some way.
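The staged-rollout idea reduces to a per-client switch; the IP list and function below are hypothetical (a real WAF would express this in its own rule language):

```javascript
// Only the security team's addresses receive the candidate fix; everyone
// else gets the untouched response. Addresses are from the RFC 5737
// documentation range - purely illustrative.
const TESTER_IPS = new Set(["203.0.113.10", "203.0.113.11"]);

function selectResponseBody(clientIp, originalBody, patchedBody) {
  return TESTER_IPS.has(clientIp) ? patchedBody : originalBody;
}
```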

There are other virtues too. If you see database strings in the output that clearly shouldn’t ever appear, like ODBC errors, you could completely block the response, and so on. Don’t get too excited - I’m still not on the WAF bandwagon - but I’m starting to see more interesting applications for it, above and beyond a simple short term inbound patching mechanism.
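That outbound-block idea is easy to sketch; the signatures below are illustrative examples, not a vetted rule set:

```javascript
// If the response body leaks strings that should never reach a user
// (ODBC/SQL error text), suppress the page entirely on egress.
const LEAK_SIGNATURES = [
  /\[Microsoft\]\[ODBC .* Driver\]/i,
  /You have an error in your SQL syntax/i,
  /Warning: mysql_/i
];

function shouldBlockResponse(body) {
  return LEAK_SIGNATURES.some((sig) => sig.test(body));
}
```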

8 Responses to “The Virtues of WAF Egress”

  1. Ivan Ristic Says:

    Yes, there is a lot a WAF can do with outbound traffic. I’ve been thinking about it for some time now but, unfortunately, haven’t made any significant progress implementing it. For example you could detect JavaScript code in places where it is not expected, look for weird HTML/JavaScript code indicative of attacks, remove external links, and so on.

    While I’d like to think full support for DOM manipulation on the server will be available at some point, today, in ModSecurity 2.2.x (development version available for download) we support content injection, where you can inject stuff at the beginning and at the end of the page. I built this feature to make DOM XSS detection possible. The idea is to inject a chunk of JavaScript to analyse the request URI from inside the browser to detect attacks. Have a look at the presentation I gave at the recent OWASP conference in Milan - (content injection mentioned on slide 30).
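    The content-injection idea might look something like this in miniature - a snippet appended to each page that inspects the URL from inside the browser, where the fragment is visible (the patterns and reporting hook are illustrative only, not ModSecurity’s actual injected code):

```javascript
// Runs in the browser, so it can see the fragment the server never receives.
function looksLikeDomXssAttempt(href) {
  const suspicious = [/<script/i, /onerror\s*=/i, /javascript:/i];
  // decode once so trivially URL-encoded payloads are caught too
  let candidate = href;
  try { candidate = decodeURIComponent(href); } catch (e) { /* malformed encoding */ }
  return suspicious.some((p) => p.test(candidate));
}
// The injected snippet might then do:
//   if (looksLikeDomXssAttempt(location.href)) { /* report or neutralize */ }
```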

  2. Computer Guru Says:

While this is a good idea in theory, I don’t think it would actually work, simply because JavaScript is such a loose, get-away-with-murder kind of language.

Code that’s harmful on one browser is perfectly safe on another. If you choose to protect against ECMA-standard JS, any browser that runs a single line outside the ECMA spec is no longer protected. Protecting against certain content will break other stuff.

    Honestly, with scripting languages in general and JS especially, there is never going to be a one-click cure. Developers just have to be incredibly careful, because there is no other way around it :S

  3. Computer Guru Says:

Clarification: even with DOM interpretation of code and analysis of the outcome, you’ll still get stuff that doesn’t execute the same outside of the realm of ECMA.

Take a look at the Dojo ShrinkSafe utility/Rhino framework. It uses a damn-good DOM interpreter to analyze and re-create JavaScript code in a smaller package. The only reason it’s safe to use is that it doesn’t actually use any different functions; it only restructures the code.

I’ve played around with the Rhino engine and used it as an actual re-parser to re-create JS code based on what actually gets done, and though that works great for bog-standard JS in Rhino, the output rarely works as expected in any other browser/JS interpreter.

    (/me loves work as a desktop software author :))

  4. ntp Says:

brilliant! i’ve said what you are saying here before a few times (*), and even in an owasp presentation or two. my primary concerns with waf’s: i don’t like putting “code in front of code” or just randomly throwing a security appliance or application proxy into an environment, for “security” reasons as well as “availability” and “performance” reasons. some also say that better reliability adds up to better security, and i would certainly agree with that point.

    security appliances are typically some of the least secure devices on your network… in a similar way that security features in programming languages are not, by default, secure. they usually have all sorts of nasty capabilities for adversaries built right into them, such as ability to capture packets and replay them differently or become a MITM (and have all the tools one would want to be nefarious without even installing anything).

    if your organization suffers from the occasional syn attacks, reflected attacks (whether tcp, udp, or icmp) such as bang, smurf, etc… or even http resource starvation… waf’s are just going to hurt you if used on the inbound. plus - the actual content is what you’re trying to protect… your own content… for your website… that’s all outbound traffic last i checked! blocking anything on the inbound could also cause application problems, compatibility issues, instability, performance issues, and the list goes on. try being an operator and troubleshooting network/application path issues with 2 firewalls, 2 load-balancers, 2 waf’s, 2 xml gateways, 2 ips devices, 2 network-cache proxies, 4 routers, and 6 switches. it’s not fun! utm does not solve this problem!

more so, layering code on top of more code would seem to result in further flaws/bugs. network-based IPS has been shown to be a failure for the industry, and its just-in-time patching capabilities are, most of the time, just not. why would waf’s be any different?

    finally, if anybody knows anything about how vulnerabilities work, you’ll note that one signature or anomaly cannot be prevented universally. when a vulnerability is found, the exploit payload is normally only one way of potentially thousands to get to that particular vulnerability - iow: it can be exploited in many different ways. javascript can be compressed, encrypted, obfuscated, encoded, etc - and so can all the other scripting languages. heck, encodings and cookie parsing can affect waf’s and certainly have in the past in similar ways that network intrusion detection was defeated long ago: because the IDS has its own parsing issues and the protocols themselves can be done in many different ways (and then try to compare the relative strictness of TCP to the wonderful world of HTTP headers, forms, and cookies in billions of applications).
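    the evasion point in miniature - one payload, several encodings, and a naive signature that catches only the literal form (the signature and variants here are made-up examples):

```javascript
// One payload, many byte-level representations. A naive signature matches
// only the literal form; the encoded variants slip straight past it.
const naiveSignature = /<script>/i;

const payload = "<script>alert(1)</script>";
const variants = [
  payload,
  encodeURIComponent(payload),                          // URL-encoded
  payload.replace(/</g, "&lt;").replace(/>/g, "&gt;"),  // HTML-entity form
  Buffer.from(payload).toString("base64")               // base64 (e.g. fed to eval/atob)
];

// Only the literal variant trips the signature; the other three evade it.
const caught = variants.filter((v) => naiveSignature.test(v));
```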

    finally, which threats do waf’s stop? owasp t10-2007 a1 and a2 only? lame! that means that many other threats are still completely viable, not to mention `death from a thousand cuts’ style vulnerabilities.

    (*) here’s something I may have said [off the record] concerning waf’s:

    5) Your recommendation about using web application firewalls?… c) Thumbs down (15%)… d) Profane gesture (10%)…

    “My exception: If inbound/outbound traffic from a hosted web application inside of a datacenter can be split so that inbound traffic (GET’s, POST’s) is unaffected by the WAF, and outbound traffic (Server responses) is protected - then I’m ok with implementation of a WAF. See: be conservative with what you send and liberal with what you receive…”

  5. ntp Says:

    i have two finally points, so somebody please make fun of me before my blood boils over more about waf’s

  6. Kanatoko Says:

    I think the same thing about detecting ODBC error messages ( and PHP error messages as well ).

Those two rules work very well :)

  7. Jeremiah Blatz Says:

    Dear Mr. Snake,

    As your doctor*, I am concerned about the dangerously elevated Kool-Aid levels in your blood stream. I cannot say if these levels are due to excess consumption of Kool-Aid, or if they are due to some imbalance of humors. In any case, immediate action is required.

    What you are proposing is an end-run around corporate release schedules, ticketing, QA, approval, etc. Furthermore, you are proposing a web application development framework that transforms one HTML stream into another HTML stream. While the former may be appealing, it certainly is not something that can last forever. The latter may be acceptable in certain cases, but is really a poor way to develop web apps.

I shudder to think of the rat’s nest of functionality that would evolve if application functionality were split among two different systems with two different developer bases. Double-encoding of JavaScript variables when the developers fix the problem is just the beginning. Inbound WAF filtering is “okay” because it is a transformative function on user input, and humans can understand that. When you get to treating code (and HTML + JS are certainly code) as data, most developers start to glaze over.
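    That double-encoding failure mode is easy to demonstrate with a hypothetical escape helper, applied once by the WAF egress rule and then a second time by the app once the developers ship the same fix:

```javascript
// HTML-escaping is not idempotent: applying it twice mangles the output.
function htmlEscape(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

const once  = htmlEscape("<b>hi</b>"); // "&lt;b&gt;hi&lt;/b&gt;" - correct
const twice = htmlEscape(once);        // "&amp;lt;b&amp;gt;..."  - visibly broken
```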

This is, of course, a slippery slope argument. As long as everyone agrees that it’s a slippery slope, and that nobody wants to go where it’s heading, it’s fine. As long as everyone is vigilant for false positives, it needn’t break applications. As long as the WAF folks and the developers coordinate releases, it needn’t break applications. As long as problems are fixed in the app as quickly as possible, it needn’t devolve into a spaghetti system that’s got as many holes as the original system. As long as it’s used sparingly and effectively, it can avoid the overhead of release schedules.

That’s a lot of “as long as”es, though.

    * I am not a doctor

  8. RSnake Says:

Hahah, Jeremiah, I appreciate those caveats and I share them. It’s a risky proposal. But theoretically that feedback would be built back into the process, so that the SOC shared the information with the developers, QA, and business owners, et al. True, it’s spaghetti, and true, there is a high likelihood of creating more problems than you solve. But I have never seen any WAF rule as a long term solution - only a short term one while you fix the real underlying problems that would allow the exploit to work in the first place. Hopefully it’s one of those situations where the spaghetti unravels itself - but yes, there’s a good chance of abuse with this kind of thing. No kool-aid here; I’ve got my brain firmly in reality, and I’d never sell this kind of thing as “the answer.” It’s just one virtue of an inline device that I haven’t heard talked about much.