web application security lab

Matrix Re-loaded

There is an interesting post over at hackosis talking about using deceptive security models. I’ve always thought this was a good technique in theory. I wrote about it early last year in something called matrix as a security model, wherein you confuse the attacker by giving them completely different results.

I’ve also written about it on Dark Reading, regarding how the widespread use of blacklisting has the effect of making hackers better. The problem of how to deal with an attack may be better suited to evolutionary biologists than computer scientists.

15 Responses to “Matrix Re-loaded”

  1. Shane Says:

    Thanks for featuring the post. I really think that, if implemented correctly, this could be the next generation of information security.

    Glad to know that someone else was also thinking about such a thing. Look for more info in the future and possibly some work on a VMware implementation proof of concept.

  2. Ix Says:

    Interesting concept, which will definitely change the face of security once it can be implemented correctly. It goes from wall building (and hoping it stops the barbarian hordes) to almost cat-and-mouse security, with the hacker thinking he’s the cat finding the mouse (a vulnerability), when really he’s the mouse finding the bit of cheese the cat left as a trap.

    I look forward to seeing more about this in the future, especially a PoC.

  3. Matt Says:

    Sorry RSnake, but I will have to disagree with you on this one. I think the real problem we have to tackle before we can even hope to implement something like this is getting developers to think with a security mindset as well as a development mindset.

    Once we can get our development teams to proactively design security into their applications, then we can start playing cat-and-mouse games with attackers. But how effective would this really be?

    The problem I see, as always, will be distinguishing an attack from valid input, which is not always easy. I personally believe that stopping an attack and returning nothing is always better than trying to return some fake output. Fake output only adds additional logic and bloat to an application in order to fool some attacker.

    Validate everything, generate “appropriate” error messages when possible, and log properly. Keep security as simple yet robust as possible. Adding logic to try to fool an attacker works against both goals, and if the fake output is not good enough to fool anyone, it will just tip attackers off that they have been detected.

  4. RSnake Says:

    @Matt - Sorry, which part do you disagree with me on? That it’s interesting or that we are helping bad guys get better by blocking them or something else?

  5. Matt Says:

    I disagree that code should be added to an application to generate “fake output”. I have enough trouble trying to convince some of our developers that JavaScript can be executed in a hidden field, because “it is hidden so you can’t see it” (real quote from a senior programmer).

    I think the true problem today is that developers are not always required to think about security when designing and implementing a system. They seem to be concerned only with business and functional requirements.

    In my opinion, adding more code to attempt to stop something a large number of developers do not understand anyway is not the way to tackle the issue.

  6. RSnake Says:

    @Matt - I never said that they should add code. It could be done in a WAF or IPS as well. And to play devil’s advocate: what if the developer does understand the problem and has already protected the system as much as they reasonably can in other ways? Should they still avoid this technique?

  7. LF Says:

    The Art of War (chapter 1, paragraph 18):

    All warfare is based on deception.

    This is not a new concept, just something different to apply it to.

  8. Matt Says:

    I will concede that I did not think of a WAF or IPS, but if the developer has already done as much as they can I do not see that either of those solutions will buy you much more in the way of security.

    Lastly, this technique would be atrocious for in-house black-box security testers, because in a perfect implementation we would not know which findings to report to developers as real vulnerabilities and which were generated by the application because it detected the attack. This could make us better security experts, but the cost/benefit just doesn’t add up for me.

    By the way, I loved your talks at the OWASP Conference in San Jose and your latest book.

  9. Spyware Says:

    If I understand this correctly, this Matrix model provides an obscuring barrier between the “real” website and the hacker. It’s an interesting idea, really. But why obscure/confuse hackers if you have already spotted them? It seems like you need to detect the attacker before this can have any effect - somehow be forewarned, as it were. You have to spot him before he attacks, but how? You can only use the power of assumption for this, and that’s not a flawless power. It has been shown that regexes fail to spot everything and everyone. And if you blacklist someone, why not just throw them off the website - why keep ‘em busy with a fun game? Not that I don’t fancy a fun game or two, though.

  10. Shane Says:

    @Matt - it is not about the application code, it is about the system implementation. No offense, but I feel that people are ‘just not getting it’.

  11. Shane Says:

    @Spyware (sorry for the double post RSnake) - It is very easy to detect a hacker, simply by considering the space of all possible inputs. There are very limited ‘valid’ inputs and many more ‘invalid’ ones. Anything that is not valid could be redirected to ‘The Matrix’.

    And BTW - nothing has ‘flawless power’.

    Look for more on this in the future; it seems that trew (who commented on my post) has some good ideas about how to get a PoC going.
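    The whitelist-to-decoy routing Shane describes could be sketched roughly like this. This is a minimal sketch under my own assumptions: the function names, the digits-only validation rule, and the decoy response format are all illustrative, not from any actual PoC mentioned in the thread.

    ```python
    import re

    # Whitelist: the only inputs the real application considers valid.
    # (A real app would have a rule like this per field, not one global pattern.)
    VALID_USER_ID = re.compile(r"[0-9]{1,10}")

    def real_app(user_id: str) -> str:
        # Stand-in for the genuine application logic.
        return f"profile page for user {user_id}"

    def matrix_decoy(user_id: str) -> str:
        # The decoy mimics the real app's output format, so the attacker
        # cannot easily tell they have been redirected off the real system.
        return "profile page for user 0"

    def handle_request(user_id: str) -> str:
        """Route whitelisted input to the real app, everything else to the decoy."""
        if VALID_USER_ID.fullmatch(user_id):
            return real_app(user_id)
        # Anything outside the whitelist is treated as hostile and fed
        # plausible-looking but fake results instead of an error page.
        return matrix_decoy(user_id)
    ```

    As Spyware notes below in the thread, the hard part is that “invalid” does not always mean “hostile” - a typo would land in the decoy too.
    
    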

  12. OneKnock Says:

    This is a very interesting concept, and I think I see a way that it can be taken to a whole new level.

    Let’s suppose that a hacker tries to exploit a web application and is 100% successful in the attack. How about coding some protection into the application so that it responds in a way that makes the attacker think the attack failed?

    For example, let’s say the hacker attempts to inject some SQL into a parameter to drop a database table, and the parameter is in fact vulnerable to this injection. Why not return a message that makes it look like the attack didn’t work - for example, “invalid input, please try again”? This way the hacker gains no satisfaction from the exploit and will hopefully move on to another site rather than repeat it. Plus the hacker won’t go and tell his friends about the vulnerability, because he won’t realize the attack actually worked.

    What do you guys think? I’ve never heard of anybody securing their web app to this extent, but I really think this could be the new wave in web application security.
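    A rough sketch of this fake-failure idea follows, with one deliberate change: rather than letting the injection actually run (as OneKnock’s scenario has it), this version blocks it but still returns the same generic failure message, so the attacker cannot distinguish a blocked attack from a harmless validation error. The signature pattern and function names are my own illustrative assumptions; real injection detection is far harder, as several commenters point out.

    ```python
    import re
    import sqlite3

    # Crude injection signature, for illustration only.
    INJECTION_HINT = re.compile(r"(;|--|\bdrop\b|\bunion\b)", re.IGNORECASE)

    GENERIC_FAILURE = "invalid input, please try again"

    def lookup_item(conn: sqlite3.Connection, item_id: str) -> str:
        """Look up an item by id; suspected attacks get a bland failure message."""
        if INJECTION_HINT.search(item_id):
            # The attack is blocked, but the response mimics an ordinary
            # validation failure, so the attacker thinks nothing happened.
            return GENERIC_FAILURE
        # Parameterized query: the safe path for legitimate input.
        row = conn.execute(
            "SELECT name FROM items WHERE id = ?", (item_id,)
        ).fetchone()
        return row[0] if row else GENERIC_FAILURE
    ```

    Note that because missing ids and suspected attacks produce identical output, an attacker probing the parameter gets no signal either way - which is the whole point of the technique.
    
    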

  13. Matt Says:

    @Shane - I have reread your article after a few days, and I will agree that in theory this is a cool idea, but my point about in-house security testers still stands.

    If your system is implemented correctly, how will your in-house application security team know what to report to developers as real vulnerabilities versus what is generated by ‘The Matrix’?

    Essentially, you are caught between choosing parts of an application to test. If you choose to give the app to the security team without ‘The Matrix’ then you are not fully testing the app, and in particular the part that is supposed to save you from hackers.

    If you run the tests through twice, once without and once with ‘The Matrix’, you have just doubled your testing period for each release cycle. Maybe I am biased because I live in the security world of a large corporation where you do NOT risk the release date of a critical app by doubling test time. I don’t know.

  14. Spyware Says:

    @Everyone and Shane: How does the system distinguish hackers from typos, or from five-year-olds messing with a keyboard? Shane, you suggest the use of a whitelist: if input is not valid, the system “goes Matrix”. But invalid inputs aren’t necessarily hack attempts. I see this idea failing at the attack-detection part; it’s just impossible. I think.

  15. Travis H. Says:

    I just wrote on this recently in my “security concepts” paper.

    It makes sense to me. When I was a kid, what teenage hackers feared most was a trap-and-trace on the line.

    It’s probably best to do this where you CAN’T fix the problem - e.g., where users may choose easily guessed passwords. And you can’t really rely on blocking people: it may work, or they may just find another open proxy to scan from.