If you don’t care about drama, skip this post; there isn’t any new information in it.
Somehow I always end up being the center of controversy, even when I’m really only vaguely interested in the subject matter at hand. This time it comes from the Full-Disclosure mailing list, which is known for, among other things, disclosing zero-day exploits in applications. My only problem with Full-Disclosure has been the noise; it’s unmoderated, and although it’s humorously belligerent, I generally don’t have the time to pay much attention to it anymore. Anyway, I’ve read in a few places now that people are concerned with Larry Suto’s paper on web scanning depth analysis.
First let me put some rumors to bed here. I am not paid by NTO to use their tool. They let me use it for testing purposes because they actually care about making their product better. I have given similar help to three other scanning vendors as well. This shouldn’t come as a surprise to anyone, as I’m part of the NIST.gov SAMATE Web Application Vulnerability Scanner Team and the WASC Web Application Security Scanner Evaluation Criteria Team even though Anurag keeps forgetting to put my name on the site.
Also, Andre Gironda misread a quote from me regarding the tools I use for testing. The part he read was that I use NTOSpider. The part he either glossed over or failed to understand was these words:
A better question would be which ones don’t I use!
This is by no means an authoritative list of all the things I use; in fact, I’ve written a number of tools that I don’t discuss, and it’s certainly not all the commercial scanners I have access to (most companies I deal with don’t want me to discuss my relationship with them for whatever reason - fair enough). Tin foil hat wearers beware: there is more to me than one or two scanners, and I would hope, after the ungodly number of times I’ve talked about it, people would understand that I’m not really all that convinced scanners are great in the first place, except for automating some of the time-consuming tasks (like crawling, for instance). Aside from a handful of commercial scanners, I have access to and use pretty much everything (and even if I haven’t had access to a scanner, I’ve probably seen its results from various clients who sent them to me). None of them, in my mind, stands out as the hands-down winner in all categories. Each has its good and bad parts. If people really want me to start doing reviews about why each is good or bad in its own right, please send me a request to do so, along with access to whatever scanner you want me to test. I’ll be happy to oblige, time permitting.
Next, let me talk about the actual topic at hand. I was not involved in the technical aspects of how Larry Suto built his test environment. I was aware of the paper as it was being written, as were some of the scanning companies, from what I was able to ascertain by talking to a few of them. That said, I didn’t question his methods - and in fact, I wasn’t that interested in them (I said as much in my post, actually). I was and continue to be far more interested in the premise than the result. Anything can be fixed, but it’s nice to have a baseline by which scanners can test themselves - whether they choose this particular environment or another is outside the scope of what I care about.
So let me reiterate, because I think people really took this whole thing and blew it way, way out of proportion. The part of Larry Suto’s paper that I thought was interesting was the concept of looking at how well a spider can crawl a site. He may not have done a great job setting up the test sites; you may question the configurations, or the types of sites, or whatever you like - again, I’m not interested in that part… at all. What I am interested in is the concept - which is that if you cannot locate the page the exploit resides on, it doesn’t matter how good your exploitation engine is. Here’s what you should get out of that post, and nothing more: crawling effectiveness matters. How you choose to measure that is your own religion. I’m not saying the inverse isn’t true, e.g., if you can’t exploit it, it doesn’t matter how good your crawler is. I’m just saying that if you haven’t thought about the crawling depth metric, you probably should.
Anyway, enough drama already! I’d suggest, for those of you who worry about alien abduction: if you have a problem with me, email me already, and I’ll try to quench the voices in your head. Lastly and most importantly, happy new year to those using the Gregorian calendaring system!