I’ve been involved in a few scanning projects over the last few years, and I haven’t been super happy with most of them, because they don’t take into account one thing that humans do - they don’t learn from what they find. I guess there are two classes of scanning. There’s the “noisily scan everything and look for all possible signatures/holes” route, and there’s the “quietly look for only the relevant signatures/holes” route. Let me give an example.
Let’s say after a few requests I learn that the host is an IIS server. Do I really need to run exploits that only affect Apache? Or let’s say I know the server doesn’t support PHP - do I really need to be scanning for vulnerabilities in some obscure PHP application? Understandably that’s a quantum leap above where most scanners are, as they typically aim for the noisy scans that request everything under the sun.
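To make the idea concrete, here’s a minimal sketch of what that “learn first, then scan” loop could look like. Everything in it is hypothetical - the check list, the tag names, and the fingerprinting-by-Server-header heuristic are placeholders to illustrate the shape of the logic, not any real scanner’s database or API:

```python
# Sketch: fingerprint the target once, then drop checks that can't apply.
# All check names and tags below are made up for illustration.
import urllib.request

CHECKS = [
    {"name": "apache-mod_rewrite-flaw", "requires": "apache"},
    {"name": "iis-unicode-traversal",   "requires": "iis"},
    {"name": "phpinfo-exposure",        "requires": "php"},
    {"name": "generic-backup-file",     "requires": None},  # always applicable
]

def fingerprint(url):
    """Learn what we can from a single benign request."""
    tags = set()
    with urllib.request.urlopen(url) as resp:
        server = resp.headers.get("Server", "").lower()
        powered = resp.headers.get("X-Powered-By", "").lower()
    if "iis" in server:
        tags.add("iis")
    if "apache" in server:
        tags.add("apache")
    if "php" in server or "php" in powered:
        tags.add("php")
    return tags

def relevant_checks(tags):
    """Keep a check only if its prerequisite was actually observed."""
    return [c for c in CHECKS if c["requires"] is None or c["requires"] in tags]

if __name__ == "__main__":
    tags = fingerprint("http://example.com/")  # hypothetical target
    to_run = relevant_checks(tags)
    print(f"fingerprint tags: {sorted(tags) or 'none detected'}")
    print(f"running {len(to_run)} of {len(CHECKS)} checks")
```

One obvious caveat with this approach: a Server header can be spoofed or stripped, so a real implementation would want several corroborating signals before it trusts the fingerprint enough to skip anything.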
There are two reasons I would personally prefer the second type of scan. The first is that it greatly reduces the time required to perform the scan, and the second is that it greatly reduces the number of signatures fired against the server. It also slightly reduces the chances of an inadvertent DoS. Now the question is, since you are avoiding certain requests, is it more accurate to say the host has none of those vulnerabilities, or is it more accurate to say it is only some percentage clean/scanned? Perhaps it’s better to simply alert the person that only part of the scan was completed based on the information returned, and then allow them to continue the full scan if they absolutely must. I’m just not sure where the value is in that.
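If you did go the “report partial coverage” route, the report itself could be simple. Here’s a hedged sketch of what that might look like - the function name, the `--force` flag, and the numbers are all invented for the example, not a real tool:

```python
# Sketch: report coverage honestly instead of claiming the host is "clean",
# and point the operator at a way to force the skipped checks if they must.
def coverage_report(total, ran, skipped_names):
    """Summarize how much of the scan actually executed and what was skipped."""
    pct = 100 * ran // total if total else 0
    lines = [f"scan coverage: {ran}/{total} checks ({pct}%)"]
    if skipped_names:
        lines.append("skipped as not applicable (rerun with --force to include):")
        lines.extend(f"  - {name}" for name in skipped_names)
    return "\n".join(lines)

# Hypothetical example: 3 of 4 checks ran, one Apache check skipped on an IIS box.
print(coverage_report(4, 3, ["apache-mod_rewrite-flaw"]))
```

The point of phrasing it this way is that the scanner never asserts “no vulnerabilities” for requests it never made - it only asserts what it actually tested.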