
Iterative Scanning

I’ve been involved in a few scanning projects over the last few years, and I haven’t been super happy with most of them, because they don’t take into account one thing that humans do - they don’t learn from what they find. I suppose there are two classes of scanning: the “noisily scan everything and look for all possible signatures/holes” route, and the “quietly look only for the relevant signatures/holes” route. Let me give an example.

Let’s say that after a few requests I learn the host is an IIS server. Do I really need to run exploits that only affect Apache? Or let’s say I know the server doesn’t support PHP; do I need to be scanning for PHP vulnerabilities in some obscure PHP application? Understandably that’s a quantum leap above where most scanners are, as they typically aim for noisy scans that request everything under the sun.
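
To make that concrete, here’s a minimal sketch of what the selection step could look like (the host name, check names, and groupings below are all made up, and a real scanner would base the decision on more than a single banner request):

```python
# Minimal sketch of the iterative idea: grab the Server banner once, then only
# queue the signature groups that could plausibly apply, plus the
# server-agnostic ones. Host and check names are hypothetical.
import http.client

def fingerprint(host):
    """Return the Server header from a single HEAD request."""
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("HEAD", "/")
    banner = conn.getresponse().getheader("Server") or "unknown"
    conn.close()
    return banner

# Hypothetical signature groups keyed by what the banner suggests.
CHECKS = {
    "iis":    ["iis_unicode_traversal", "asp_source_disclosure"],
    "apache": ["apache_mod_status", "htaccess_exposure"],
    "*":      ["reflected_xss", "sql_error_strings"],  # server-agnostic, always run
}

def select_checks(banner):
    selected = list(CHECKS["*"])
    for key, names in CHECKS.items():
        if key != "*" and key in banner.lower():
            selected += names
    return selected

if __name__ == "__main__":
    banner = fingerprint("www.example.com")
    print("Server banner:", banner)
    print("Would run:", ", ".join(select_checks(banner)))
```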

There are two reasons I would personally prefer the second type of scan. The first is that it greatly reduces the time required to perform the scan, and the second is that it greatly reduces the number of signatures thrown against the server. It also slightly reduces the chances of an inadvertent DoS. Now the question is, since you are avoiding certain requests, is it more accurate to say the site has none of those vulnerabilities, or is it more accurate to say it is only some percentage clean/scanned? Perhaps it’s better to simply alert the person that only part of the scan was completed based on the information returned, and then allow them to continue the full scan if they absolutely must. I’m just not sure where the value is in that.

20 Responses to “Iterative Scanning”

  1. hackathology Says:

    Rsnake, how about this? What if you could take an existing open source scanner (or develop something new) and incorporate a function that checks which web server or scripting language is in use before the scanner does the proper scanning? Well, I do agree with you that most scanners perform the noisy scans.

  2. Awesome AnDrEw Says:

    That’s pretty much what I was going to ask. Why not first identify certain variables to discourage it from looking for things that don’t exist? It’s like walking into a rehabilitation center for quadriplegics, and looking for the next Olympic high diving team - it just doesn’t happen.

  3. hackathology Says:

    Yep, if a tool like this could be developed, it would be great. It would save a lot of time and skip the scans for other scripting languages. I am not sure if enterprise tools do this, but yes, it would be great if it could be integrated into an open source scanner.

  4. takuan Says:

    I believe SPI Dynamics wrote a whitepaper describing these so-called “intelligent engines” a while ago (well, last year):
    http://www.spidynamics.com/assets/documents/Intelligent%20Engines.pdf

  5. zeno Says:

    SPI has had this capability for over 4 years now. For those unaware, I worked in R&D at SPI.

    - zeno
    http://www.cgisecurity.com/

  6. Jungsonn Says:

    That’s why I never use scanners; they’re too noisy, and these days you’re easily detected by IDSes. The thing is, the best scans are still done by hand (in the case of port scanning). It’s always a good idea to stay under the radar as much as possible, and not show up in the logs with 65,000 nmap port scans ;)

    But hey, who are tools built for?

  7. hackathology Says:

    Jungsonn, I do agree with you to a certain degree that pentesting by hand is good, but you might miss some parts. Scanners are good, but if you can combine scanners and hand testing, that would be the best combo.

  8. nEUrOO Says:

    But well, it is stupid to run a scan with all possible options against a website! The pen-tester should configure it depending on the server/apps he wants to test.

  9. n00k Says:

    Well … I do not completely agree with that approach. All you have to do to avoid being scanned properly is fake the information the scanner uses to determine whether you use this or that plugin or whatever. So when implementing such “intelligent engines” one has to use only information that is not spoofable, and I don’t know if that’s an easy task, or even doable, for some kinds of information.

  10. RSnake Says:

    n00k, that’s possible, but why would people modify the signatures rather than fix the problem? I don’t see that happening much in reality, beyond perhaps server signatures, which I don’t think are a reliable detection mechanism anyway.

  11. rebel Says:

    I’d be more than happy if this method of scanning/fuzzing were more popular. I usually avoid sending headers with server/scripting language information and the like as far as possible, for this very reason.

  12. hackathology Says:

    Yup, I agree with Rsnake. I would rather fix the problem than modify the signatures.

  13. jk Says:

    In response to your question, it would be more accurate to say that you used several industry-accepted methods to determine the version of the web server, and then tested for the vulnerabilities associated with that version. So, if you determine that it’s IIS, you can state that only IIS vulnerabilities were tested. I’m sure there’s some logical proof that you can go through that would back up the statement “an IIS server has no Apache vulnerabilities”, but I don’t know if you want to go that far.

    As far as determining the version, use httprint (or code similar logic into the scan tool). The results of that will determine what tests you will perform next. Of course, it never hurts to verify the results of httprint by telnetting to the server directly and issuing the appropriate commands, or you could also run some of the other webserver fingerprinting tools out there (webserverfp, hmap, 404print, etc.) and compare the results.
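
    For instance, the manual banner grab is just a raw request like the sketch below (the host is a placeholder, and the header is trivially spoofable, so treat it as a hint rather than proof):

```python
# Rough sketch of "telnet to the server and issue the commands yourself":
# send a raw HEAD request and print whatever Server header comes back.
# The host below is a placeholder.
import socket

host = "www.example.com"
sock = socket.create_connection((host, 80), timeout=10)
sock.sendall(("HEAD / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % host).encode())

raw = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    raw += chunk
sock.close()

for line in raw.decode(errors="replace").splitlines():
    if line.lower().startswith("server:"):
        print(line)  # e.g. "Server: Microsoft-IIS/6.0", or whatever the admin set
```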

  14. Ivan Says:

    ^ That is OK, but there are some cases where people fake signatures, and then scanners will fail.

    I have an example where one of my clients asked the admin to fake the signatures, and when I did the job he told me that information (it is good that I do only web application blackbox testing).

    Following this, I can say that it is best to try some social engineering first, and then tune our scanners ;)

  15. cail Says:

    Um, whisker did this years ago (and was the first to do it, IIRC). It would detect the server type and only run the appropriate scan sigs against it.

    It’s been deprecated in favor of Nikto, and I don’t believe Nikto emulates this functionality. So it’s something that was slightly lost to time.

    But it’s definitely something that’s been “done before”. However, when it comes to risk assessment and regulatory compliance, not evaluating a risk because “you think it doesn’t apply” doesn’t get you off the hook. So, better safe than sorry… as in, better to scan for the superfluous stuff and ensure it’s not there rather than miss something which may be completely obvious.

    Plus, most scanners give you the capability to enable/disable various scans/checks/sections of checks. So adjust to what you need ahead of time in your scan settings.

    Letting a tool decide to skip stuff on your behalf, without you knowing exactly what it was skipping, could be a larger can of worms to deal with than having to sit around a bit longer and wait for the scan to finish.

  16. Jordan Says:

    hackathology, Rsnake: You guys might find it easier to simply patch the application, but in many cases the guy running the webserver isn’t necessarily the guy running all the applications on the server. It’s not at all uncommon to find admins who think they’re being clever by misreporting the server information.

    I’m totally with n00k on this one. Unless you’ve got some foolproof way of fingerprinting, why not try them all? There’s only one scenario I can think of where you really care about that: you’re a bad guy who really doesn’t want to get caught and is extra-paranoid, in which case you’re going to be super stealthy, obfuscated, etc., to the point where even if you tried more attacks it would just slow you down, but hopefully none would be detected anyway.

    Good guys don’t care if they’re a little bit slower most of the time if they know their results are going to be better. Remember, the bad guy just has to find one hole to win, the good guys have to find /all/ the holes to keep winning. So the good guys want to test every single possible vuln just to make sure.

    And what about a scenario where a webserver front-end passes traffic to a different backend? It’s not at all uncommon to have an Apache or Squid proxy acting as a load balancer or SSL terminator, or what have you. So not running IIS vulns might miss a vulnerable server hiding behind it.

  17. RSnake Says:

    @Jordan - Oh, I agree that there are two schools of thought: one is to try everything, and the other is to try only what is absolutely necessary to get a good idea of what is running (mostly for a reduction in server load). However, I’m not talking about doing any one test and putting your hopes on having that one test come back as valid or invalid. I’m really talking about duplicate testing, but not much more than that. I’d never hang anything on a single request anyway, unless I were absolutely sure the customer is so lacking in technical expertise that they are incapable of changing server signatures. But for the most part there is a valid point in not sending a full-blown volley of requests when you know the server isn’t vulnerable to them.

    And rather than going through and tuning it to work or not work like that, wouldn’t it be nicer to have two option boxes? One to simply say “full scan (compliance)” and the other “fast scan (high-level vuln assessment)”. Not that I think the full scan would even give you better results if the fast scan were built properly, but it sure would let you sleep at night.
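
    Purely as a hypothetical sketch of what those two boxes might mean under the hood (none of these option names or helpers come from any real product):

```python
# Hypothetical sketch of the two option boxes. "full" fires everything for
# compliance work; "fast" trusts the fingerprinting step and reports that
# coverage was deliberately limited.
SCAN_PROFILES = {
    "full": {
        "skip_irrelevant_checks": False,
        "report_note": "all signatures attempted",
    },
    "fast": {
        "skip_irrelevant_checks": True,
        "report_note": "checks limited by fingerprint; partial coverage",
    },
}

def checks_to_run(profile, fingerprint, all_checks):
    """Return the list of checks a given profile would actually send."""
    if not SCAN_PROFILES[profile]["skip_irrelevant_checks"]:
        return all_checks
    # `applies_to` is a hypothetical predicate each check object would implement.
    return [check for check in all_checks if check.applies_to(fingerprint)]
```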

  18. Jungsonn Says:

    @Jordan, spoofed ports are easy to detect, just a matter of experience.

    Reading the server banner is easy; it requires next to nothing. It’s an amateur approach to let the scanners loose on something you didn’t probe first. Obviously, if you want to hack something you take care and generate as little noise as possible. So scanners are a NO-NO in my eyes. Anyway, who wants to know if port 5540 is open? It’s useless. Many vulnerable ports are in the low range; then, and only then, if you can’t find a few, you can scan a couple of the high ranges, and even still one has to take care.

  19. Jordan Says:

    @RSnake: It sounds like you’re taking an approach that assumes you’ve got a foolproof fingerprinting method. If you really have such a thing, then as I said before, go for it, but I wouldn’t put a lot of work into it, since if the fingerprinter is wrong (and its confidence level is wrong) then you’re going to waste time checking the wrong things.

    @Jungsonn: I’m not sure what you mean about spoofed ports? I was talking only about web applications on standard ports over http.

    Incidentally, another reason not to trust fingerprinting: http://www.port80software.com/products/servermask/?vid=3741498

  20. kishfellow Says:

    @Rsnake, I partially agree with your blog post, and IIRC SPI, eEye, and others have been doing this for a good number of years now, as did whisker, as pointed out by some cool poster ;)

    Just saying it’s “nothing new”, just less used.

    @Jordan, your approach of masking won’t help, because even if we’re only talking about port 80 for regular web apps, almost 50% of websites have “HTTP methods” enabled (I mean the unwanted and not-so-obvious, but potentially dangerous ones…)

    You can get a full list of HTTP methods scanned with a tool called metoscan, or you can do it by hand if you’re old skool.
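
    The by-hand version is nothing fancy; roughly something like the sketch below (the host is a placeholder, and since the advertised Allow list doesn’t always match reality, spot-check the dangerous methods directly):

```python
# Enumerate HTTP methods by hand: ask with OPTIONS, then probe TRACE directly,
# since servers don't always advertise what they actually allow.
# The host below is a placeholder.
import http.client

host = "www.example.com"

conn = http.client.HTTPConnection(host, timeout=10)
conn.request("OPTIONS", "/")
print("Allow:", conn.getresponse().getheader("Allow") or "(none advertised)")
conn.close()

conn = http.client.HTTPConnection(host, timeout=10)
conn.request("TRACE", "/")
resp = conn.getresponse()
print("TRACE ->", resp.status, resp.reason)
conn.close()
```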

    To clarify your query, there are around 25-30 reliable “fingerprinting” methods, and last but not least, no HR/management can fake the hardware/software present in the client environment when they post a job opening.

    Anti-reconnaissance can only be basic and nothing more; it’s just humbug… lol!

    Cheers :)
    Kish

    PS: j0hnny l0ng and Google rule (unless u don’t know Google hacking :)