web application security lab

Detecting MITM/Hacking Proxies Via SSL

There are several different ways for MITM/hacking proxies to handle SSL. They can create a self-signed root cert that the attacker/user accepts once, they can generate a per-site snake oil cert, or they can simply downgrade the attacker/user to HTTP (a la Moxie’s sslstrip). Any of those work, and which is better is largely a matter of preference and circumstance. But what if I’m running a site and I want to see whether an incoming user is behind a hacking proxy? There are a few techniques to do that.

First of all, there’s really not all that much you can do within SSL itself to create more than binary signals (there are some exceptions to that rule, which I’ll post about later), but those binary signals are actually just enough. Let’s say I have several sites: one is a banking site, and the others host something as simple as a tracking pixel. Firstly, the time difference between when the user pulls the SSL certificate and when they actually instantiate the site might indicate whether they are going directly to the site or had to take some time to accept a self-signed per-site certificate (a la Burp Suite).
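
To make that concrete, here’s a minimal sketch of the timing check, assuming you terminate SSL yourself. The filenames and port are made up, and Python’s ssl module stands in for whatever your server actually runs:

    import socket
    import ssl
    import time

    # Hypothetical cert/key for the pixel host.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")

    listener = socket.socket()
    listener.bind(("0.0.0.0", 8443))
    listener.listen(5)

    while True:
        raw, addr = listener.accept()
        try:
            conn = ctx.wrap_socket(raw, server_side=True)  # handshake completes here
            handshake_done = time.monotonic()
            conn.recv(1)  # block until the first request byte arrives
            gap = time.monotonic() - handshake_done
            # A long gap suggests the client stalled, e.g. on a cert warning dialog.
            print("%s: %.2fs between handshake and first byte" % (addr[0], gap))
            conn.close()
        except ssl.SSLError:
            raw.close()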

Now, if the MITM proxy uses a standard root signing certificate, one of the tracking-pixel sites can use that same root signing certificate to sign its own SSL session (it’s sitting right there in the tool and can probably be ripped out and re-tasked for the banking site’s tracking pixel). If the user pulls the pixel down anyway, even with the mismatch error, you know they are either using or have used that particular MITM proxy.
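
As a sketch of that re-tasking, assume you’ve already exported the tool’s root CA to tool_ca.pem / tool_ca.key (hypothetical filenames). Signing the pixel host’s certificate with it via the pyca/cryptography library looks roughly like this:

    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The CA material ripped out of the MITM tool.
    ca_key = serialization.load_pem_private_key(
        open("tool_ca.key", "rb").read(), password=None)
    ca_cert = x509.load_pem_x509_certificate(open("tool_ca.pem", "rb").read())

    # Fresh key and cert for the tracking pixel host (name is illustrative).
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "pixel1.example.com")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(ca_cert.subject)  # issued by the tool's CA
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
        .sign(ca_key, hashes.SHA256())
    )
    open("pixel1.crt", "wb").write(cert.public_bytes(serialization.Encoding.PEM))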

Another pixel might be protected by a snake oil SSL certificate. If that image is pulled anyway, despite the mismatch error, there is a good chance the user is behind something like sslstrip or a proxy with its own root signing authority. Because the image is pulled when it shouldn’t be, you know something is off.
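
Generating the snake oil cert itself is just self-signing: the issuer equals the subject, and the cert signs itself with its own key. A quick sketch with the same library (hostname again illustrative):

    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "pixel2.example.com")])
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    snake_oil = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # issuer == subject: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
        .sign(key, hashes.SHA256())  # signed with its own key
    )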

Lastly, you can have another site with a completely valid SSL certificate. If the user is behind something like Burp Suite, they will get a certificate mismatch error and the image won’t be pulled (at least not immediately), even though it should be. The image may get pulled eventually, as the hacker works through the annoying manual process of adding or okaying the cert, but the delay will be so large compared to pulling images on the same site that it should be obvious they aren’t an average user.
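
Putting the three signals together, the server-side correlation might look something like this sketch. The hostnames and the five-second threshold are made up; in practice each pixel host would log (session id, timestamp) to a shared store:

    TOOL_CA_PIXEL = "pixel1.example.com"    # signed with the MITM tool's root CA
    SNAKE_OIL_PIXEL = "pixel2.example.com"  # self-signed snake oil cert
    VALID_PIXEL = "pixel3.example.com"      # legitimately signed cert

    def classify(page_load_time, pixel_hits):
        """pixel_hits maps hostname -> request timestamp (absent if never fetched)."""
        if TOOL_CA_PIXEL in pixel_hits:
            return "trusts a known MITM tool's root CA"
        if SNAKE_OIL_PIXEL in pixel_hits:
            return "accepts arbitrary certs (proxy, or an sslstrip-style downgrade)"
        if VALID_PIXEL not in pixel_hits:
            return "valid pixel never fetched: likely an intercepting proxy"
        if pixel_hits[VALID_PIXEL] - page_load_time > 5.0:
            return "suspiciously slow fetch: possible manual cert acceptance"
        return "looks like a normal browser"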

Of course these techniques have strange effects on certain browsers, like the iPhone Safari browser, as I talked about before. But if the user is claiming to be one of the common standard browsers, this technique should work, although I’d test it thoroughly before deploying it.

4 Responses to “Detecting MITM/Hacking Proxies Via SSL”

  1. Rogan Dawes Says:

    Hi Robert,

    Interesting post. For what it is worth, I’ve been working on a new intercepting proxy library (OWASP Proxy) which could easily be configured to work around these detection techniques.

    Your “firstly”: This is not actually how the proxies work. The connection is first made by the browser to the proxy, without ever touching the actual site in question. If a self-signed cert is presented by the proxy, the user can then choose to accept that cert. The proxy then receives a fully formed request which is then relayed to the server. The server only ever sees a completed request made by the proxy without any delays at all.

    Your “retasking the tool’s root signing cert”: OWASP Proxy has code to generate “semi-legitimate” SSL certs for each site that is visited. It uses a CA cert to sign per-site certs as and when required. In this way, users can install the CA cert, and never be warned again about conversations being intercepted by the proxy. OWASP Proxy generates a unique custom CA cert on first use, in order to protect users who do choose to install the CA cert in their browser. This feature has also been added to WebScarab (available through the “nightly build” only at the moment).

    This code was also donated to Burp Suite, and was released in a recent update.

    As an aside, most tools (e.g. WebScarab) install a non-verifying “all-trusting” TrustManager, and wouldn’t care which CA cert you used to sign your pixel tracking site’s certificate anyway. They simply accept all certificates regardless, to support things like self-signed certs on internal apps.
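
    (For what it’s worth, the Python analogue of such an all-trusting client is just a context with verification disabled; this is a sketch, not WebScarab’s actual code:)

        import ssl
        import urllib.request

        # Accept any certificate from any signer, valid or not.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE

        urllib.request.urlopen("https://pixel1.example.com/t.gif", context=ctx)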

    Finally, your approach of looking for a delay while the browser accepts an invalid cert, or for a pixel that is never requested at all, is a good one. A feature I plan to add to the next iteration of WebScarab (to be based on OWASP Proxy) is an alert when the proxy first connects to a site using an invalid certificate. This would mimic the acceptance timing of a normal browser, and would diminish your ability to detect use of the proxy. Of course, I don’t plan to make the process as onerous as, say, Firefox’s, so there would still be a difference that might be detectable.

  2. RSnake Says:

    @Rogan - very cool! I’m glad you guys have been thinking about this too. I think the most useful feature of this sort of detection is actually for detecting sslstrip and other proxies that the user didn’t intend to use, but yes, it would be good to mimic exactly what’s going on in the browser when you’re doing assessments.

  3. c Says:

    I’m not certain you can rely on the delay as a sign that the browser is using an SSL proxy. For example, if the browser is configured to check a cert revocation list, that would probably produce a similar delay.

  4. sanbar Says:

    I have recently installed a transparent SSL proxy (MITM proxy) with which I am able to intercept and view, in plain text, the usernames and passwords for any SSL-enabled site. Can I consider this finding a serious risk, since I have not been able to find any site that applies some different cryptographic algorithm, like salted MD5, to the credentials? What is the industry best practice?