web application security lab

Chrome Fixes STS Privacy Issue

I’m always interested in finding ways to leak private information out through browsers. For those who aren’t aware of it, there’s a new technology called “Strict Transport Security,” or STS for short, that pins a browser into using SSL/TLS for all further connections with the site in question. The goal is to reduce the risk from tools like SSLStrip that downgrade you from HTTPS to HTTP. However, a somewhat bad privacy issue was created as a result:
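As a rough illustration (the header name matches the spec, but the helper function and values here are just a sketch), a site opts in by sending a response header like this over HTTPS:

```python
# Sketch: building the Strict-Transport-Security response header a site
# sends over HTTPS to pin the browser. The helper and values are
# illustrative, not part of any particular server's API.

def sts_header(max_age_seconds=31536000, include_subdomains=False):
    """Return the value of the Strict-Transport-Security header."""
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return value

print(sts_header(86400))  # prints "max-age=86400"
```

Once the browser has seen that header, it refuses plain-HTTP connections to the host until `max-age` expires — and that persistent flag is exactly the state the tracking trick below abuses.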

Imagine a scenario where you have one website that a user is interacting with (say an evil advertising empire intent on tracking people for marketing purposes).

On that SSL/TLS enabled website there are a series of iframes. Each iframe leads to a different HTTP (not HTTPS) server. The first iframe (call it frame00) is the “check” to see if the user has been to the site before. It automatically redirects the user to the HTTPS site via a 301 redirect. The fact that the user hit the HTTP site at all means they haven’t been there before, which brings us to the first use case:
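A minimal sketch of that “check” logic — the host names and helper function are assumptions for illustration:

```python
# Sketch of the frame00 "check" endpoint. A plain-HTTP request arriving
# at all means the browser has no STS pin yet (new visitor); it gets a
# 301 to HTTPS. A browser that arrives over HTTPS directly was already
# pinned on a previous visit. Host names here are hypothetical.

def handle_check(host, path, is_https):
    """Return (status_code, headers) for the frame00 check request."""
    if not is_https:
        # HTTP hit observed: no STS pin, so this is a first-time visitor.
        return 301, {"Location": "https://%s%s" % (host, path)}
    # Browser skipped straight to HTTPS: STS was already set.
    return 200, {"Strict-Transport-Security": "max-age=31536000"}
```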

Use case 1) If the user has not been to the evil website before (which can be detected because the user will hit the HTTP version of frame00 before being redirected to the STS enabled SSL/TLS version of that subdomain), a series of iframes will selectively turn STS on and off on each subdomain. Each subdomain essentially provides one bit of information, and collectively those bits map to a user profile in the database. For instance frame01 = STS, frame02 = HTTP, frame03 = STS, frame04 = STS … could map to binary 1011 = decimal 11, i.e. the 11th user to visit the site. The number of iframes required is based on the total number of users the site believed it would need to track over its lifetime; here, 33 iframes total (which provides more than enough bits for the ~1.7bn internet users).
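The encoding step can be sketched in a few lines; the frame names and the bit ordering (frame01 carrying the most significant bit, matching the 1011 example above) are assumptions:

```python
# Sketch of use case 1: encode a numeric user ID as per-subdomain STS
# on/off bits. frame01 carries the most significant bit, as in the
# "1011 = decimal 11" example in the post.

def sts_bits_for_user(user_id, num_frames=33):
    """Map each frame's subdomain to True (set STS) or False (plain HTTP)."""
    return {
        "frame%02d" % (i + 1): bool((user_id >> (num_frames - 1 - i)) & 1)
        for i in range(num_frames)
    }

# The 11th visitor, with a 4-frame example for readability:
print(sts_bits_for_user(11, num_frames=4))
# prints {'frame01': True, 'frame02': False, 'frame03': True, 'frame04': True}
```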

Use case 2) If the user has been to the site before, they will not hit the HTTP site on frame00 (the “check” site) and are instead sent immediately to the STS site. The evil website can then begin to calculate who that user corresponds to. By pointing every frame at the HTTP sites (not the STS enabled SSL/TLS sites) and seeing which ones go to the HTTPS site anyway, the evil site can map those bits back to the corresponding user the site has seen before.
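A sketch of that readback, with hypothetical frame names and frame01 as the most significant bit:

```python
# Sketch of use case 2: every frame is pointed at plain HTTP; frames
# that jump to HTTPS anyway reveal an STS pin (bit = 1). Reassembling
# those bits recovers the user ID stored on the first visit.

def user_id_from_sts(went_https, num_frames=33):
    """went_https maps 'frameNN' -> True if the browser forced HTTPS."""
    user_id = 0
    for i in range(num_frames):
        user_id = (user_id << 1) | int(went_https["frame%02d" % (i + 1)])
    return user_id

# Observing the pattern STS/HTTP/STS/STS identifies visitor number 11:
observed = {"frame01": True, "frame02": False, "frame03": True, "frame04": True}
print(user_id_from_sts(observed, num_frames=4))  # prints 11
```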

This is one of those unfortunate examples where a good idea introduces another security flaw. The fix isn’t great though, as it reduces the effectiveness of STS in the first place by making it easier for the user to clean out. The whole point of STS is to pin the browser to a secure connection. So either that’s important to do, or it isn’t. If it isn’t, STS shouldn’t exist. If it is, the pin shouldn’t be cleaned. Either way, I don’t think STS is going to provide a lot of value without some more thinking. But for now, it’s a good chance for companies to play with a new way of securing their sites from man-in-the-middle attacks. Firefox is planning on implementing this soon as well. Overall, I was pretty happy with how Google handled the bug and fixed it, along with the dozen or so other bugs that were reported to them during the bug hunting contest. Hopefully, Google will continue to increase their diligence around privacy issues in their products in the future.

8 Responses to “Chrome Fixes STS Privacy Issue”

  1. Andre Gironda Says:

    Michael Coates reported this a few months ago — http://michael-coates.blogspot.com/2010/01/chrome-supports-strict-transport.html

    Also, check out this website/blog/post about STS — http://www.thesecuritypractice.com/the_security_practice/2009/12/new-rev-of-strict-transport-security-sts-specification.html

    I highly suggest you add them to your RSS feeds!

  2. RSnake Says:

    @Andre - I followed the link, and I didn’t see anything about the privacy flaws in STS. Are you sure that’s the right link and/or are you sure that we’re talking about the same thing?

  3. Andy Steingruebl Says:

    FYI - we had extensive discussions during the design phase of this spec about exactly this privacy concern. We came to the same conclusion, that any fix involves a tradeoff between privacy and security. Our own view was that security should win-out, but obviously there needs to be a way for a user to clean out their browser. Maybe it should be harder than cleaning the regular cookie-store though. A debate that I’m sure will continue in the future.

  4. Matt Says:

    Could it make more sense to put the SSL requirement into DNS(Sec)?

    Optionally you could include some details about the certificate in there that must match.

  5. RSnake Says:

@Matt - no, a malicious advertiser would in many cases control their own DNS, just as they’d necessarily control their website and their SSL certificate(s) to pull off the attack described above. It might be slightly less likely that they’d have access to DNS too, but in most cases it wouldn’t add an additional layer of defense.

  6. Matt Says:

    @RSnake - I didn’t have knowledge of it at the time of my post, but the concept I was trying to suggest was something like “HTTPSSR.”

    There is a blurb about it in section 5.4 of https://crypto.stanford.edu/forcehttps/forcehttps.pdf

    The draft is at http://lists.w3.org/Archives/Public/public-wsc-wg/2007Apr/att-0332/http-ssr.txt

    If you move the state out of the browser and into DNS that would solve the issue, correct?

  7. Arenlor Says:

    A bit late to the conversation but isn’t STS essentially:

    ServerName www.arenlor.com
    ServerAlias *.arenlor.com
    Redirect permanent / https://arenlor.com/

    or am I confused as to how it works?

  8. RSnake Says:

@Arenlor - it is essentially that, but the difference is that a man in the middle can remove that redirect and proxy the SSL/TLS connection over HTTP. A man in the middle can’t remove STS - or at least that’s the principle - because it’s a flag that, once set, resides in the browser, not on the server.