A company or ISP is worried about crackers. So it installs a firewall which blocks all ports except the ones for services it wants people to use, such as 80 (HTTP) and 443 (HTTPS). It then institutes a complicated policy and procedure for getting new ports opened, complete with risk assessments.
What’s wrong with this picture? Isn’t this just good security practice?
Well, users don’t want to use just HTTP, and application developers want people to be able to download and install their applications. So along comes HTTP tunnelling. Everything gets wrapped in HTTP, with a bunch of hacks to make connections appear persistent, and passed over port 80 – or encased in SSL and sent over port 443, where firewalls can’t inspect the traffic at all.
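The wrapping is simple enough that it can be sketched in a few lines. Here's a minimal illustration of stuffing an arbitrary binary payload into an HTTP POST (the `/tunnel` path and `X-Tunnel-Session` header are invented for the example – real tunnelling schemes each have their own conventions, but the idea is the same):

```python
import base64

def wrap_in_http(payload: bytes, session_id: str) -> bytes:
    """Encode an arbitrary protocol payload as an HTTP POST request.

    The session header is a hypothetical convention for faking a
    persistent connection on top of stateless HTTP transactions.
    """
    body = base64.b64encode(payload)
    headers = (
        b"POST /tunnel HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"X-Tunnel-Session: " + session_id.encode() + b"\r\n"
        b"Content-Type: application/octet-stream\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"\r\n"
    )
    return headers + body

def unwrap_from_http(request: bytes) -> bytes:
    """Recover the original payload on the receiving side."""
    head, _, body = request.partition(b"\r\n\r\n")
    return base64.b64decode(body)

msg = b"\x00\x01arbitrary binary protocol data"
assert unwrap_from_http(wrap_in_http(msg, "abc123")) == msg
```

To any firewall watching port 80, this is just another POST to a web server.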
Of course, HTTP is a stateless, transaction-oriented protocol, and not designed for this sort of use. As some outrageous hacks have proved – IP over DNS, for instance – you can tunnel practically anything over anything else, but the performance always degrades.
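Part of the degradation is plain byte overhead: every tunnelled transaction pays for HTTP headers, and binary payloads typically get base64-encoded on top. A rough back-of-the-envelope calculation (the 120-byte header figure is an assumption for illustration):

```python
import base64

payload = b"\x00" * 512                 # 512 bytes of real protocol data
encoded = base64.b64encode(payload)     # base64 inflates size by ~4/3
headers = 120                           # assumed request line + headers
total = len(encoded) + headers
print(f"{total / len(payload):.2f}x bytes on the wire")
```

And that ignores the latency cost of setting up a fresh request–response round trip for traffic that wanted a persistent connection in the first place.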
So, the entire point of ports is circumvented, and we end up with applications which perform worse on the client and take more resources on the server, and security which is no better. Everyone loses – network admins, users and application developers.
So what’s the conclusion? Default-deny firewall configurations are not actually more secure, because their end effect is to cause users and applications to bypass the system. And, particularly with tunnelling via SSL, there’s nothing an admin can do about it, assuming they want HTTPS to keep working.