I had a mad idea last week, which I shared with the NSS team. The fact is that some companies want to monitor everything going into and out of their network. My view is that, as it’s their network, it’s legally their right, and it’s OK with me morally too, as long as everyone using the network is aware of it.
However, the current SSL trust model makes this MITMing of all connections very difficult (which is a good thing, in many ways). Companies such as BlueCoat sell boxes which will MITM SSL connections and log the data, but browsers will complain that the auto-generated certs presented are not trusted. Companies are supposed to deploy their own root to all endpoints – but this is a massive administrative hassle, particularly for mobile devices. As we have found out anew recently, this creates an incentive for trusted CAs to sell trusted intermediate certificates to these big companies. However, such certificates could potentially be abused to silently MITM anyone.
So my mad idea was that Firefox should have one cert in the root store for which the private key was published. However, when an SSL connection occurred which chained up to that root, the browser would bring up an irremovable red infobar which said: “Your connection is not private – all data transferred is being monitored by X”, where X was the O field from the intermediate cert being used. (We would require the use of exactly one intermediate.) If the O field was empty, it would say “by Unknown Attackers”, or something equally scary.
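The proposed check can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical internal representation (each cert as a dict with `fingerprint` and `subject_O` keys, chain ordered leaf to root); none of these names are real browser APIs:

```python
# Hypothetical sketch of the proposed browser-side check.
# The fingerprint value and cert dicts are illustrative placeholders.

# The well-known root whose private key would be published.
PUBLISHED_ROOT_FINGERPRINT = "published-root"

def monitoring_banner(chain):
    """Return the infobar text for a monitored connection, or None.

    `chain` is a list of cert dicts ordered leaf -> root.
    """
    root = chain[-1]
    if root["fingerprint"] != PUBLISHED_ROOT_FINGERPRINT:
        return None  # chains to an ordinary trusted root: no banner
    # The proposal requires exactly one intermediate between leaf and root.
    intermediates = chain[1:-1]
    org = None
    if len(intermediates) == 1:
        org = intermediates[0].get("subject_O")
    org = org or "Unknown Attackers"  # empty O field -> scary default
    return ("Your connection is not private - all data transferred "
            "is being monitored by " + org)
```

The infobar would then be rendered irremovably whenever this function returns a non-`None` string.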
This week I found Phillip Hallam-Baker of Comodo proposing something very similar on the “therightkey” mailing list:
What I find wrong with the MITM proxies is that they offer a completely transparent mechanism. The user is not notified that they are being logged. I think that is a broken approach, because the whole point of accountability controls is that people behave differently when they know they are being watched.

I don’t mean just changing the color of the address bar either. I would want to see something like the following:

0) The intercept capability is turned on in the browser; this would be done using a separate tool, and would lock the browser to a specific intercept cert root.

1) User attempts to connect to https://www.example.com

2) Browser throws up a splash screen for 5 seconds stating ‘Your connection has been intercepted’.

3) Business as usual.

The splash screen would appear once per session with a new host, and it should show the interception cert being used as well.
Phil’s point 0 rather defeats the purpose – if you had to reconfigure the browser, then companies would just add their own root. But if the capability were built in by default, his point 0 is not necessary. He is right that you’d need a splash screen or confirmation step – we can’t send initial data or cookies or anything until we know the user knows they are being MITMed and gives permission to continue.
What do people think?
I already find it questionable enough when businesses do something like this. But you shouldn’t limit the discussion to businesses – countries like Iran and China will be extremely happy about this feature. The notification you mention actually provides extra value for them, it sends the message: “there is no privacy, Big Brother is always watching you”. And you might know that this message contributes more to the effectiveness of the Great Chinese Firewall than the censorship itself. Are these really the organizations you want to help?
IMHO, MITM should *not* be simple. It should *not* be easy to set up. It should *not* be easy to eliminate the loopholes. Then whoever considers using it (be it against their own employees or their own citizens) will always have to think twice about whether it is really worth the effort. And then you will only see MITM deployments done by the most radical (or should I say paranoid?) companies/countries. If you make it simpler, you risk seeing it deployed in more moderate environments as well – definitely not a goal worth pursuing.
This means we could see it on the update ping, right? Could we always display the infobar if we know there would be such an MITM proxy there, even for HTTP? Or would that make the whole thing too awful, UX-wise?
And furthermore, how do we let people turn this off? An always-present red bar is pretty annoying. You could make a whitelist for some values of the O field, but then of course people could create their own certs that also pretend to be that corporation/organization, and then really MITM people without being detected…
These points taken together mean that if you had no warning for MITM on HTTP, and then an unremovable red bar for HTTPS, that would create an incentive for users NOT to use SSL, as the browser doesn’t complain about the insecure connection, “so it must be more secure”. That doesn’t sound so attractive to me. :-(
If we see an MITM on the update ping, we’re not necessarily always certain that it’s total. So I don’t think displaying a warning for HTTP would be the right thing.
I would suggest it’s not possible to turn it off. Remember, on a corporate network, it wouldn’t appear for internal sites if they had normal certs. And you’d want it to appear for every external site.
As for your last point, websites which want their users to be always secure shouldn’t be available over HTTP :-) Then users can’t switch to it.
This feature merely makes more obvious what is currently happening. I see that as a win for transparency and user choice. If there is no privacy in China, then your browser telling you so is a good thing, not a bad thing.
I don’t see MITM in companies as companies using it “against” their employees; they are trying to protect their own data, which to me is fair enough. If you don’t like their network policies, don’t access personal sites over their network. Use your phone instead.
It’s additional surface area. The entire concept is dependent (amongst other things) on that one string being completely impenetrable.
Would it, for example, adequately deal with a right-to-left Arabic control character – or would it be possible to push the string off the end with whitespace and replace it with your own message like “Congratulations – Firefox security is active!” (silly example).
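The concern can be made concrete: any banner built from an attacker-chosen string would at minimum need to strip Unicode format/control characters (which include the bidirectional overrides) and collapse whitespace padding. A rough sketch, where the function name and length limit are made up for illustration:

```python
import unicodedata

# Illustrative sketch: the O field cannot be trusted raw. Unicode
# bidirectional overrides are in the "Cf" (format) category and can
# reorder or mask displayed text; runs of spaces can push the real
# message out of view. A banner should strip and truncate first.

def sanitize_org_name(raw, max_len=64):
    """Strip format/control characters (incl. RTL overrides) and truncate."""
    cleaned = "".join(
        ch for ch in raw
        if unicodedata.category(ch) not in ("Cf", "Cc")
    )
    cleaned = " ".join(cleaned.split())  # collapse whitespace-padding tricks
    return cleaned[:max_len]
```

Even this only addresses the rendering tricks named above; as the comment notes, the deeper problem is that the whole design leans on one attacker-controlled string being impenetrable.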
What happens if the intermediate you create is called Google and you’re visiting google.co.uk? The bar pops up with: “Your connection is not private – all data transferred is being monitored by Google”.
‘Well!’ the user exclaims in an annoyed fashion: ‘that’s okay, I’m visiting google.co.uk!’
As with all backdoors, it’s essentially writing a security hole into your application and then hoping that, as technology, time and knowledge move on, your code will adequately prevent a real attacker from abusing that hole. The idea is fundamentally bad security practice.
This doesn’t even begin to look at scenarios like over-zealous security measures which might dilute the effect of the message by accident or a regression which allows MITM attacks without the message (such as a chrome update).
In addition, user education regarding validation, certificates and MITM is a huge task as it is. This would add a whole new layer of confusion for users.
This is on top of the moral implications of just handing the keys to the castle over to dodgy companies/states to let them do whatever morally dubious thing they want rather than actively trying to prevent it.
So, in summary, whilst an innovative concept I think it should remain just a concept for now.
Just use the already-present colour indicator in the awesome bar (blue for certs, green for EV certs, etc.): make it red (or something else). Plus a modal splash screen once per browser session (i.e. upon the first MITM request after a browser restart) and/or once per host session (i.e. the warning is displayed for each newly opened site, e.g. Facebook, Google, …). The latter should be configurable, with per-host as the default.
Whilst it may be your reasoned opinion, I’m not sure this doesn’t directly contradict the Mozilla Manifesto (particularly 5):
“4. Individuals’ security on the Internet is fundamental and cannot be treated as optional.
5. Individuals must have the ability to shape their own experiences on the Internet.”
I don’t think Firefox needs to make this use case any easier. Firefox already has a MITM warning: the certificate warning page, which stops the user from proceeding unless dealt with. Corporate users who want to intercept all traffic should create their own internal CA certificate, sign certificates for MITMed connections with it, and tell users on their network to install that certificate to allow traffic monitoring.
That solves the problem without also opening up a hole for people to get MITMed on any random connection just by not paying close enough attention.
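For reference, the internal-CA deployment described above can be set up with stock OpenSSL. The filenames, subject names, and validity periods below are only examples:

```shell
# Example sketch: create an internal CA and sign a cert for a MITMed host.
set -e
# 1. Generate the internal CA key and self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/O=Example Corp/CN=Example Corp Internal CA"
# 2. Generate a key and signing request for the host being intercepted.
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
  -subj "/CN=www.example.com"
# 3. Sign the host certificate with the internal CA.
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out host.crt -days 365
# 4. Verify the chain, as a browser with ca.crt installed would.
openssl verify -CAfile ca.crt host.crt
```

The MITM box repeats steps 2–3 on the fly for each intercepted hostname; the hard part, as the thread discusses, is getting `ca.crt` onto every endpoint.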
If creating their own internal CA and installing it on clients were easy and practical, they’d all be doing it. In these days of heterogeneous endpoints such as phones and tablets, many of which run OSes such as Mac OS or Android where there aren’t good group deployment tools (and the story’s not yet awesome on Firefox either), deploying a cert infrastructure like this is _hard_. And when they do manage it, you end up with people not being notified that they are being MITMed, which is worse than if we did it the way suggested above.
I repeat, I don’t think we need to make this use case any easier. If people want to MITM all traffic on their network, they certainly can, but I don’t think we should specifically add mechanisms to make it easier. That implies endorsement of the practice, rather than simple acceptance of the possibility.
Standardizing a private key has several other serious security problems, as well. Anyone who can run a traffic sniffer on the network can decrypt all encrypted traffic as well (other than encryption using ephemeral keys with forward secrecy, which hasn’t become widely used yet). Corporate users almost certainly don’t want that.
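The forward-secrecy point can be illustrated with a toy (non-TLS) ephemeral Diffie-Hellman exchange: the session key is derived from fresh per-connection secrets, so holding a certificate’s long-term private key does not let a passive sniffer recover it. The prime and generator here are toy choices for demonstration, not real TLS parameters:

```python
import secrets

# Toy ephemeral Diffie-Hellman (NOT real TLS): each side picks a fresh
# secret per connection, so the shared session key is independent of any
# long-term certificate key a sniffer might hold.
P = 2**127 - 1  # a Mersenne prime; toy-sized, not a real DH group
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1  # fresh per-session secret
    return priv, pow(G, priv, P)

a_priv, a_pub = dh_keypair()   # client's ephemeral pair
b_priv, b_pub = dh_keypair()   # server's ephemeral pair

# Both sides derive the same session key; only a_pub/b_pub cross the wire.
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)
```

With static-RSA key exchange, by contrast, the session key is encrypted to the certificate key itself, so publishing that key hands every recorded session to any sniffer on the network.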
In addition, if people get used to seeing the MITM warning, they’ll start ignoring it in other settings where they shouldn’t. Having each prospective snooper create their own key for users to add ensures that users subject to MITMing on one connection will not get MITMed on another connection, because they only accept the one key.
That said, I do think it might make sense to have a mechanism for end users to flag a certificate as used for MITM connections when adding it to their locally accepted certificates. That way, users will get an appropriate warning infobar or similar when a connection uses that certificate, and future enhancements could support a required click-through before reaching the site, to ensure that users don’t unintentionally expose more of their data than they intended.
If you think about it, there is really nothing different between this new cert and self-signed certs. Everyone using a self-signed cert right now could just use this new special cert instead. So this would be equal to just replacing the self-signed cert error page with a red infobar at the top. However, there are good reasons why browsers make it very hard to click through the self-signed cert page: it is needed in order to protect against threats other than your “acceptable MITM”.
The organizations will just have to deal with distributing their own CA cert. If Firefox APIs make it difficult to automatically import such certificate through the normal deployment tools used by organizations in Windows environments, you could fix that.
I’d rather have monitoring signalled as part of the protocol *and* only for session keys: see draft-nir-tls-keyshare-00. I can see why *some* companies need to monitor traffic, but certainly not all of them.
I don’t believe that most organizations doing this appreciate transparency like you, Gerv. If we gave them this feature, I imagine that Mike Kapley or one of those guys would provide enterprises with a simple off switch for the user notification part and we’d be right back where we started with opaque user monitoring and tracking. Even if deployment managers didn’t turn it off, users would want to.
Asa: if enterprises were able to deploy such a fix, they could also deploy their own roots, or (for that matter) deploy an extension which disabled the self-signed cert warning. If the eavesdropper can modify your browser, they can make it appear however they like anyway. But at the moment, even if an organization _wants_ to be transparent, it’s hard. They can put stuff in their network usage policy, but who reads that?
Also, it could be that other browsers go with this solution, including mobile browsers – which are much harder to extend or update than Firefox.
Yeah. OK. That makes sense to me.
I approve in principle of a “this connection is overtly monitored by X” mode for the specific cases where it’s unavoidable (monitoring for disclosure of confidential information seems to come up a lot), but I think that the tactic you suggest is going to enable attacks that are worse than the status quo. Anonymous upthread is correct to say that this would enable a rogue device on the monitored network to eavesdrop on all traffic through the proxy; there might be a TLS handshake mode that defends against this, but I’m not sure, and I suspect people would not configure their MITM box correctly anyway. I can think of two other attacks: first, anyone can sign an intermediate cert with this key, so there is nothing to stop anyone masquerading as the “legitimate” overt monitor on a particular device. Second, an attacker with access to network infrastructure could place a second man-in-the-middle in front of a target that does this interception; the attacker’s red bar would be masked by the usual red bar.
I am also concerned about this being yet another security UI element that users will learn to ignore. If I’m used to seeing this red bar on my work laptop, and then it comes up again when I take it to a coffee shop while I’m on vacation, I’m probably not going to notice that the label is different now.
Off the top of my head, I don’t know what a better solution is, but I think we might find one by also thinking about the existing capabilities of proxy autoconfig and how those might be extended, and the closely-related problem of “captive portals” on wifi networks.
I like that, except for the server opt-out, which is designed such that every TLS server in the world has to be upgraded to comprehend the new messages before the protocol is deployable to clients. It should be designed such that only server admins who want to opt-out have to upgrade.
Philosophical addendum: I am not convinced that such monitoring is ever ethically acceptable, but I think that needs to be addressed politically rather than technically. If people would rather break the security of the existing system than accept its design intent, we don’t get anywhere useful by continuing to design systems that can’t cope with that.
Guess there is always a way to mess up the already badly broken CA-Web-of-trust idea of SSL/TLS even more.
I value that you introduce this proposal as a “mad idea”. Let me add that it is not only mad but also outright dangerous, as proven by countless abuses of technology by companies and “free world” governments alike.
I’m not sure I’m convinced that for people under regulatory supervision who require this, rolling out a private CA for a limited number of MITM boxes is that hard. If you’re in that situation, you’re probably already using management tools to roll out software, and audit what’s running.
I’m pretty sure from what I’ve experienced as a user that rolling a root cert out for IE on a managed Windows network, or for iOS is not hard. Whether that’s true for Firefox (desktop/mobile) is a very different question, and where I’d focus any engineering effort.
As for transparency for users, as is noted above, for trustworthy institutions, the employees will know because they’ve been told, and the knowledge is one click away by looking at the cert issuer. If it’s not one of those institutions, then regardless what you do, the institution can make the browser look secure.
It is a little tempting to make connections chaining to a custom root look a little different in the URL bar, to remind people who use Firefox at home and at work/wherever which level of privacy they have. Not sure if that’s worth the effort though.
Good technical points, and more of a good reason that this won’t work than any of the philosophical arguments I’ve seen :-)
Interesting point that I hadn’t considered. Perhaps it’s the case that the manifesto says “Internet” specifically – I suspect Mozilla does not want to be in the business of mandating what owners of private networks can and can’t do.
At the moment, individuals’ security is being treated as optional because they can be silently MITMed – either by a trusted CA selling an intermediate (although we plan to stop that) or by a company root cert. At least with this plan, they would have knowledge of what was happening.