This is the (belated) third of three posts relating to So Long And No Thanks For The Externalities. At least I got them all out in the same decade :-)
First off, the author notes the often-found result that setting a site’s favicon to the lock symbol will help fool users (as will putting a lock in the page body). Although it won’t solve the entire problem, on general principles my view is that (now that tabbed browsing is established as the way everyone browses) we shouldn’t put favicons in the URL bar, only on the tabs. The principle is that the URL bar should be entirely trusted, and not controllable (apart from the contents of the URL, obviously) by the site being visited.
This, incidentally, is an issue with tabs-on-top. It puts the browser-controlled bit of the UI (the URL bar and toolbar) between two page-controlled bits of the UI (the title/favicon, and the page itself). That makes it harder to educate the user about what is trustworthy and what is not. I can’t see any way around the idea that some bits of the UI a user is faced with are trustworthy and some are not (unless you make it all untrustworthy). If there must be this split, keeping a geographical distinction between the two types must help. Logically, I’m a fan of tabs-on-top. But I can see security disadvantages too.
I absolutely agree that a goal of secure website UI design should be to minimise the number of errors encountered, while still maintaining security. Repeated warnings habituate users into ignoring them – although we have made them harder to ignore in recent Firefoxes. I would be very interested in research which looked at whether this has had an impact on the number of bad certificates out there. The trouble is that this number is hard to measure: just because a scanner can’t trace a cert back to a trusted root doesn’t mean that the intended users of that website can’t.
Unfortunately, some suggestions people commonly make for eliminating warnings (e.g. “just quietly show an HTTP UI if the cert is self-signed”) have security holes at the edge cases.
STS (Strict Transport Security) is a great example of a way we can improve security without the user noticing anything different. For sites which use it, it entirely solves the problem raised in Section 5.2, where an attacker can intercept and redirect an initial HTTP request before the HTTPS session is established.
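For illustration, the whole mechanism is a single response header sent over a good HTTPS connection; having seen it once, the browser refuses to load that site over plain HTTP for the stated period (syntax as in the STS drafts; the exact form may have changed since):

    HTTP/1.1 200 OK
    Strict-Transport-Security: max-age=31536000; includeSubDomains

Crucially, the user sees nothing: the http-to-https upgrade happens inside the browser, before any request leaves the machine.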
At the end of the section, the author makes the astonishing claim that:
In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.
Even if that were true, if we removed all certificate errors and just blindly trusted any SSL connection, the situation would change rather rapidly. He comes close to admitting this in the following passage:
Of course, even if 100% of certificate errors are false positives it does not mean that we can dispense with certificates. However, it does mean that for users the idea that certificate errors are a useful tool in protecting them from harm is entirely abstract and not evidence-based. The effort we ask of them is real, while the harm we warn them of is theoretical.
The assertion that the benefit of certificate errors (a side effect of certificate checking) is not evidence-based is like asserting that there is no evidence that forbidding guns and grenades on planes reduces the risk of hijacking. No-one wants to try the alternative to gather the evidence!
The fact that most phishing sites don’t use SSL is because users don’t look for SSL or site identity. And that’s a UI and education problem we could fix, if the world put its mind to it. After all, most people now automatically buckle up when they get in a car. And now that we have EV, we can build a site identity system on rock rather than sand.
He does have an important point, though. I think virtually 100% of certificate errors ARE false positives. At least I’ve never seen a certificate error that I thought was not. As a matter of fact, I just had an experience in which an expired certificate caused me to delay action and to lose some money. While I do wish the certificate hadn’t expired, investigating and ultimately deciding that the certificate was (probably? hopefully?) trustworthy also cost me quite a bit of time.
But the cost in time is only part of it. The average user has no reasonable way of determining the validity of a warning, but sees a lot of obvious false positives and therefore ignores all of them.
I have no idea what to do about it, but the certificate warnings are a real problem. It’s quite possible that they have been rendered almost totally ineffective.
In general, the average user has to do too much and to know WAY TOO MUCH to stay safe on the Internet, and so rationally prefers to stay ignorant.
Yes, almost all are false positives. But that’s because people don’t think it’s worthwhile to try and fool people using invalid certs. Surely that’s a good thing, in one significant sense? And if we removed the checking, they would try – we’d get lots of true positives (theft attempts) but have no way to know about them.
Yes, it’s a little like vaccinations. But in this case I think the false positives accustom people to ignore all security warnings. Something needs to be done to reduce the red flags, although I don’t know exactly what.
There are actually quite a few complaints from users about excessive warnings. For example, some complain about the difficulty of deploying a large number of self-signed (or other) certificates for internal users. Users also commonly complain about other types of warnings that may or may not have anything to do with actual security.
I’ve got a couple of bugs open, 433412 and 433422, which might have contributed a little toward clarity and simplicity, but they probably aren’t going anywhere.
The first time I saw a certificate error, it was, IIRC, in IE, back when I was a fresh user of Windows 3.1. At the time, it still made me think twice; then I decided that I could afford to trust the site (a Microsoft site, IIRC), and clicked “Accept”. Later (rather soon afterwards, in fact), like many others, I learnt to accept these kinds of “dubious certificates” almost as a matter of course.
Nowadays I’m on Linux, using SeaMonkey 2, a little more certificate-conscious than I used to be, and when Larry pops up a XUL error page for an untrusted certificate it again makes me think twice. But I’ve already seen “untrusted” certificates on some of Mozilla’s own sites (e.g. the site used to test new versions of Bugzilla, advertised in the banners displayed on top of BMO pages just before or just after an upgrade), and if nothing changes, I think that 3 years from now I’ll trust the certificates which Larry doesn’t, just like 3 years ago I trusted (without thinking) the certificates which IE didn’t.
Yeah, certificate errors for me are always either Bugzilla installations, or our development machines at work. Never seen one under any valid circumstances…
I saw a phishing attempt for Citibank a few years ago that used the wrong SSL certificate. I can’t remember it exactly, but I think it was mentioned on some security mailing list at the time, and later I found one of those in my spam folder. But I guess it’s not yet widely used.
What happens more often is that a site puts some of its content on an external server (a CDN), and forgets to update the SSL certificate to match.
Worth pointing out that while the majority of cert errors aren’t problems, most of the ones I’ve seen come up in places where I don’t care /that/ much about whether I’m secure.
Just a few minutes ago I clicked on a link to a bug database that brought up a cert error. I was just going there to read a bug, so I didn’t think twice about ignoring the error. I mean, I wasn’t even signing in to anything!
That’s the case most of the time I see cert errors.
But if I was logging into gmail or entering my credit card info on paypal, you’d better believe that I’d pay attention! Possibly I’m a little more cautious than the average user, but I think that most would pay attention if a warning came up on a site they didn’t normally have issues with.
*Most* certificate errors happen because the date on the user’s computer is way off and so the certs appear to be expired when they are in fact still valid. This frequently happens (the computer thinks it’s 2950 or some jazz) when the little battery on the motherboard runs out of juice, typically once the computer is 5-10 years old. Users don’t, as far as they know, actually *use* the date part of the system time, so they don’t usually bother to fix it, and even if they do it’s messed up again the next time they turn the computer on. Browser developers are unlikely to encounter this situation because they buy new computers much more frequently than the population at large.
And yeah, most of the rest happen because somebody forgot to renew the cert. It’s tempting to suggest that the warning should be somehow gentler if the cert only expired recently, but I’m not sure what that would mean in terms of specific UI.
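For what it’s worth, a minimal sketch of how a client could compute an expiry “grace period” to drive such a graduated warning. It assumes a recent version of the third-party cryptography package for certificate parsing, and the 14-day window is invented purely for illustration:

    import socket, ssl
    from datetime import datetime, timezone

    def days_since_expiry(host, port=443):
        """How long ago did this host's certificate expire?
        Negative means it is still valid."""
        from cryptography import x509  # third-party; pip install cryptography
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # we want to inspect bad certs too
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        return (datetime.now(timezone.utc) - cert.not_valid_after_utc).days

    # One possible policy (thresholds invented): < 0 days, no warning;
    # 0-14 days, a mild notice; beyond that, the full-page error.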
Not that it matters very much. As I’ve said before, I remain unconvinced that certs as used by https provide any real security to the user, since it’s extremely trivial for the bad guys to get a cert if they feel they need one (yes, even with EV), and the browser does not warn the user if the server coughs up a different cert than last time — which is by far the most important thing to warn about.
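To make the “different cert than last time” idea concrete, here is a minimal trust-on-first-use sketch in Python, in the spirit of SSH’s known_hosts. The pin-store location is hypothetical, and a real implementation would ask the user before updating a changed pin rather than silently overwriting it:

    import hashlib, json, os, socket, ssl

    PIN_FILE = os.path.expanduser("~/.cert_pins.json")  # hypothetical store

    def cert_changed(host, port=443):
        """Return True if this host presents a different certificate
        from the one recorded on a previous visit."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # we pin the raw cert instead
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        seen = hashlib.sha256(der).hexdigest()
        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        changed = host in pins and pins[host] != seen
        pins[host] = seen  # a real UI would ask the user first
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return changed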
> For example, some complain about the difficulty of deploying a large
> number of self-signed (or other) certificates for internal users.
It ought to be possible to solve this one without much impact on anyone else, simply by providing a mechanism for the network administrator to publish information about a local CA that all the workstations will then pick up and use. There are several obvious ways such information could be distributed: DHCP, DNS, LDAP, or even Group Policy if you’re into that sort of thing. DNS is probably the easiest to implement: on startup, the browser issues a query for some record that would not normally be forwarded to an outside nameserver (so either your local nameserver has an answer, or it says there isn’t one), reads out the data, and trusts the indicated CA. If I understand DNS properly, no other software would have to change except the browser. And from a security perspective, if you can’t trust your own name resolver not to lie to you, you’re in pretty serious trouble. But like I said, there are other ways to do it too.
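A minimal sketch of the DNS variant, assuming the third-party dnspython package and a hypothetical TXT record name (_local-ca.<search-domain>) whose value is a base64-encoded DER certificate; as the reply below notes, the record would still need to be authenticated somehow:

    import base64
    import dns.resolver  # third-party; pip install dnspython

    def fetch_local_ca(search_domain):
        """Ask the local resolver whether the network publishes a site
        CA certificate; returns DER bytes, or None if it doesn't."""
        try:
            answers = dns.resolver.resolve(f"_local-ca.{search_domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        # TXT data may arrive split into several chunks; join them.
        txt = b"".join(answers[0].strings)
        return base64.b64decode(txt)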
> But in this case I think the false positives
> accustom people to ignore all security warnings.
I’m sure they do. What was even worse was the stupid “warn me every time I submit unencrypted search terms to a search engine via http” security option that used to be the default in every major browser until relatively recently. The conditioning (to automatically click away security warnings) caused by that one will be with us for decades to come.
> Not that it matters very much. As I’ve said before, I remain unconvinced that certs as used by https provide any real security to the user, since it’s extremely trivial for the bad guys to get a cert if they feel they need one (yes, even with EV), and the browser does not warn the user if the server coughs up a different cert than last time — which is by far the most important thing to warn about.
Any old cert won’t do, of course – it has to be one for the site in question. Having made that clarification: evidence of “extremely trivial”ness, please.
> And from a security perspective, if you can’t trust your own name resolver to not lie to you, you’re in pretty serious trouble.
Then a lot of the Internet is in pretty serious trouble, by your measure. DNS is not secure.
Gerv:
Thanks for the detailed comments on SoLongAndNoThanks and the insights. OK, I acknowledge that the line you single out:
“In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.”
was being a little provocative :-) The point I wanted to make is that the user has never seen anything to suggest that the annoyances are there for a purpose. That said, so many of the emails and comments I’ve got have flagged this line that it’s clear I should have worded it better. I completely agree that even 100% false positives doesn’t mean we can get rid of the technology. Guns on planes is a good example; the smoke detector in my kitchen is another. But in both of those cases people have some idea of what the technology is trying to do, and why it’s important. With cert errors I don’t think they do, and that probably lowers the number of false positives people will tolerate before mentally giving up. What we should do in the case of cert errors isn’t as simple as the strong password and URL reading stuff; i.e. “just stop annoying people” probably isn’t the right answer. In SoLongAndNoThanks I was (self-indulgently I admit) pointing out the problem without offering a robust solution.
Anyhow, thanks for the discussion. I’m very encouraged that people seemed to find this paper useful.
> Any old cert won’t do, of course – it has to be one for the site in question.
No, it doesn’t.
You just do a redirect to the domain for which you do have a certificate. Sites do redirects so often that even if the user notices, they don’t think anything of it.
An individual site can theoretically partially protect itself from this by closing port 80 and requiring that only https ever be used to connect to it (even initially), so that all its users’ bookmarks and habits and search engine results and such will necessarily all be https. But you can count the major sites that do this on the fingers of one hand. Everyone leaves the http service in place, redirecting to the https service, because people don’t type in the protocol (and the browser assumes http if you don’t tell it otherwise).
(And even closing off port 80 still doesn’t protect against phishing, but that’s another matter.)
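A sketch of just how little the attacker needs: a man-in-the-middle who can answer the victim’s plain-HTTP request sends a one-line redirect to a look-alike domain for which he holds a perfectly valid certificate, so no certificate error ever fires. Hostnames here are invented, and binding port 80 normally requires elevated privileges:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Intercept(BaseHTTPRequestHandler):
        def do_GET(self):
            # Victim asked for http://bank.example/...; bounce them to
            # the attacker's own, validly-certified, look-alike domain.
            self.send_response(302)
            self.send_header("Location",
                             "https://bank-secure.example" + self.path)
            self.end_headers()

    HTTPServer(("", 80), Intercept).serve_forever()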
Ugh. I just realized it’s worse than I thought.
Previously I said it was important to warn the user if the cert is different from before. But that only works if most of the users are technically inclined (as with ssh). For https, it doesn’t solve the problem. Still worth doing, but there’s a gaping hole: all the attacker has to do to get around it is use an http redirect.
And then it hit me: it’s even worse than that. All the attacker really has to do is spoof search results from Google or MSN or Yahoo and point the user to their bogus URL instead of the real one. Most users arrive from search engines, even when they’ve visited the site a thousand times before.
So as long as http exists and is widely deployed, https will never be secure for most users.
Jonadab: solving the problem you raise is what ForceTLS is all about.
Even with DNS the way it is, this is not “extremely trivial” in practice (attacking arbitrary targets), nor is inserting HTTP redirects into people’s browsing sessions.
In my view, the problem with that particular suggestion is pretty fundamental, not an edge case. An HTTPS link that I find in a web page or email has the key semantic that I am supposed to access the target “securely”, for some suitable definition of secure. If that isn’t possible, I want the page load to fail; I don’t want to count on myself to notice the absence of the SSL badge.
The problem with the “guns” analogy is that guns have frequently been carried onto planes, but people don’t know it. That is, we are dealing with our fears, not with science, in discussing the question. So the analogy fits, but it doesn’t return an easy result :)
On Cormac’s statement:
> “In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.”
I used to point out (frequently) that there is no evidence of the aggressive use of a false certificate, and no evidence of harms. I remain surprised that there is still little or no evidence, as we now have plenty of hearsay claims of cafe MITMs, etc. It should really be an easy thing to survey which cafe hotspots are attacking certs with MITMs.
Either this proves beyond doubt that the system works, or it means that breaching certificates offers the attacker no gain over simply avoiding them – the alternative is cheaper than the breach, and the system is wrongly directed … hence phishing, which has documentable and large harms. “Clear and present danger” and all that …
The point being that in order to separate our fears from our science, we need hard data. Which we really haven’t got. So the debate is based on fear, which typically means it isn’t informed by the central issues.