This is the (belated) third of three posts relating to So Long And No Thanks For The Externalities. At least I got them all out in the same decade :-)
First off, the author notes the often-found result that setting a site’s favicon to the lock symbol will help fool users (as will putting a lock in the page body). Although it won’t solve the entire problem, on general principles my view is that (now that tabbed browsing is established as the way everyone browses) we shouldn’t put favicons in the URL bar, only on the tabs. The principle is that the URL bar should be entirely trusted, and not controllable (apart from the contents of the URL, obviously) by the site being visited.
This, incidentally, is an issue with tabs-on-top. It puts the browser-controlled part of the UI (the URL bar and toolbar) between two page-controlled parts (the title/favicon, and the page itself). That makes it harder to educate the user about what is trustworthy and what is not. I can’t see any way around the idea that some parts of the UI a user is faced with are trustworthy and some are not (unless you make it all untrustworthy). Given that split, keeping a geographical distinction between the two types has to help. Logically, I’m a fan of tabs-on-top, but I can see security disadvantages too.
I absolutely agree that a goal of secure website UI design should be to minimise the number of errors encountered while still maintaining security. Repeated warnings habituate users into ignoring them – although we have made them harder to ignore in recent Firefoxes. I would be very interested in research into whether this has had an impact on the number of bad certificates out there. The trouble is that this number is hard to measure: just because a scanner can’t trace a cert back to a trusted root doesn’t mean that the site’s intended users can’t.
Unfortunately, some suggestions people commonly make for eliminating warnings (e.g. “just quietly show an HTTP UI if the cert is self-signed”) have security holes at the edge cases.
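To make the edge case concrete, here is a sketch in terms of Python’s standard ssl module (my choice of illustration, not anything from the paper): silently accepting a self-signed cert is, in effect, turning certificate verification off altogether, which also accepts the certificate an active attacker presents.

```python
import ssl

# A browser's default behaviour: verify the chain and the hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The "just quietly show an HTTP UI" proposal effectively downgrades to this:
# accept any certificate at all, self-signed or attacker-supplied alike.
ctx.check_hostname = False  # must be disabled before switching to CERT_NONE
ctx.verify_mode = ssl.CERT_NONE
```

The point of the sketch is that the relaxed mode cannot distinguish a harmless self-signed cert from a man-in-the-middle’s cert – that is the edge case where the “quiet” UI leaks security.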
STS (Strict Transport Security) is a great example of a way we can improve security without the user noticing anything different. For sites which use it, it entirely solves the problem raised in Section 5.2, where an attacker can intercept and redirect an initial HTTP request before the HTTPS session is established.
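For reference, a site opts into STS by sending a single response header over HTTPS, something like the following (the max-age value – one year, in seconds – and the includeSubDomains directive are illustrative):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once a browser has seen this header, it rewrites future HTTP requests to that site as HTTPS before they ever leave the machine, so there is no insecure first request for an attacker to intercept.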
At the end of the section, the author makes the astonishing claim that:
In fact, as far as we can determine, there is no evidence of a single user being saved from harm by a certificate error, anywhere, ever.
Even if that were true, if we removed all certificate errors and just blindly trusted any SSL connection, the situation would change rather rapidly. The author comes close to admitting this in the sentences that follow:
Of course, even if 100% of certificate errors are false positives it does not mean that we can dispense with certificates. However, it does mean that for users the idea that certificate errors are a useful tool in protecting them from harm is entirely abstract and not evidence-based. The effort we ask of them is real, while the harm we warn them of is theoretical.
The assertion that the benefit of certificate errors (a side effect of certificate checking) is not evidence-based is like asserting that the claim that forbidding guns and grenades on planes reduces the risk of hijack is not evidence-based. No-one wants to try the alternative in order to gather the evidence!
Most phishing sites don’t use SSL because users don’t look for SSL or site identity. That’s a UI and education problem we could fix, if the world put its mind to it. After all, most people now automatically buckle up when they get into a car. And now that we have EV, we can build a site identity system on rock rather than sand.