Don’t Pin To A Single CA

If you do certificate pinning, either via HPKP, or in your mobile app, or your IoT device, or your desktop software, or anywhere… do not pin solely to a single certificate, whether it’s a leaf certificate, intermediate or root certificate, and do not pin solely to certificates from a single CA. This is the height of self-imposed Single Point of Failure foolishness, and has the potential to bite you in the ass. If your CA goes away or becomes untrusted and it causes you problems, no-one will be sympathetic.
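As a sketch (Python, with placeholder byte strings standing in for real DER-encoded SubjectPublicKeyInfo structures), a pin check that avoids the single point of failure accepts any of several pinned key hashes, including at least one backup key from a different CA:

```python
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """SHA-256 hash of a DER-encoded SubjectPublicKeyInfo, hex-encoded."""
    return hashlib.sha256(spki_der).hexdigest()

# Illustrative pin set: keys issued by MORE THAN ONE CA, plus an offline
# backup key held in reserve, so losing one CA doesn't brick your clients.
PINNED = {
    spki_pin(b"primary-ca-key"),     # key currently in use
    spki_pin(b"backup-ca-key"),      # same service, certified by a different CA
    spki_pin(b"offline-backup-key"), # reserve key, not yet issued anywhere
}

def pin_ok(chain_spkis: list[bytes]) -> bool:
    """Accept the connection if ANY key in the presented chain matches a pin."""
    return any(spki_pin(s) in PINNED for s in chain_spkis)
```

HPKP required a backup pin for exactly this reason; the same principle applies to hand-rolled pinning in apps and devices.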

This Has Been A Public Service Announcement.

Firefox Secure Travel Addon

In these troubled times, business travellers occasionally have to cross borders where the border guards have significant powers to seize your electronic devices, and even compel you to unlock them or provide passwords. You have the difficult choice between refusing, and perhaps not getting into the country, or complying, and having sensitive data put at risk.

It is possible to avoid storing confidential data on your device if it’s all in the cloud, but then your browser is logged into (or has stored passwords for) various important systems which have lots of sensitive data, so anyone who has access to your machine has access to that data. And simply deleting all these passwords and cookies is a) a pain, and b) hard to recover from.

What might be very cool is a Firefox Secure Travel addon where you press a “Travelling Now” button and it:

  • Disconnects you from Sync
  • Deletes all cookies for a defined list of domains
  • Deletes all stored passwords for the same defined list of domains

Then when you arrive, you can log back in to Sync and get your passwords back (assuming it doesn’t propagate the deletions!), and log back in to the services.

I guess the border authorities can always ask for your Sync password but there’s a good chance they might not think to do that. A super-paranoid version of the above would also:

  • Generate a random password
  • Submit it securely to a company-run web service
  • On receiving acknowledgement of receipt, change your Sync password to
    the random password

Then, on arrival, you just need to call your IT department (who would ID you, e.g. by voice or in person) to get the random password from them, and you are up and running. In the meantime, your data is genuinely out of your reach. You can unlock your device and tell them any passwords you know, and they won’t get your data.
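The super-paranoid flow might look like this sketch, where `escrow_service` and `sync_account` are hypothetical stand-ins for the company-run web service and the Sync password-change API:

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    """Generate a password you could not reveal even under compulsion."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def travel_lockout(escrow_service, sync_account) -> str:
    """Escrow a random password with IT, then change Sync to it.
    Crucially, only change the password AFTER the escrow service has
    acknowledged receipt, or the account is lost for good."""
    pw = random_password()
    if not escrow_service.store(pw):      # hypothetical company web service
        raise RuntimeError("escrow not acknowledged; aborting")
    sync_account.change_password(pw)      # hypothetical Sync API call
    return pw
```

The ordering is the whole point of the design: the acknowledgement step means there is never a moment where the only copy of the new password is on the device crossing the border.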

Worth doing?

FOSDEM Talk: Video Available

I spoke on Sunday at the FOSDEM conference in the Policy devroom about the Mozilla Root Program, and about the various CA-related incidents of the past 5 years. Here’s the video (48 minutes, WebM):

Given that this only happened two days ago, I should give kudos to the FOSDEM people for their high quality and efficient video processing operation.

Speaking at FOSDEM on the Mozilla Root Program

Like every year for the past ten or more (except for a couple of years when my wife was due to have a baby), I’ll be going to FOSDEM, the premier European grass-roots FLOSS conference. This year, I’m speaking on the Policy and Legal Issues track, with the title “Reflections on Adjusting Trust: Tales of running an open and transparent Certificate Authority Program”. The talk is on Sunday at 12.40pm in the Legal and Policy Issues devroom (H.1301), and I’ll be talking about how we use the Mozilla root program to improve the state of security and encryption on the Internet, and the various CA misdemeanours we have found along the way. Hope to see you there :-)

Note that the Legal and Policy Issues devroom is usually scarily popular; arrive early if you want to get inside.

Security Audit Finds Nothing: News At 11

Secure Open Source is a project, stewarded by Mozilla, which provides manual source code audits for key pieces of open source software. Recently, we had a trusted firm of auditors, Cure53, examine the Dovecot IMAP server software, which runs something like two thirds of all IMAP servers worldwide. (IMAP is the preferred modern protocol for accessing an email store.)

The big news is that they found… nothing. Well, nearly nothing. They managed to scrape up 3 “vulnerabilities” of Low severity.

Cure53 write:

Despite much effort and thoroughly all-encompassing approach, the Cure53 testers only managed to assert the excellent security-standing of Dovecot. More specifically, only three minor security issues have been found in the codebase, thus translating to an exceptionally good outcome for Dovecot, and a true testament to the fact that keeping security promises is at the core of the Dovecot development and operations.

Now, if we didn’t trust our auditors and they came back empty handed, we might suspect them of napping on the job. But we do trust them, and so this sort of result, while seemingly a “failure” or a “waste of money”, is the sort of thing we’d like to see more of! We will know Secure Open Source, and other programs to improve the security of FLOSS code, are having an impact when more and more security audits come back with this sort of result. So well done to the Dovecot maintainers; may they be the first of many.

No Default Passwords

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
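One way the hub could authenticate itself to the device without ever sending the sticker password over the air is a simple HMAC challenge-response. This is only an illustrative sketch, not any real IoT pairing protocol:

```python
import hashlib
import hmac
import secrets

def device_challenge() -> bytes:
    """Device side: issue a fresh random nonce for each pairing attempt,
    so a recorded response can't be replayed later."""
    return secrets.token_bytes(16)

def hub_response(tag_password: bytes, challenge: bytes) -> bytes:
    """Hub side: prove knowledge of the password read off the RFID tag,
    without transmitting the password itself."""
    return hmac.new(tag_password, challenge, hashlib.sha256).digest()

def device_verify(tag_password: bytes, challenge: bytes, response: bytes) -> bool:
    """Device side: check the hub's proof in constant time."""
    expected = hmac.new(tag_password, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because only something that has physically swiped the tag knows the password, the device ends up paired to the user’s real hub with no UI and no typing.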

Security Updates Not Needed

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should take a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

WoSign and StartCom

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn, produced a more comprehensive response document, and later a “final statement”.

One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying.

Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom but with WoSign fingerprints all over the “style”. Up to this point, the focus had been on WoSign; StartCom was only involved because WoSign had bought them and not disclosed it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Introducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within about 10 seconds without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half to be conservative in what you accept doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.
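The real Roughtime wire format is rather more involved, but the chaos-monkey idea can be sketched generically (the message format here is invented for illustration):

```python
import random

def make_response(time_s, sign):
    """Build a well-formed response: the time plus a signature over it."""
    return {"time": time_s, "sig": sign(time_s)}

def chaotic_server(time_s, sign, bogus_fraction=0.01):
    """Serve a deliberately broken answer to a small fraction of requests:
    the time is wrong AND the signature is invalid, so any client that
    actually checks signatures will notice and discard it."""
    resp = make_response(time_s, sign)
    if random.random() < bogus_fraction:
        resp["time"] += 3600             # nonsense time...
        resp["sig"] = resp["sig"][::-1]  # ...under a signature that won't verify
    return resp

def strict_client(resp, verify):
    """Conservative in what it accepts: a bad signature means discard the
    response and ask another server, rather than trusting garbage."""
    if not verify(resp["time"], resp["sig"]):
        raise ValueError("bad signature: discard and query another server")
    return resp["time"]
```

A client that skips the signature check would happily swallow the hour-off answers, which is exactly the bug the deliberate errors are designed to flush out early.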

Something You Know And… Something You Know

The email said:

To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

This is united.com’s idea of two-factor authentication:

[Screenshot: united.com asking two security questions because my device is unknown]

It doesn’t count as proper “Something You Have” if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

Auditing the Trump Campaign

When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

  • Project Name: Donald J. Trump for President
  • Project Website: https://www.donaldjtrump.com/
  • Project Description: Make America great again
  • What is the maintenance status of the project? Look at the polls, we are winning!
  • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.

Mozilla’s Root Store Housekeeping Program Bears Fruit

Just over a year ago, in bug 1145270, we removed the root certificate of e-Guven (Elektronik Bilgi Guvenligi A.S.), a Turkish CA, because their audits were out of date. This is part of a larger program we have to make sure all the roots in our program have current audits and are in other ways properly included.

Now, we find that e-Guven has contrived to issue an X509 v1 certificate to one of their customers.

The latest version of the certificate standard X509 is v3, which has been in use since at least the last millennium. So this is ancient magic and requires spelunking in old, crufty RFCs that don’t use current terminology, but as far as I can understand it, whether a certificate is a CA certificate or an end-entity certificate in X509v1 is down to client convention – there’s no way of saying so in the certificate. In other words, they’ve accidentally issued a CA certificate to one of their customers, much like TurkTrust did. This certificate could itself issue certificates, and they would be trusted in some subset of clients.
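In X509v3, CA status is signalled by the basicConstraints extension; a v1 certificate has nowhere to say it at all. A toy check over an already-parsed certificate (represented as a plain dict, with field names invented for illustration) shows why a lenient client can’t rule out a v1 certificate acting as a CA:

```python
def could_act_as_ca(cert: dict) -> bool:
    """A v3 cert is a CA only if its basicConstraints extension says so.
    A v1 cert has no extensions at all, so a client relying on convention
    alone has no in-certificate way to rule it out as a CA."""
    if cert["version"] == 3:
        bc = cert.get("extensions", {}).get("basicConstraints")
        return bool(bc and bc.get("cA"))
    # v1 (and v2): no extensions exist; CA-ness is down to client convention
    return True
```

This is why issuing a v1 certificate to an ordinary customer is so dangerous: in any client that treats v1 certificates permissively, that customer effectively holds a CA certificate.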

But not Firefox, fortunately, thanks to the hard work of Kathleen Wilson, the CA Certificates module owner. Neither current Firefox nor the current or previous ESR trusts this root any more. If they had, we would have had to go into full misissuance mode. (This is less stressful than it used to be due to the existence of OneCRL, our system for pushing revocations out, but it’s still good to avoid.)

Now, we aren’t going to prevent all misissuance problems by removing old CAs, but there’s still a nice warm feeling when you avoid a problem due to forward-looking preventative action. So well done Kathleen.

An IoT Vision

Mark’s baby daughter keeps waking up in the middle of the night. He thinks it might be because the room is getting too cold. So he goes down to the local electronics shop and buys a cheap generic IoT temperature sensor.

He takes it home and presses a button on his home’s IoT hub, then swipes the thermometer across the top. A 5 cent NFC tag attached to it tells the hub that this is a device in the “temperature sensor” class (USB-style device classing), accessible over Z-wave, and gives its public key, a password the hub can use to authenticate back to the sensor, and a URL to download a JavaScript driver. The hub shows a green light to show that the device has been registered.
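The tag’s pairing record might look something like this; the format and field names are entirely invented for illustration:

```python
import json

def parse_tag(payload: bytes) -> dict:
    """Parse a (hypothetical) pairing record read off the device's NFC tag."""
    record = json.loads(payload)
    required = {"device_class", "protocol", "public_key", "password", "driver_url"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"tag record missing fields: {sorted(missing)}")
    return record

EXAMPLE_TAG = json.dumps({
    "device_class": "temperature-sensor",  # USB-style device class
    "protocol": "z-wave",
    "public_key": "base64-spki-goes-here",  # placeholder, not a real key
    "password": "per-device-secret",        # lets the hub authenticate to the device
    "driver_url": "https://example.com/drivers/temp-sensor.js",
}).encode()
```

Everything the hub needs for a secure, zero-configuration connection fits in one swipe: the class tells it what the device is, the public key secures the channel, the password proves the hub is the one the user chose, and the URL fetches the driver.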

Mark sticks an AAA battery into the sensor and places it on the wall above his baby’s cot. He goes to his computer and brings up his hub’s web interface. It has registered the new device and connected to it securely over the appropriate protocol (the hub speaks Bluetooth LE, wifi and Z-wave). The connection is secure from the start, and requires zero additional configuration. The hub has also downloaded the JS driver and is running it in a sandboxed environment where it can communicate only with the sensor and has access to nothing else. If it were to want to communicate with the outside world, the hub manages the SSL (rather than the device or the driver) so it can log all traffic in cleartext.

Mark views the device’s simple web page (generated by the driver) and sees the room is at 21C. He asks the hub to sample the value every minute and make a chart of the results. The hub knows how to do this for various simple device classes, including temperature sensors.

The next morning, he checks the chart and indeed, at 3am when the baby woke up, the temperature was only 15C. He goes back to the electrical shop and buys an IoT mains passthrough plug and a cheap heater. He registers the plug with the hub as before, then plugs the heater into the passthrough, and the passthrough into a socket in the baby’s room.

Back at the web interface, he gives permission for the plug’s driver to see data from the temperature sensor. However, the default driver for the plug doesn’t have the ability to react to external events. So he downloads an open source one which drives that device class. Anyone can write drivers for a device class because the specs for each class are open. He then tells the new driver to read the temperature sensor, and turn the plug on if the temperature drops below 18C, and off if it rises to 21C. The next night, the baby sleeps through. Success!
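The rule Mark configures is classic hysteresis: two thresholds, so the heater doesn’t rapidly cycle on and off around a single set-point. A driver might implement it like this sketch:

```python
def plug_state(temp_c: float, currently_on: bool,
               low: float = 18.0, high: float = 21.0) -> bool:
    """Hysteresis control: turn the plug on below `low`, off at or above
    `high`, and otherwise leave it in its current state. The dead band
    between the thresholds prevents rapid on/off cycling."""
    if temp_c < low:
        return True
    if temp_c >= high:
        return False
    return currently_on
```

Between 18C and 21C the heater simply keeps doing whatever it was doing, which is the whole point of having two thresholds rather than one.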


The key features of this system are:

  • the automatic registration and instant security, based on a cheap NFC tag which implements an open standard, which allows device makers to make their devices massively easier to use (IoT device return/refund levels are a big problem at the moment);
  • the JS host environment on the hub, which means you can run untrusted code on your network in a sandbox so you can buy IoT devices without the risk of letting random companies snoop on your data, and every device or ecosystem doesn’t need to come with its own controller; and
  • the open standard and device classes, which mean that all devices and all software are hackable.

Wouldn’t it be great if someone built something like this?