No Default Passwords

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
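To make that concrete, here’s a minimal sketch of the challenge-response step, assuming the hub has already read the public key and password from the tag and connected over SSL pinned to that key. The function names here are hypothetical, not any real hub API:

```python
# Minimal sketch of the pairing challenge-response. Assumes the hub has
# already scanned the tag (getting public_key and password) and opened an
# SSL connection pinned to that public key.
import hmac, hashlib, secrets

def device_challenge() -> bytes:
    return secrets.token_bytes(32)               # device -> hub

def hub_proof(tag_password: bytes, challenge: bytes) -> bytes:
    # The hub proves it knows the tag password without ever sending it.
    return hmac.new(tag_password, challenge, hashlib.sha256).digest()

def device_verify(password: bytes, challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(password, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)  # accept only this hub

pw = b"from-the-tag"
c = device_challenge()
assert device_verify(pw, c, hub_proof(pw, c))
```

Only a hub that has physically scanned the tag can answer the challenge, which is what makes the pairing zero-configuration without being open to anyone in radio range.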

Security Updates Not Needed

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.
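As a sketch of that last point (a hypothetical policy structure, not any real hub’s code), the hub’s egress filter can simply default-deny – a compromised device then has no route to the wider Internet at all:

```python
# Sketch of a default-deny egress policy on the hub: a device may only
# reach destinations its driver has declared; everything else is dropped.
Policy = dict[str, set[tuple[str, int]]]   # device id -> allowed (host, port)

def allow_egress(policy: Policy, device_id: str, host: str, port: int) -> bool:
    return (host, port) in policy.get(device_id, set())

# Example: the thermostat may fetch driver updates, and that's all.
policy: Policy = {"thermostat-01": {("updates.example.com", 443)}}
assert allow_egress(policy, "thermostat-01", "updates.example.com", 443)
assert not allow_egress(policy, "thermostat-01", "krebsonsecurity.com", 80)
```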

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

WoSign and StartCom

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn, produced a more comprehensive response document, and later a “final statement”.

One or two of the issues on the list turned out not to be their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue – the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it – we were pretty sure that WoSign was lying.
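Some context: browsers blocked SHA-1 certificates carrying a notBefore date of 2016 or later, which is precisely what made back-dating attractive. A first-pass check for suspect certificates might look like this sketch, using Python’s cryptography library – though back-dating can’t be proven from the certificate alone; that took correlating serial number ranges, CT log entries and other metadata:

```python
# Flag SHA-1 certs whose claimed notBefore predates the browsers'
# 2016-01-01 cutoff. This only finds candidates; proving back-dating
# needs external evidence (CT logs, serial ranges, etc.).
from datetime import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes

SHA1_CUTOFF = datetime(2016, 1, 1)

def sha1_predating_cutoff(pem: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem)
    return (isinstance(cert.signature_hash_algorithm, hashes.SHA1)
            and cert.not_valid_before < SHA1_CUTOFF)
```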

Around that time, we privately discovered a couple of certificates which had been misissued by the CA StartCom, but with WoSign’s fingerprints all over the “style”. Up to this point, the focus had been on WoSign, and StartCom was only involved because WoSign had bought them and not disclosed it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

The report proposed a course of action including a year’s distrust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which would be most likely to achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Introducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within 10 seconds or so, without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept; otherwise, they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half – be conservative in what you accept – doesn’t work well either: see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.
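For illustration, here’s a toy sketch of what that looks like – nothing like the real Roughtime wire format, just the shape of the idea:

```python
# Toy "chaos monkey for protocols": a server that deliberately corrupts
# ~1% of replies, so clients that skip validation fail loudly and early.
import random
from dataclasses import dataclass

@dataclass
class Reply:
    midpoint: int        # the purported time
    signature: bytes     # signature over the reply

def respond(now: int, sign) -> Reply:
    reply = Reply(midpoint=now, signature=sign(now))
    if random.random() < 0.01:
        reply.midpoint += 3600           # wrong time, and also...
        reply.signature = b"\x00" * 64   # ...an invalid signature
    return reply   # a checking client rejects this and asks another server
```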

Something You Know And… Something You Know

The email said:

To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

This is United’s idea of two-factor authentication:

[Screenshot: the site asking me two security questions, because my device is unknown]

It doesn’t count as proper “Something You Have” if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

Auditing the Trump Campaign

When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

  • Project Name: Donald J. Trump for President
  • Project Website:
  • Project Description: Make America great again
  • What is the maintenance status of the project? Look at the polls, we are winning!
  • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.

Mozilla’s Root Store Housekeeping Program Bears Fruit

Just over a year ago, in bug 1145270, we removed the root certificate of e-Guven (Elektronik Bilgi Guvenligi A.S.), a Turkish CA, because their audits were out of date. This is part of a larger housekeeping program to make sure that all the roots in our program have current audits and are in other ways properly included.

Now, we find that e-Guven has contrived to issue an X.509 v1 certificate to one of their customers.

The latest version of the certificate standard, X.509, is v3, which has been in use since at least the last millennium. So this is ancient magic, and requires spelunking in old, crufty RFCs that don’t use current terminology. But as far as I can understand it, whether a certificate is a CA certificate or an end-entity certificate in X.509 v1 is down to client convention – there’s no way of saying so in the certificate. In other words, they’ve accidentally issued a CA certificate to one of their customers, much like TurkTrust did. This certificate could itself issue certificates, and they would be trusted in some subset of clients.
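To put the same point in code: with a v3 certificate you can check the basicConstraints extension to see whether it may act as a CA, but a v1 certificate has no extensions at all, so there is nothing to check. A sketch, again using Python’s cryptography library:

```python
# Why v1 certs are dangerous: a v3 cert can deny CA status via
# basicConstraints; a v1 cert has no extensions, so it cannot.
from cryptography import x509

def is_clearly_not_a_ca(cert: x509.Certificate) -> bool:
    if cert.version is x509.Version.v1:
        return False   # v1: no way for the cert to deny CA status
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
    except x509.ExtensionNotFound:
        return False   # v3 but no basicConstraints: still ambiguous
    return not bc.value.ca
```

Which clients would accept such a certificate as a CA is therefore down to each client’s convention.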

But not Firefox, fortunately, thanks to the hard work of Kathleen Wilson, the CA Certificates module owner. Neither current Firefox nor the current or previous ESR trust this root any more. If they had, we would have had to go into full misissuance mode. (This is less stressful than it used to be due to the existence of OneCRL, our system for pushing revocations out, but it’s still good to avoid.)

Now, we aren’t going to prevent all misissuance problems by removing old CAs, but there’s still a nice warm feeling when you avoid a problem due to forward-looking preventative action. So well done Kathleen.

An IoT Vision

Mark’s baby daughter keeps waking up in the middle of the night. He thinks it might be because the room is getting too cold. So he goes down to the local electronics shop and buys a cheap generic IoT temperature sensor.

He takes it home and presses a button on his home’s IoT hub, then swipes the thermometer across the top. A 5 cent NFC tag attached to it tells the hub that this is a device in the “temperature sensor” class (USB-style device classing), accessible over Z-Wave, and gives its public key, a password the hub can use to authenticate back to the sensor, and a URL to download a JavaScript driver. The hub shows a green light to show that the device has been registered.
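The post doesn’t pin down an encoding for the tag, but the payload might look something like this (all field names and values are hypothetical):

```python
# Hypothetical contents of the 5-cent tag; the encoding is illustrative.
tag_payload = {
    "device_class": "temperature-sensor",   # USB-style device class
    "transport": "z-wave",                  # how the hub should talk to it
    "public_key": "mB7v...",                # lets the hub connect securely
    "password": "s3cr3t...",                # lets the hub authenticate back
    "driver_url": "https://vendor.example/drivers/temp-sensor.js",
}
```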

Mark sticks a AAA battery into the sensor and places it on the wall above his baby’s cot. He goes to his computer and brings up his hub’s web interface. It has registered the new device and connected to it securely over the appropriate protocol (the hub speaks Bluetooth LE, Wi-Fi and Z-Wave). The connection is secure from the start, and requires zero additional configuration. The hub has also downloaded the JS driver and is running it in a sandboxed environment where it can communicate only with the sensor and has access to nothing else. If it were to want to communicate with the outside world, the hub manages the SSL (rather than the device or the driver), so it can log all traffic in cleartext.
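A sketch of what the sandbox might hand a driver – a handle to its own device, a hub-mediated HTTP client, and nothing else. These names are illustrative, not any real API:

```python
# Hypothetical sketch: the only capabilities a driver receives. It can
# talk to its own device; any web access goes via the hub, which owns
# the TLS session and can therefore log the traffic in cleartext.
class DriverSandbox:
    def __init__(self, device_conn, hub_http):
        self._device = device_conn    # the one device this driver owns
        self._http = hub_http         # hub-mediated, hub-logged HTTP

    def read_device(self) -> bytes:
        return self._device.read()

    def fetch(self, url: str) -> bytes:
        return self._http.get(url)    # hub terminates TLS and logs it
```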

Mark views the device’s simple web page (generated by the driver) and sees the room is at 21C. He asks the hub to sample the value every minute and make a chart of the results. The hub knows how to do this for various simple device classes, including temperature sensors.

The next morning, he checks the chart and indeed, at 3am when the baby woke up, the temperature was only 15C. He goes back to the electrical shop and buys an IoT mains passthrough plug and a cheap heater. He registers the plug with the hub as before, then plugs the heater into the passthrough, and the passthrough into a socket in the baby’s room.

Back at the web interface, he gives permission for the plug’s driver to see data from the temperature sensor. However, the default driver for the plug doesn’t have the ability to react to external events. So he downloads an open source one which drives that device class. Anyone can write drivers for a device class because the specs for each class are open. He then tells the new driver to read the temperature sensor, and turn the plug on if the temperature drops below 18C, and off if it rises to 21C. The next night, the baby sleeps through. Success!
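Mark’s rule is a classic hysteresis loop: two thresholds, so the heater doesn’t rapidly flap on and off around a single set-point. In sketch form:

```python
# Sketch of the driver rule: hysteresis between 18C (on) and 21C (off).
def update_plug(temp_c: float, plug_on: bool) -> bool:
    if temp_c < 18.0:
        return True          # too cold: heater on
    if temp_c >= 21.0:
        return False         # warm enough: heater off
    return plug_on           # in between: keep the current state
```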

The key features of this system are:

  • the automatic registration and instant security, based on a cheap NFC tag which implements an open standard, which allows device makers to make their devices massively easier to use (IoT device return/refund levels are a big problem at the moment);
  • the JS host environment on the hub, which means you can run untrusted code on your network in a sandbox, so you can buy IoT devices without the risk of letting random companies snoop on your data, and no device or ecosystem needs to come with its own controller; and
  • the open standard and device classes, which mean all devices and all software are hackable.

Wouldn’t it be great if someone built something like this?

Anonymity and the Secure Web

Ben Klemens has written an essay criticising Mozilla’s moves towards an HTTPS web. In particular, he is worried about the difficulty of setting up an HTTPS website and the fact that (as he sees it) getting a certificate requires the disclosure of personal information. There were some misunderstandings in his analysis, so I wanted to add a comment to clarify what we are actually planning to do, and how we are going to meet his concerns.

However, he wrote it on Medium. Medium does not have its own login system; it only permits federated login using Twitter or Facebook. Here’s the personal information I would have to give away to Medium (and the powers I would have to give it) in order to comment on his essay about the problems Mozilla are supposedly causing by requiring people to give away personal information:

[Screenshot: the permissions Medium requests via Twitter sign-in]

Don’t like that? That’s OK, I could use Facebook login, if I were willing to give away:

[Screenshot: the personal information Medium requests via Facebook login]

So I’ll have to comment here and hope he sees it. (Anyone who has decided the tradeoffs on Medium are worth it could perhaps post the URL in a comment for me.)

The primary solution to his issues is Let’s Encrypt. With Let’s Encrypt, you will be able to get a cert, which works in 99%+ of browsers anyone uses, without needing to supply any personal information or to pay, and all at the effort of running a single command on the command line. That is, the command line of the machine (or VM) that you have rented from the service provider and to whom you gave your credit card details and make a monthly payment to put up your DIY site. That machine. And the cert will be for the domain name that you pay your registrar a yearly fee for, and to whom you have also provided your personal information. That domain name.

If you have a source of free, no-information-required server hosting and free, no-information-required domain names (as Ben happens to for his Caltech Divinity School example), then it’s reasonable to say that you are a little inconvenienced if your HTTPS certificate is not also free and no-information-required. But most people doing homebrew DIY websites aren’t in that position – they have to rent such things. Once Let’s Encrypt is up and running, the situation with certificates will actually be easier and more anonymous than that with servers or domain names.

“Browsers no longer supporting HTTP” may well never happen, and it’s a long way off if it does. But insofar as the changes we do make are a small infringement on your right to build an insecure website, see them as a civic requirement, like passing a driving test. This is a barrier to someone just getting in a car and driving, but most would suggest it’s reasonable given the wider benefit to society of training those in control of potentially dangerous technology. Given the Great Cannon and similar technologies, which can repurpose accesses to any website as a DDoS tool, there are no websites which “don’t need to be secure”.

HSBC: Bad Security

I would like to use a stronger word than “bad” in the title, but decency forbids.

HSBC has, or used to have, a compulsory 2-factor system for logging in to their online banking. It used a small widget called a Secure Key. This is good. Now, they have rolled out an Android/iOS/Blackberry app alternative. This is also good, on balance.

However, at the same time, they have instituted a system where you can log on and see all your banking information and even take some actions without the key, just using a password. This is bad. Can I opt out, and say “no, I’d like to always use the key, please?” No, it seems I can’t. Compulsory lowered security for me. Even if I don’t use the password, that login mechanism will always be there.

OK, so I go to set a password. Never mind, I think, I’ll pick something long and complicated. But no; the guidance says:

Your password is not case sensitive and must be between 8 and 30 characters. It must include letters and numbers.

So the initial passphrase I picked was both too long, and didn’t include a number. However, the only error it gives is “This data is invalid”. I tried several other variants of my thought-of passphrase, but couldn’t get it to accept it. Painful reverse-engineering showed that the space character is also forbidden. Thank you so much, HSBC.
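Piecing together what the form would and wouldn’t accept, the validator appears to behave something like this sketch – a reconstruction from the error behaviour above, not HSBC’s actual code:

```python
# Reconstruction of HSBC's password rules, as reverse-engineered:
# 8-30 chars, must contain a letter and a digit, no spaces.
import re

def hsbc_would_accept(password: str) -> bool:
    return (8 <= len(password) <= 30
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and " " not in password)      # found only by trial and error
```

The “not case sensitive” part is perhaps the most alarming bit: it suggests the password is case-folded before checking, or worse.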

I finally find a password it’ll accept and click “Continue”. But, no. “Your session is invalidated – please log in again.” It’s taken so long to find a password it’ll accept that it has timed me out.

How to Responsibly Publish a Misissued SSL Certificate

I woke up this morning wanting to write a blog post, then I found that someone else had already written it. Thank you, Andrew.

If you succeed in getting a certificate misissued to you, then that can be a great learning experience for the site, the CA, the CAB Forum, or all three. Testing security is, to my mind, generally a good thing. But publishing the private key turns it from a great learning experience into a browser emergency update situation (at least at the moment, in Firefox, although we are working to make this easier with OneCRL).

Friends don’t publish private keys for certs for friends’ domain names. Don’t be that guy. :-)

An Encounter with Ransomware

An organization which I am associated with (not Mozilla) recently had its network infected with the CryptoWall 3.0 ransomware, and I thought people might be interested in my experience with it.

The vector of infection is unknown but once the software ran, it encrypted most data files (chosen by extension) on the local hard drive and all accessible shares, left little notes everywhere explaining how to get the private key, and deleted itself. The notes were placed in each directory where files were encrypted, as HTML, TXT, PNG and as a URL file which takes you directly to their website.

Their website is accessible as either a Tor hidden service or over plain HTTP – both options are given. Presumably plain HTTP is for ease of use for less technical victims; Tor is for if their DNS registrations get attacked. However, as of today, that hasn’t happened – the site is still accessible either way (although it was down for a while earlier in the week). Access is protected by a CAPTCHA, presumably to prevent people writing automated tools that work against it. It’s even localised into 5 languages.

[Screenshot: the CryptoWall website’s CAPTCHA page]

The price for the private key was US$500. (I wonder if they set that based on GeoIP?) However, as soon as I accessed the custom URL, it started a 7-day clock, after which the price doubled to US$1000. Just like parking tickets, they incentivise you to pay up quickly, because argument and delay will just make it cost more. If you haven’t paid after a month, they delete your secret key and personal page.

While what these thieves do is illegal, immoral and sinful, they do run a very professional operation. The website had the following features:

  • A “decrypt one file” button, which allows them to prove they have the private key and re-establish trust. It is, of course, also protected by a CAPTCHA. (I didn’t investigate to see whether it was also protected by numerical limits.)
  • A “support” button, which allows you to send a message to the thieves in case you are having technical difficulties with payment or decryption.

The organization’s last backup was a point-in-time snapshot from July 2014. “Better backups” had been on the ToDo list for a while, but never made it to the top. After discussion with the organization, we decided that recreating the data would have cost far more in time than the ransom, and so we were going to pay. I tried out the “Decrypt One File” function and it worked, so I had some confidence that they could deliver what they promised.

I created a Bitcoin wallet, and used an exchange to buy exactly the right amount of Bitcoin. (The first exchange I tried had a ‘no ransomware’ policy, so I had to go elsewhere.) However, when I then went to pay, I discovered that there was a 0.0001BTC transaction fee, so I didn’t have enough to pay them the full amount! I was concerned that they had automated validation and might not release the key if the amount was even a tiny bit short. So, I had to go on IRC and talk to friends to blag a tiny fraction of Bitcoin in order to afford the transfer fee.

I made the payment, and pasted the transaction ID into the form on the ransomware site. It registered the ID and set status to “pending”. Ten or twenty minutes later, once the blockchain had moved on, it accepted the transaction and gave me a download link.

While others had suggested that there was no guarantee that we’d actually get the private key, it made sense to me. After all, word gets around – if they don’t provide the keys, people will stop paying. They have a strong incentive to provide good ‘customer’ service.

The download was a ZIP file containing a simple Windows GUI app which was a recursive decryptor, plus text files containing the public key and the private key. The app worked exactly as advertised and, after some time, we were able to decrypt all of the encrypted files. We are now putting in place a better backup solution, and better network security.

A friend who is a Bitcoin expert did do a little “following the money”, although we think it went into a mixer fairly quickly. However, before it did so, it was aggregated into an account with $80,000+ in it, so it seems that this little enterprise is fairly lucrative.

So, 10/10 for customer service, 0/10 for morality.

The last thing I did was send them a little message via the “Support” function of their website, in both English and Russian:

Such are the ways of everyone who is greedy for unjust gain; it takes away the life of its possessors.

Таковы пути всех, кто жаждет преступной добычи; она отнимает жизнь у завладевших ею.

‘The time has come,’ Jesus said. ‘The kingdom of God has come near. Repent and believe the good news!’

– Пришло время, – говорил Он, – Божье Царство уже близко! Покайтесь и верьте в Радостную Весть!

Alice and Bob Are Weird

Suppose Alice and Bob live in a country with 50 states. Alice is currently in state a and Bob is currently in state b. They can communicate with one another and Alice wants to test if she is currently in the same state as Bob. If they are in the same state, Alice should learn that fact and otherwise she should learn nothing else about Bob’s location. Bob should learn nothing about Alice’s location.

They agree on the following scheme:

  • They fix a group G of prime order p and generator g of G

Cryptographic problems. Gotta love ’em.
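The problem statement above is truncated, but for flavour, here’s a sketch of one standard way to build such a private equality test, using ElGamal encryption and its multiplicative homomorphism – my illustration with toy parameters, not necessarily the scheme the puzzle goes on to describe:

```python
# Private equality test sketch: Alice learns whether a == b and nothing
# else; Bob learns nothing. Group: the order-q subgroup of Z_p*.
import secrets

p, q, g = 2039, 1019, 4       # toy parameters: p = 2q + 1, g generates the subgroup

def rand_exp() -> int:
    return secrets.randbelow(q - 1) + 1       # uniform in [1, q-1]

a, b = 7, 7                   # Alice's and Bob's states (1..50)

# Alice: ElGamal keypair, sends Enc(g^a) to Bob
x = rand_exp()                # secret key
h = pow(g, x, p)              # public key
r = rand_exp()
c1, c2 = pow(g, r, p), (pow(g, a, p) * pow(h, r, p)) % p

# Bob: re-randomised encryption of g^(s*(a - b)) for random s
s, t = rand_exp(), rand_exp()
g_minus_b = pow(pow(g, b, p), p - 2, p)       # g^(-b) mod p
d1 = (pow(c1, s, p) * pow(g, t, p)) % p
d2 = (pow((c2 * g_minus_b) % p, s, p) * pow(h, t, p)) % p

# Alice: decrypt; the plaintext is 1 iff a == b
m = (d2 * pow(pow(d1, x, p), p - 2, p)) % p
print("same state" if m == 1 else "different states")
```

If a = b, the decrypted value is 1; otherwise it is a uniformly random element of the group, so Alice learns nothing else about b – and Bob only ever sees a semantically secure ciphertext, so he learns nothing about a.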

Google Concedes Google Code Not Good Enough?

Google recently released an update to End-to-End, their communications security tool. As part of the announcement, they said:

We’re migrating End-To-End to GitHub. We’ve always believed strongly that End-To-End must be an open source project, and we think that using GitHub will allow us to work together even better with the community.

They didn’t specifically say how it was hosted before, but a look at the original announcement tells us it was here – on Google Code. And indeed, when you visit that link now, it says “Project ‘end-to-end’ has moved to another location on the Internet”, and offers a link to the GitHub repo.

Is Google admitting that Google Code just doesn’t cut it any more? It certainly doesn’t have anything like the feature set of GitHub. Will we see it in the next round of Google spring-cleaning in 2015?