Making Good Decisions

Mitchell has been focussed for a while on how Mozilla can make good decisions quickly, rather than getting bogged down, without bypassing the important step of gathering the opinions of a diverse cross-section of interested and knowledgeable members of our community.

In relation to that, I’d like to re-draw everyone’s attention to Productive Discussion, a document which came out of a session at the Summit in Brussels in 2013, and which explains how best to hold a community consultation in a way which invites positive, useful input and avoids the paralysis of assuming that consensus is required before one can move forward.

If there’s a decision you are responsible for making, and you want to make it using best practice within our community, it’s a recommended read.

Awareness

I’ve been reading the excellent “Stuff White People Like”, billed as “The Definitive Guide to the Unique Taste of Millions”. It’s based on a well-read (although now seemingly dormant) satirical blog. Here’s an entry which particularly hit home:

18. Awareness. An interesting fact about white people is that they firmly believe all the world’s problems can be solved through “awareness” – meaning the process of making other people aware of problems, magically causing someone else, like the government, to fix it. This belief allows them to feel that sweet self-satisfaction without actually having to solve anything or face any difficult challenges, because the only challenge of raising awareness is getting the attention of people who are currently unaware.

What makes this even more appealing for white people is that you can raise “awareness” through expensive dinners, parties, marathons, T-shirts, fashion shows, concerts and bracelets. In other words, white people just have to keep doing stuff they like, except that now they can feel better about making a difference.

Raising awareness is also awesome because once you raise awareness to an acceptable though arbitrary level, you can just back off and say, “Bam! Did my part. Now it’s your turn. Fix it.”

So, to summarise: you get all the benefits of helping (self-satisfaction, telling other people) but no need for difficult decisions or the ensuing criticism. (How do you criticise awareness?) Once again, white people find a way to score that sweet double victory.

Popular things to be aware of: the environment, diseases like cancer and AIDS, Africa, poverty, anorexia, homophobia, middle school field hockey/lacrosse teams, drug rehab, and political prisoners.

Top 50 DOS Problems Solved: Can I Have A Single Drive?

Q: I intend to upgrade from MS-DOS v3.3 to either DR DOS 6 or MS-DOS 5, both of which will allow me to have my 40MB hard disk configured as a single drive instead of being partitioned into twin 20MB drives. Am I right in thinking that to do this I will re-format my hard disk, and that I must first back up all the data? I dread doing this since I have almost 30MB on there.

A: First the bad news: yes, you will need to re-format your disk to take advantage of the ability to work with partitions greater than 32MB. However, backing up needn’t be as nasty a job as you think. But your question does beg another[0]: since backing up is going to be such a large job, it sounds as though you haven’t done it before.

… The most basic approach … would be to copy important data to a floppy disk, perhaps with the aid of a file compression utility such as LHA. If the worst happens, you simply reinstall applications from their original disks (or, much better still, back-ups of them) and copy your data back from floppy. [Or, you could use] a dedicated backup utility. My current favourite is Fastbak Plus (£110)[1].

32MB ought to be enough for everyone? How did that limit come about – 512-byte sectors and a 16-bit sector index?

[0] No, it doesn’t – Ed.
[1] £210 in today’s money. For a backup program!
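
As for how that 32MB limit comes about, the arithmetic is simple (a sketch, assuming the 16-bit sector count and 512-byte sectors used by early FAT16):

```python
# A 16-bit sector index can address 2^16 sectors; at 512 bytes per
# sector, that caps a partition at 32MB.
max_sectors = 2 ** 16             # 65,536 addressable sectors
sector_size = 512                 # bytes per sector
print(max_sectors * sector_size)  # 33554432 bytes = 32MB
```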

A Measure of Globalization

A couple of weeks ago, I decided I needed a steel 15cm ruler. This sort of ruler doesn’t have a margin at one end, and so is good for measuring distances away from walls and other obstructions. I found one on Amazon for 88p including delivery and, thinking that was excellent value, clicked “Buy now with 1-Click” and thought no more of it.

Today, after a slightly longer delay than I expected, it arrived. From Shenzhen.

I knew container transport by sea was cheap, but I am amazed that 88p can cover the cost of the ruler, the postage in China, the air freight, a payment to the delivery firm in the UK, and some profit. And, notwithstanding my copy of “Poorly Made in China”, which arrived the same day and which I have not yet read, the quality seems fine…

Booklet Printing Calculator

Ever wanted to print a booklet in software which doesn’t directly support it? You can fake it by printing the pages in exactly the right order, but it’s a pain to work out by hand.

I found a JS booklet page-order calculator on GitHub, enhanced it to support duplex printers, cleaned it up, and it’s now on my website.
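
For the curious, the core calculation is small. Here’s a minimal sketch in Python of the general idea (not the actual JS code the calculator uses): pad the page count to a multiple of four, then take pages alternately from the back and the front of the document.

```python
def booklet_order(num_pages):
    """Return the order in which to print pages for a folded booklet.

    The page count is padded with blanks (None) to a multiple of 4,
    since each sheet holds four pages: two on the front, two on the back.
    """
    padded = -(-num_pages // 4) * 4  # round up to a multiple of 4
    pages = list(range(1, num_pages + 1)) + [None] * (padded - num_pages)

    order = []
    lo, hi = 0, padded - 1
    while lo < hi:
        order += [pages[hi], pages[lo]]          # front of the sheet
        order += [pages[lo + 1], pages[hi - 1]]  # back of the sheet
        lo += 2
        hi -= 2
    return order

print(booklet_order(8))  # [8, 1, 2, 7, 6, 3, 4, 5]
```

A duplex printer can consume that order directly; a single-sided printer needs the front and back pairs split into two separate passes.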

Killing SHA-1 Properly

Currently, Mozilla’s ban on using the old and insecure SHA-1 hash algorithm as part of the construction of digital certificates is implemented via the ban in the CAB Forum Baseline Requirements (BRs), which we require all CAs to adhere to. However, implementing the ban via the BRs is problematic for a number of reasons:

  • It allows the issuance of SHA-1 certs in publicly-trusted hierarchies in those cases where the cert is not within scope of the BRs (e.g. email certs).
  • The scope of the BRs is a matter of debate, and so there are grey areas, as well as areas clearly outside scope, where SHA-1 issuance could happen.
  • Even when the latest version of Firefox stops trusting SHA-1 certs in January, a) that block is overrideable, and b) that doesn’t address risks to older versions.

Therefore, I’ve started a discussion on updating Mozilla’s CA policy to implement a “proper” SHA-1 ban, which we would put into effect first via a CA Communication and later in an updated version of our policy. See mozilla.dev.security.policy if you want to contribute to the discussion.

No Default Passwords

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
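
To make that concrete, here’s a minimal hub-side sketch in Python. The wire protocol, the PAIR message, and all of the names are invented for illustration – the post doesn’t specify them – and it pins a certificate fingerprint rather than a raw public key, for brevity:

```python
import hashlib
import socket
import ssl

def pair_device(host, port, tag_fingerprint, tag_password):
    """Hypothetical pairing flow: the hub reads a certificate
    fingerprint and a pairing password from the device's RFID tag,
    then uses them to connect securely with zero user configuration."""
    # The device's cert is self-signed, so skip CA validation and
    # instead pin the exact certificate the RFID tag told us to expect.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock) as tls:
            cert = tls.getpeercert(binary_form=True)
            if hashlib.sha256(cert).hexdigest() != tag_fingerprint:
                raise ValueError("device cert does not match RFID tag")
            # Authenticate the hub to the device as the user's real hub,
            # using the password from the tag (message format invented).
            tls.sendall(b"PAIR " + tag_password.encode() + b"\n")
            if tls.recv(1024).strip() != b"OK":
                raise ValueError("device rejected the pairing password")
```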

Someone Thought This Was A Good Idea

You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!

[Image: decaf-tablet]

Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:

[Image: battery-low]

And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:

[Image: zoomed]

And how do you define what label the tablet displays? Easy:

[Image: decaf-sd-card]

Seriously, can any reader give me one single advantage this system has over a paper label?

Security Updates Not Needed

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.
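
As a toy illustration of that property (addresses and names invented), the hub’s forwarding policy boils down to a whitelist in which the only peer a device may talk to is the hub itself:

```python
HUB_ADDR = "192.168.7.1"                    # hypothetical hub address
DEVICES = {"192.168.7.20", "192.168.7.21"}  # hypothetical IoT devices

def hub_forwards(src, dst):
    """Toy policy: a device may exchange packets with the hub and only
    the hub, so it can neither reach the Internet directly nor be
    reached from it."""
    if src in DEVICES or dst in DEVICES:
        return src == HUB_ADDR or dst == HUB_ADDR
    return True  # traffic not involving a device is out of scope here
```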

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

WoSign and StartCom

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn, produced a more comprehensive response document, and later a “final statement”.

One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying.
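
To see why back-dating works, note that browsers’ SHA-1 blocks keyed off the certificate’s notBefore date – a date which the issuing CA itself asserts and which relying parties cannot independently verify. A much-simplified sketch:

```python
from datetime import datetime, timezone

# Browsers blocked SHA-1 certs issued on or after 1 January 2016,
# judging the issuance date by the cert's notBefore field.
SHA1_CUTOFF = datetime(2016, 1, 1, tzinfo=timezone.utc)

def sha1_cert_escapes_block(not_before):
    """Return True if a SHA-1 cert slips past the browser block."""
    return not_before < SHA1_CUTOFF

# A cert genuinely issued in 2016 is blocked...
print(sha1_cert_escapes_block(datetime(2016, 6, 1, tzinfo=timezone.utc)))   # False
# ...but one back-dated to December 2015 is not, because the check
# trusts the very CA doing the back-dating.
print(sha1_cert_escapes_block(datetime(2015, 12, 20, tzinfo=timezone.utc))) # True
```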

Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom, but with WoSign fingerprints all over the “style”. Up to this point, the focus had been on WoSign; StartCom was only involved because WoSign had bought them and not disclosed it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

The report proposed a course of action including a year’s distrust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route most likely to achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Off Trial

Six weeks ago, I posted “On Trial”, which explained that I was taking part in a medical trial in Manchester. In the trial, I was trying out some interesting new DNA repair pathway inhibitors which, it was hoped, might have a beneficial effect on my cancer. However, as of ten days ago, my participation has ended. The trial parameters say that participants can continue as long as their cancer shrinks or stays the same. Scans are done every six weeks to determine what change, if any, there has been. As mine had been stable for the five months before starting participation, I was surprised to discover that after six weeks of treatment my liver metastasis had grown by 7%. This level of growth was outside the trial parameters, so they concluded (probably correctly!) that the treatment was not helping me, and that was that.

The Lord has all of this in his hands, and I am confident of his good purposes for me :-)

GPLv2 Combination Exception for the Apache 2 License

CW: heavy open source license geekery ahead.

One unfortunate difficulty with open source licensing is that some lawyers, including the FSF, consider the Apache License 2.0 incompatible with the GPL 2.0, which is to say that you can’t combine Apache 2.0-licensed code with GPL 2.0-licensed code and distribute the result. This is annoying because, when choosing a permissive licence, we want people to use the more modern Apache 2.0 over the older BSD or MIT licenses, because it provides some measure of patent protection. And this incompatibility discourages people from doing that.

This was a concern for Mozilla when determining the correct licensing for Rust, and this is why the standard Rust license is a dual license – the choice of Apache 2.0 or MIT. The idea was that Apache 2.0 would be the normal license, but people could choose MIT if they wanted to combine “Rust license” code with GPL 2.0 code.

However, the LLVM project has now had notable open source attorney Heather Meeker come up with an exception to be added to the Apache 2.0 license to enable GPL 2.0 compatibility. This exception meets a number of important criteria for a legal fix for this problem:

  • It’s an additional permission, so is unlikely to affect the open source-ness of the license;
  • It doesn’t require the organization using it to take a position on the question of whether the two licenses are actually compatible or not;
  • It’s specific to the GPL 2.0, thereby constraining its effects to solving the problem.

Here it is:

---- Exceptions to the Apache 2.0 License: ----

In addition, if you combine or link compiled forms of this Software with software that is licensed under the GPLv2 (“Combined Software”) and if a court of competent jurisdiction determines that the patent provision (Section 3), the indemnity provision (Section 9) or other Section of the License conflicts with the conditions of the GPLv2, you may retroactively and prospectively choose to deem waived or otherwise exclude such Section(s) of the License, but only in their entirety and only with respect to the Combined Software.

---- end ----

It seems very well written to me; I wish it had been around when we were licensing Rust.

Introducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within the nearest 10 seconds or so without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.
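
The mechanism is easy to sketch. Here’s a toy Python version (names invented; per the quote, real servers would produce subtler breakage, such as a bad signature or mis-ordered tags, rather than a flipped byte):

```python
import random

def maybe_bogus_response(real_response, bogus_rate=0.01):
    """Return the real response most of the time, but occasionally a
    deliberately corrupted one, so clients that skip validation see
    obvious nonsense instead of silently-wrong data."""
    if real_response and random.random() < bogus_rate:
        corrupted = bytearray(real_response)
        corrupted[0] ^= 0xFF  # break the message in a detectable way
        return bytes(corrupted)
    return real_response
```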

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half to be conservative in what you accept doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.

Something You Know And… Something You Know

The email said:

To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

This is united.com’s idea of two-factor authentication:

[Image: united.com screenshot asking two security questions because my device is unknown]

It doesn’t count as proper “Something You Have” if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

Auditing the Trump Campaign

When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

  • Project Name: Donald J. Trump for President
  • Project Website: https://www.donaldjtrump.com/
  • Project Description: Make America great again
  • What is the maintenance status of the project? Look at the polls, we are winning!
  • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.