A Measure of Globalization

A couple of weeks ago, I decided I needed a steel 15cm ruler. This sort of ruler doesn’t have a margin at one end, and so is good for measuring distances away from walls and other obstructions. I found one on Amazon for 88p including delivery and, thinking that was excellent value, clicked “Buy now with 1-Click” and thought no more of it.

Today, after a slightly longer delay than I expected, it arrived. From Shenzhen.

I knew container transport by sea is cheap, but I am amazed that 88p can cover the cost of the ruler, the postage in China, the air freight, a payment to the delivery firm in the UK, and some profit. And, notwithstanding my copy of “Poorly Made in China” which arrived the same day and which I have not yet read, the quality seems fine…

Booklet Printing Calculator

Ever wanted to print a booklet in software which doesn’t directly support it? You can fake it by printing the pages in exactly the right order, but it’s a pain to work out by hand.

I found a JS booklet page order calculator on GitHub, enhanced it to support duplex printers, cleaned it up, and it’s now on my website.
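The underlying calculation is simple enough to sketch in a few lines of Python. This is a hypothetical reimplementation for illustration, not the actual code from the calculator:

```python
def booklet_order(num_pages):
    """Return the sequence in which pages must be printed, two-up on a
    duplex printer, so that folding the stack in half yields a booklet.

    Pages are padded up to a multiple of 4 (trailing blanks), then each
    sheet gets the outermost remaining pair on its front and the next
    pair in on its back.
    """
    n = -(-num_pages // 4) * 4  # round up to a multiple of 4
    order = []
    for sheet in range(n // 4):
        lo, hi = 2 * sheet, n - 2 * sheet
        order += [hi, lo + 1]      # front side: last-remaining, first-remaining
        order += [lo + 2, hi - 1]  # back side: next-first, next-last
    return order

print(booklet_order(8))  # [8, 1, 2, 7, 6, 3, 4, 5]
```

Print the pages in that order, two per side, and the folded result reads 1 to 8.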

Killing SHA-1 Properly

Currently, Mozilla’s ban on using the old and insecure SHA-1 hash algorithm as part of the construction of digital certificates is implemented via the ban in the CAB Forum Baseline Requirements, which we require all CAs to adhere to. However, implementing the ban via the BRs is problematic for a number of reasons:

  • It allows the issuance of SHA-1 certs in publicly-trusted hierarchies in those cases where the cert is not within scope of the BRs (e.g. email certs).
  • The scope of the BRs is a matter of debate, and so there are grey areas, as well as areas clearly outside scope, where SHA-1 issuance could happen.
  • Even when the latest version of Firefox stops trusting SHA-1 certs in January, a) that block is overrideable, and b) that doesn’t address risks to older versions.

Therefore, I’ve started a discussion on updating Mozilla’s CA policy to implement a “proper” SHA-1 ban, which we would implement via a CA Communication, and
then later in an updated version of our policy. See mozilla.dev.security.policy if you want to contribute to the discussion.

No Default Passwords

One of the big problems with IoT devices is default passwords – here’s the list coded into the malware that attacked Brian Krebs. But without a default password, you have to make each device unique and then give the randomly-generated password to the user, perhaps by putting it on a sticky label. Again, my IoT vision post suggests a better solution. If the device’s public key and a password are in an RFID tag on it, and you just swipe that over your hub, the hub can find and connect securely to the device over SSL, and then authenticate itself to the device (using the password) as the user’s real hub, with zero configuration on the part of the user. And all of this works without the need for any UI or printed label which needs to be localized. Better usability, better security, better for the internet.
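In toy form, that pairing flow looks something like the following. All the names, message shapes, and checks here are invented for illustration; a real implementation would use TLS with key pinning rather than string comparison:

```python
# Toy simulation of zero-configuration pairing: the RFID tag carries the
# device's public key and a per-device secret, and swiping it over the
# hub gives the hub everything it needs to pair securely.

class Device:
    def __init__(self, pubkey, password):
        self.pubkey = pubkey      # also written into the RFID tag
        self.password = password  # also written into the RFID tag
        self.owner_hub = None

    def claim(self, presented_password, hub_id):
        # Only a hub that has read the physical tag knows the secret,
        # so a successful claim proves physical possession of the device.
        if presented_password == self.password:
            self.owner_hub = hub_id
            return True
        return False

def pair(hub_id, device, tag):
    # 1. The hub authenticates the device: the key it presents must
    #    match the key from the tag (standing in for TLS key pinning,
    #    so no CA and no user configuration is needed).
    if device.pubkey != tag["pubkey"]:
        return False
    # 2. The hub authenticates itself to the device with the tag's secret.
    return device.claim(tag["password"], hub_id)

kettle = Device(pubkey="KEY1", password="s3cret")
tag = {"pubkey": "KEY1", "password": "s3cret"}  # swiped over the hub
print(pair("my-hub", kettle, tag))   # True: paired, zero user configuration
```

A hub without the tag's secret gets refused at step 2, and a device presenting the wrong key gets refused at step 1.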

Someone Thought This Was A Good Idea

You know that problem where you want to label a coffee pot, but you just don’t have the right label? Technology to the rescue!


Of course, new technology does come with some disadvantages compared to the old, as well as its many advantages:


And pinch-to-zoom on the picture viewer (because that’s what it uses) does mean you can play some slightly mean tricks on people looking for their caffeine fix:


And how do you define what label the tablet displays? Easy:


Seriously, can any reader give me one single advantage this system has over a paper label?

Security Updates Not Needed

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.
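Hub-mediated access might be sketched like this. The device names, destinations, and policy format are all invented for illustration:

```python
# Toy model of a hub that mediates all IoT traffic: devices are not
# routable from the Internet, and outbound packets are only forwarded
# if they match a per-device allowlist. Compromising a device then
# gains an attacker very little, and new attack patterns can be
# blocked by patching the one hub rather than every device.

ALLOWLIST = {
    "thermostat": {("weather.example.com", 443)},
    "doorbell":   {("push.example.com", 443)},
}

def forward(device, dest_host, dest_port):
    """Return True if the hub should forward this outbound packet."""
    allowed = ALLOWLIST.get(device, set())
    return (dest_host, dest_port) in allowed

print(forward("thermostat", "weather.example.com", 443))  # True
print(forward("thermostat", "victim.example.net", 80))    # False: dropped
```

A device conscripted into a botnet can still try to send attack traffic, but none of it leaves the house.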

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

WoSign and StartCom

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn, produced a more comprehensive response document, and later a “final statement”.

One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying.

Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom but with WoSign fingerprints all over the “style”. Up to this point, the focus had been on WoSign, and StartCom was only involved because WoSign bought them and didn’t disclose it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Off Trial

Six weeks ago, I posted “On Trial”, which explained that I was taking part in a medical trial in Manchester. In the trial, I was trying out some interesting new DNA repair pathway inhibitors which, it was hoped, might have a beneficial effect on my cancer. However, as of ten days ago, my participation has ended. The trial parameters say that participants can continue as long as their cancer shrinks or stays the same. Scans are done every six weeks to determine what change, if any, there has been. As mine had been stable for the five months before starting participation, I was surprised to discover that after six weeks of treatment my liver metastasis had grown by 7%. This level of growth was outside the trial parameters, so they concluded (probably correctly!) the treatment was not helping me and that was that.

The Lord has all of this in his hands, and I am confident of his good purposes for me :-)

GPLv2 Combination Exception for the Apache 2 License

CW: heavy open source license geekery ahead.

One unfortunate difficulty with open source licensing is that some lawyers, including the FSF, consider the Apache License 2.0 incompatible with the GPL 2.0, which is to say that you can’t combine Apache 2.0-licensed code with GPL 2.0-licensed code and distribute the result. This is annoying because when choosing a permissive licence, we want people to use the more modern Apache 2.0 over the older BSD or MIT licenses, because it provides some measure of patent protection. And this incompatibility discourages people from doing that.

This was a concern for Mozilla when determining the correct licensing for Rust, and this is why the standard Rust license is a dual license – the choice of Apache 2.0 or MIT. The idea was that Apache 2.0 would be the normal license, but people could choose MIT if they wanted to combine “Rust license” code with GPL 2.0 code.

However, the LLVM project has now had notable open source attorney Heather Meeker come up with an exception to be added to the Apache 2.0 license to enable GPL 2.0 compatibility. This exception meets a number of important criteria for a legal fix for this problem:

  • It’s an additional permission, so is unlikely to affect the open source-ness of the license;
  • It doesn’t require the organization using it to take a position on the question of whether the two licenses are actually compatible or not;
  • It’s specific to the GPL 2.0, thereby constraining its effects to solving the problem.

Here it is:

---- Exceptions to the Apache 2.0 License: ----

In addition, if you combine or link compiled forms of this Software with software that is licensed under the GPLv2 (“Combined Software”) and if a court of competent jurisdiction determines that the patent provision (Section 3), the indemnity provision (Section 9) or other Section of the License conflicts with the conditions of the GPLv2, you may retroactively and prospectively choose to deem waived or otherwise exclude such Section(s) of the License, but only in their entirety and only with respect to the Combined Software.

---- end ----

It seems very well written to me; I wish it had been around when we were licensing Rust.

Introducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within the nearest 10 seconds or so without the need for an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols, where their server intentionally sends out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.
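In toy form, the mechanism described in that quote might look like this. The error rate, the fake “signature”, and all the names are invented; the real Roughtime wire format is quite different:

```python
import random

# A server that deliberately corrupts a small fraction of its answers,
# and a client that only survives in this ecosystem if it actually
# validates what it receives.

ERROR_RATE = 0.05

def sign(t):
    return f"SIG({t})"  # stand-in for a real cryptographic signature

def serve(true_time, rng=random):
    """Return (time, signature); occasionally deliberately bogus."""
    if rng.random() < ERROR_RATE:
        # Wrong time AND an invalid signature, so a checking client
        # can detect and discard it.
        return (true_time + 9999, "BAD-SIGNATURE")
    return (true_time, sign(true_time))

def careful_client(response):
    t, sig = response
    if sig != sign(t):  # the check a lazy implementation would skip
        raise ValueError("invalid signature, discarding response")
    return t

print(careful_client(serve(1000, rng=random.Random(1))))  # 1000
```

A client that skips the signature check will, sooner or later, cheerfully accept a time that is wildly wrong, and its bugs surface in testing rather than in production.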

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHAT-WG having to come along and spec things much more tightly later. However, simply reversing the second half to be conservative in what you accept doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel will lead to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.

Is Plagiarism A Sin?

In the last year or so there have been several occasions where it has been discovered that some words in books written by Christian authors were not their own words, but yet were not footnoted as being written by someone else. This occurrence is usually referred to as “plagiarism”. The publishers of the books in question have reacted by halting sales of the affected books, sometimes forever, or sometimes until this can be corrected. This both harms the public, who are deprived of the wisdom such books contain, and harms the reputation of the author, who is labelled a plagiarist. Therefore, it is important to be certain that the act the author has committed is, in fact, sinful. If it is, fair enough. But if it is not, both the removal from sale of the book and the loss of reputation are unwarranted and harmful.

Contemporary academic standards certainly see plagiarism as a serious misdemeanour. In a context where work has to be marked for credit, it’s clearly important that the marker knows which of the work is the student’s own, and which is taken from others. In this case, the act is clearly wrong – but one could argue either that it is wrong per se, or that it’s wrong because the student is breaking a promise they made to attribute all their quotations correctly – when the sin would be “not keeping one’s word”, rather than plagiarism.

Don Carson also writes that giving another’s sermon is also a sin. I would not agree with all his reasons but certainly would agree with reason number 3 – “you are not devoting yourself to the study of the Bible to the end that God’s truth captures you, molds you, makes you a man of God and equips you to speak for him”. Preachers should not use the words of others as a way of avoiding engaging in their God-given and weighty task.

However, I think plagiarism is not a sin in itself, and I base my argument on the construction of Scripture. Scripture contains many examples of what today would be called plagiarism – unattributed use of the words of another. Many of these are where words are taken from other places in Scripture, but there are also some where words are taken from outside Scripture. Not all such quotations are unattributed, but many are. Large examples include:

* The dependence of Kings on other books (e.g. 2 Kings 18-20 is basically the same as Isaiah 36-39; 2 Kings 25 is nearly identical to Jeremiah 52)
* The dependence of Chronicles on Kings
* The dependence of Matthew on Mark (e.g. Mark 2:1-12 has strong similarities with Matthew 9:1-8)
* The dependence of Luke on Mark
* The dependence of 2 Peter on Jude (or the other way around, if you prefer)

And that’s before you consider Q (a proposed document also used by Luke and Matthew, so unattributed that scholars argue about its very existence), and all the times the New Testament quotes or alludes to the Old (most of which are unattributed). Paul quotes 3 Greek philosophers in various places; none of the quotes are attributed by name, even though Paul must have known the names. They are in quotation marks in our modern Bibles, but Greek does not have quotation marks. One could go on. Any one of these examples would prove my point.

If it is a sin in all circumstances to take the words of others and pass them off as your own, then the very construction of Scripture as we have it involved its authors doing this sinful act many times. While Scripture describes sins, and was written down by sinners, I don’t believe that God would have used sinful methods in the process of assembling his good and perfect word – because if the existence of something necessarily depends on sin, how can it be described as good and perfect? If God thinks that unattributed quotation is a sin, why would he not have caused the authors to attribute all their quotations, thereby setting us all a good example? If plagiarism is a sin, lack of attribution of quotations is a flaw in Scripture itself.

The idea that copying the work of others, either at all or without attribution, is unreasonable is a relatively recent one in history. Copyright has only been around since the Statute of Anne in 1710, and that was a measure aimed at controlling publishers rather than restricting people’s ability to quote. More recently in history, such things have been thought about under the banner of “intellectual property”, a name which rather begs the question, as it’s not clear at all that such things should be treated in the same manner as physical property. This concept of a particular person “owning” a set of words or ideas was unknown until relatively recently. The fact that these ideas are innovative should certainly give us caution in suggesting that they are reflections of the moral will of God which humans had been unaware of until 300 years ago. Christians have always built on the wisdom God has given those before them, and we should be wary of any man-made laws which restrict that free flow of ideas forward in time.

Nevertheless, the law is the law – is plagiarism wrong because it’s a breach of copyright law as it stands today, and Christians are called to obey the law (Romans 13)? The answer is that it depends on the context and the level of copying. Copyright law has exceptions to try and balance its view that the author should have control of their work with what it sees as the legitimate rights and desires of the public. But the way copyright law works in the UK is that the law doesn’t provide actual affirmation that certain exempt acts are OK, it instead provides for defences in court. This means that the only way to know for certain whether a particular use is an infringement or not is to ask a judge. This is relevant because of the legal doctrine of de minimis – below a certain level, a court would undoubtedly refuse to waste its time with a copyright infringement.

Nevertheless, it is reasonable to ask if this behaviour seems to be covered by an exception. Exceptions unfortunately vary from country to country; in the UK, there is an exception for “criticism, review, quotation and news reporting”, which is certainly designed to permit quotation of one book in another. It does require “sufficient acknowledgement (unless this would be impossible for reasons of practicality or otherwise)” (section 30). It could certainly be argued that if you took notes 20 years ago and neglected to record the source, it is now practically impossible to acknowledge it. One might consider prosecution if attribution were intentionally left off; would a prosecutor really do so if it were done unintentionally?

Earlier, I noted that in some contexts, plagiarism can be sinful because it involves breaking a promise. Can that be the case in commercial publishing? There are two possible promises to consider – that of the author to his publisher, and that of the author (and publisher) to the readership.

Let us consider the author/publisher relationship first. Having not yet authored my first best-seller I am not familiar with the contracts that authors draw up with publishers. These may well contain a clause saying the author will attribute all quotations, or perhaps make a good faith effort to do so. However, someone’s culpability for breaking a promise depends significantly on intent and circumstances. If I promise my wife to be home by a certain time and my train is late (and I write this while waiting for a train after missing a connection due to a late incoming train), I would suggest only an unreasonable wife would take me to task for this. If an author deliberately plagiarises others when having promised not to do so, that is a clear case of a broken promise. If they do so accidentally, is pulping all copies of their book a proportionate response?

The second situation to consider is the possibility that an author makes an implicit promise to his readership. I think this argument is stronger in an academic work where the footnotes average a third of each page, than in a non-academic work which has 20 endnotes in total. There are different reader expectations in each case. But how normative are reader expectations? I expect books I buy to be written in good English, theologically sound, thought-provoking and enlightening. These expectations are, sadly, often not met, but I don’t expect the publisher to pulp the book in response to my complaint! To add to that, one is on shaky ground construing promises where no explicit promise has been made. Lastly, the point about what one does if a promise is broken accidentally (as opposed to wilfully) still stands.

Plagiarism may be problematic and unwise in certain circumstances. For example, it makes it harder to trace the history of an idea back to its source, which is often important in avoiding groupthink and validating “what everyone knows”. But we must avoid the genetic fallacy – the worthiness of an idea or thought is not connected to whose idea or thought it was. If a book explains the Trinity well, it does not suddenly do so less well if it’s discovered that some of the words were not written by the author named on the cover.

So I would suggest that the idea that plagiarism is always and everywhere wrong is a recent innovation and not a reflection of the moral will of God. Intentionally breaking one’s explicit promises is sinful; plagiarism itself alone is not.

A Cycle of Fear and Irrationality

Scared people act irrationally.

I’ve been pulled over a couple of times in my life, to receive a talking-to from a traffic policeman about a piece of dubious (although not dangerous) driving. I’m not afraid of the police, though. Some people are. And when people are afraid, they do irrational and unwise things – like running away from what they fear.

A man commits a minor traffic infraction, and runs from the police. 7 police break down the door of his house, enter with guns drawn, tase him, and pepper-spray his 84-year-old mother, before pinning her to the ground and arresting her as she cried “Help me, Jesus”. What sort of country does this kind of massive overreaction happen in? One guess. In the UK, the registered keeper of the vehicle would probably have got a £50 fine in the post two weeks later. Do our roads contain a significantly higher incidence of dangerous driving?

Scared people act irrationally. Why are people scared of the police? Because of incidents like this. Why does this sort of thing happen? Because people act irrationally and the police see it as a provocation. This is a cycle that isn’t going to be easily broken. But the burden of breaking it lies with the police.

Last year in the US, the police killed around 1146 people. In an average year in the entirety of the UK (population: 1/5th of the US), the police fire their guns fewer than 10 times in total. Are US suspects really so much more dangerous than UK ones?

Something You Know And… Something You Know

The email said:

To better protect your United MileagePlus® account, later this week, we’ll no longer allow the use of PINs and implement two-factor authentication.

This is united.com’s idea of two-factor authentication:

united.com screenshot asking two security questions because my device is unknown

It doesn’t count as proper “Something You Have” if you can bootstrap any new device into “Something You Have” with some more “Something You Know”.

Auditing the Trump Campaign

When we opened our web form to allow people to make suggestions for open source projects that might benefit from a Secure Open Source audit, some joker submitted an entry as follows:

  • Project Name: Donald J. Trump for President
  • Project Website: https://www.donaldjtrump.com/
  • Project Description: Make America great again
  • What is the maintenance status of the project? Look at the polls, we are winning!
  • Has the project ever been audited before? Its under audit all the time, every year I get audited. Isn’t that unfair? My business friends never get audited.

Ha, ha. But it turns out it might have been a good idea to take the submission more seriously…

If you know of an open source project (as opposed to a presidential campaign) which meets our criteria and might benefit from a security audit, let us know.

On Trial

As many readers of this blog will know, I have cancer. I’ve had many operations over the last fifteen years, but a few years ago we decided that the spread was now wide enough that further surgery was not very pointful; we should instead wait for particular lesions to start causing problems, and only then treat them. (I have metastases in my lungs, liver, remaining kidney, leg, pleura and other places.)

Historically, chemotherapy hasn’t been an option for me. Broad spectrum chemotherapies work by killing anything growing fast; but my rather unusual cancer doesn’t grow fast (which is why I’ve lived as long as I have so far) and so they would kill me as quickly as they would kill it. And there are no targeted drugs for Adenoid Cystic Carcinoma, the rare salivary gland cancer I have.

However, recently my oncologist referred me to The Christie hospital in Manchester, which is doing some interesting research on cancer genetics. With them, I’m trying a few things, but the most immediate is that yesterday I entered a Phase 1 trial called AToM, which is trialling a couple of drugs in combination which may be able to help me.

The two drugs are an existing drug called olaparib, and a new one known only as AZD0156. Each of these drugs inhibits a different one of the seven or so mechanisms cells use to repair DNA after it’s been damaged. (Olaparib inhibits the PARP pathway; AZD0156 the ATM pathway.) Cells which realise they can’t repair themselves commit “cell suicide” (apoptosis). The theory is that these repair mechanisms are shakier in cancer cells than normal cells, and so cancer cells should be disproportionately affected (and so commit suicide more) if the mechanisms are inhibited.

As this is a Phase 1 trial, the goal is more about making sure the drug doesn’t kill people than about whether it works well, although the doses now being used are in the clinical range, and another patient with my cancer has seen some improvement. The trial document listed all sorts of possible side-effects, but the doctors say other patients are tolerating the combination well. Only experience will tell how it affects me. I’ll be on the drugs as long as I am seeing benefit (defined as “my cancer is not growing”). And, of course, hopefully there will be benefit to people in the future when and if this drug is approved for use.

In practical terms, the first three weeks of the trial are quite intensive in terms of the amount of hospital visits required (and I live 2 hours drive from Manchester), and the following six weeks moderately intensive, so I may be less responsive to email than normal. I also won’t be doing any international travel.