Speaking at FOSDEM on the Mozilla Root Program

Like every year for the past ten or more (except for a couple of years when my wife was due to have a baby), I’ll be going to FOSDEM, the premier European grass-roots FLOSS conference. This year, I’m speaking on the Policy and Legal Issues track, with the title “Reflections on Adjusting Trust: Tales of running an open and transparent Certificate Authority Program”. The talk is on Sunday at 12.40pm in the Legal and Policy Issues devroom (H.1301), and I’ll be talking about how we use the Mozilla root program to improve the state of security and encryption on the Internet, and the various CA misdemeanours we have found along the way. Hope to see you there :-)

Note that the Legal and Policy Issues devroom is usually scarily popular; arrive early if you want to get inside.

Introducing Deliberate Protocol Errors: Langley’s Law

Google have just published the draft spec for a protocol called Roughtime, which allows clients to determine the time to within about 10 seconds without needing an authoritative trusted timeserver. One part of their ecosystem document caught my eye – it’s like a small “chaos monkey” for protocols: their servers intentionally send out a small subset of responses with various forms of protocol error:

A healthy software ecosystem doesn’t arise by specifying how software should behave and then assuming that implementations will do the right thing. Rather we plan on having Roughtime servers return invalid, bogus answers to a small fraction of requests. These bogus answers would contain the wrong time, but would also be invalid in another way. For example, one of the signatures might be incorrect, or the tags in the message might be in the wrong order. Client implementations that don’t implement all the necessary checks would find that they get nonsense answers and, hopefully, that will be sufficient to expose bugs before they turn into a Blackhat talk.

The fascinating thing about this is that it’s a complete reversal of the ancient Postel’s Law regarding internet protocols:

Be conservative in what you send, be liberal in what you accept.

This behaviour instead requires implementations to be conservative in what they accept, otherwise they will get garbage data. And it also involves being, if not liberal, then certainly occasionally non-conforming in what they send.
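To make that concrete, here is a minimal sketch of what a conservative Roughtime-style client might look like. The field names and checks are illustrative stand-ins, not the real Roughtime wire format:

```typescript
// A minimal sketch of a client that is conservative in what it
// accepts. Field names and checks are illustrative, not the real
// Roughtime wire format.
interface Response {
  midpoint: number;     // the server's claimed time, seconds since the epoch
  signatureOk: boolean; // stand-in for a real signature verification
  tags: string[];       // message tags, which must appear in order
}

// Reject anything structurally wrong, however slight -- some such
// responses are deliberate chaff sent to flush out sloppy clients.
function accept(r: Response): boolean {
  const sorted = [...r.tags].sort();
  return r.signatureOk && r.tags.every((t, i) => t === sorted[i]);
}

// Ask several servers and take the median of the verifiable answers,
// so that no single server -- honest or "occasionally evil" -- gets
// to decide the time on its own.
function roughTime(responses: Response[]): number {
  const good = responses
    .filter(accept)
    .sort((a, b) => a.midpoint - b.midpoint);
  if (good.length < Math.ceil(responses.length / 2)) {
    throw new Error("too few verifiable responses to trust");
  }
  return good[Math.floor(good.length / 2)].midpoint;
}
```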

Postel’s law has long been criticised for leading to interoperability issues – see HTML for an example of how accepting anything can be a nightmare, with the WHATWG having to come along and spec things much more tightly later. However, simply reversing the second half, to be conservative in what you accept, doesn’t work well either – see XHTML/XML and the yellow screen of death for an example of a failure to solve the HTML problem that way. This type of change wouldn’t work in many protocols, but the particular design of this one, where you have to ask a number of different servers for their opinion, makes it possible. It will be interesting to see whether reversing Postel leads to more interoperable software. Let’s call it “Langley’s Law”:

Be occasionally evil in what you send, and conservative in what you accept.

Google Safe Browsing Now Blocks “Deceptive Software”

From the Google Online Security blog:

Starting next week, we’ll be expanding Safe Browsing protection against additional kinds of deceptive software: programs disguised as a helpful download that actually make unexpected changes to your computer—for instance, switching your homepage or other browser settings to ones you don’t want.

I posted a comment asking:

How is it determined, and who determines, what software falls into this category and is therefore blocked?

However, this question has not been approved for publication, let alone answered. :-( At Mozilla, we recognise exactly the behaviour this initiative is trying to stop, but without written criteria, transparency and accountability, this could easily devolve into “Chrome now blocks software Google doesn’t like.” Which would be concerning.

Firefox uses the Google Safe Browsing service, but enhancements to it are not necessarily reflected automatically in the APIs we use. So I’m not certain whether Firefox would also end up blocking software Google doesn’t like and, if it did, whether we would get any input into the list.

Someone else asked:

So this will block flash player downloads from https://get.adobe.com/de/flashplayer/ because it unexpectedly changed my default browser to Google Chrome?!

Kudos to Google for at least publishing that comment, but it too hasn’t been answered. Perhaps this change signals a move by Google away from deals which sideload Chrome? That would be most welcome.

Awesome Article on Browsers

James Mickens on top form, on browsers, Web standards and JavaScript:

Automatically inserting semicolons into source code is like mishearing someone over a poor cell-phone connection, and then assuming that each of the dropped words should be replaced with the phrase “your mom.” This is a great way to create excitement in your interpersonal relationships, but it is not a good way to parse code.
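In case the jab at semicolon insertion seems unfair, here is the canonical example of ASI doing the wrong thing (plain JavaScript, equally valid as TypeScript):

```typescript
// ASI inserts a semicolon after the bare `return`, so this function
// returns undefined and the object literal below is silently
// unreachable.
function getAnswer() {
  return
  {
    answer: 42
  }
}

console.log(getAnswer()); // undefined, not { answer: 42 }
```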

Read more.

IE11, Certificates and Privacy

Microsoft recently announced that they were enhancing their “SmartScreen” system to send back to Microsoft every SSL certificate that every IE user encounters. They will use this information to try to detect certificate misissuance on their back-end servers.

They may or may not be successful in doing that, but the implementation raises significant privacy questions.

SmartScreen is a service which submits the full URLs you visit in IE (including query strings) to Microsoft for reputation testing and possible blocking. Microsoft tries to reassure users by saying that this information passes to them over SSL, but that doesn’t help much. It means an attacker with control of the network can’t see where you are browsing from this information – but if they have control of your network, they can see a lot about where you are browsing anyway. And Microsoft has full access to the data.

The link to “our privacy statement” in the original SmartScreen announcement is, rather worryingly, broken. This is the current one, and it tells us that “Each SmartScreen request comes with a unique identifier”. That identifier doesn’t contain any personal information, but it does allow Microsoft, or someone else with a subpoena, to reconstruct an IE user’s browsing history. The privacy policy also says nothing about whether Microsoft might use this information to, for example, find out what’s currently trending on the web. It seems they don’t need to provide a popular analytics service to get that sort of insight.

You might say that if you are already using SmartScreen, then sending the certificates as well doesn’t reveal much more information to Microsoft about your browsing than they already have. I’d say that’s not much comfort – but it’s also not quite true. SmartScreen has a local whitelist of high-traffic sites, so Microsoft doesn’t find out when you visit those. However (I assume), every certificate you encounter is sent to Microsoft, including those for high-traffic sites – as they are the most likely to be victims of misissuance. So Microsoft now know every site your browser visits, not just the less common ones.

By contrast, Firefox’s (and Chrome’s) counterpart to SmartScreen’s original function, Google’s Safe Browsing service, uses a downloaded list of attack sites, so the URLs you visit are not sent to Google or anyone else. And Certificate Transparency, Google’s approach to detecting certificate misissuance after the fact, which is now being standardised at the IETF, also does not violate the privacy of web users, because it does not require the browser to send information to a third-party site. (Mozilla is currently evaluating CT.)
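The privacy difference is structural. Here is a heavily simplified sketch of the downloaded-list approach – the real protocol canonicalises URLs, checks several expressions per URL, and fetches full hashes on a prefix hit – but the shape is right:

```typescript
import { createHash } from "node:crypto";

// Hash prefixes of known attack sites, downloaded and refreshed
// periodically -- the list itself lives on your machine.
const localPrefixes = new Set<string>(["a1b2c3d4" /* ... */]);

function prefixOf(url: string): string {
  return createHash("sha256").update(url).digest("hex").slice(0, 8);
}

// Almost every URL misses the local list and so never leaves the
// machine; only a local hit triggers a follow-up request, and even
// that sends a hash, not the URL itself.
function needsFullHashCheck(url: string): boolean {
  return localPrefixes.has(prefixOf(url));
}
```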

If I were someone who wanted to keep my privacy, I know which solution I’d prefer.

Uses of the Public Suffix List

For several years, Mozilla has maintained the Public Suffix List, a “map” of responsibilities within the DNS, as a service to the greater Internet community. We originally created it for browsers, but it has seen wider use in a surprising variety of places. There is now renewed interest in replacing it with something DNS-based and more robust. As a precursor to that work, I’m collecting a list of all the things the PSL is used for.
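To give a flavour of what that “map” does in practice: given a hostname, the PSL tells you where the registrant-controlled part begins, which is what browsers use to decide, for example, how widely a cookie may be set. A toy matcher, with an invented three-rule list (real PSL processing also handles wildcard and exception rules):

```typescript
// A tiny sample of rules -- the real list has thousands, plus
// wildcard ("*.ck") and exception ("!www.ck") rules that this
// toy matcher ignores.
const publicSuffixes = new Set(["com", "uk", "co.uk", "github.io"]);

// The longest matching rule is the public suffix; the "registrable
// domain" is that suffix plus one more label. Cookies scoped wider
// than the registrable domain should be rejected.
function registrableDomain(host: string): string | null {
  const labels = host.toLowerCase().split(".");
  for (let i = 0; i < labels.length; i++) {
    if (publicSuffixes.has(labels.slice(i).join("."))) {
      return i === 0 ? null : labels.slice(i - 1).join(".");
    }
  }
  return null; // no rule matched
}

console.log(registrableDomain("www.example.co.uk")); // "example.co.uk"
console.log(registrableDomain("mysite.github.io"));  // "mysite.github.io"
console.log(registrableDomain("co.uk"));             // null: a bare suffix
```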

If you are a Mozilla hacker and know of somewhere we are using the PSL that isn’t listed, or if you know of uses of the PSL outside Mozilla, please add them.

Living Flash Free: Part 1

I’m trying to live Flash-free on my desktop. The first thing that didn’t work was Vimeo. I use Aurora, so I set media.gstreamer.enabled to true to turn on the GStreamer backend for the <video> tag. However, video still didn’t work. I tried installing more codec packs, but had no luck. It turns out Ubuntu 13.10 ships both gstreamer-1.0 and gstreamer-0.10, and Firefox only supports gstreamer-0.10. So I had to find the matching codec packs for 0.10 and install those as well. Then Vimeo worked (using H.264).

YouTube seems to work fine using WebM. :-) I do have the YouTube Flash to HTML5 add-on installed, so I don’t need to keep opting back in to the ‘trial’.

Living Flash Free

I’ve just got a new laptop, a ThinkPad X230 running Ubuntu 13.10, and I’m going to try living Flash-free. (I know, I know – James Dean, eat your heart out.) I know there are free software Flash implementations, including one from Mozilla, but I’d like to see how much stuff breaks if I don’t have any of it installed. I’ll blog from time to time about problems I encounter.

IE 11 Ignoring “autocomplete=off”

According to a blog post from Eric Lawrence, IE 11 has an “improved Password manager” which “keeps [the] user in control”. So far so good (here at Mozilla, we’re all in favour of user control :-), but the post then goes on to say that one of the ways it does so is that it “ignores autocomplete=off”.

autocomplete=off is the way that pages give a “hint” to the browser as to what sort of form autocomplete behaviour it should provide. Ignoring it is, as I read the HTML5 spec, permitted, and one can see the superficial attractiveness of this. I’m sure we’ve all come across pages where form fields won’t save even when we want them to.

However, we at Mozilla have never agreed to ignore this attribute across the entire web to “fix” this problem, because what we think would happen then (and what may happen with IE) is that sites implement non-standard workarounds. For some organisations, such as banks, stopping the browser storing authentication credentials is a business requirement – no argument. And if we don’t provide a standards-compliant way of doing it, they’ll use a non-standard one.

For example, they might read the form fields out in an onsubmit() handler, then blank them, and submit the values in differently-named hidden form fields – so when the submit happens, the browser “sees” those fields as empty and doesn’t save anything. This is worse because it means the page requires JavaScript, but also because it’s much harder, or impossible, for particular individuals to disable such workaround mechanisms (e.g. those with accessibility needs which make filling in form fields much harder, and who want to make a different trade-off).
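For illustration, here is a sketch of that kind of workaround (the form and field names are invented):

```typescript
// Illustrative only: copy the password into a differently-named
// hidden field on submit and blank the visible one, so the browser
// "sees" an empty field and stores nothing.
const form = document.querySelector("form#login") as HTMLFormElement;

form.addEventListener("submit", () => {
  const visible = form.querySelector(
    'input[type="password"]'
  ) as HTMLInputElement;
  const hidden = document.createElement("input");
  hidden.type = "hidden";
  hidden.name = "real_password"; // the server reads this field instead
  hidden.value = visible.value;
  form.appendChild(hidden);
  visible.value = ""; // blanked before the browser can offer to save it
});
```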

Ignoring autocomplete=off leads to an arms race, with users as the losers. So I hope Microsoft reconsider this move.

Web Standards Project Shuts; Not Paying Attention?

The WaSP has closed its doors, with a post titled “Our Work Here Is Done”:

Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality.

If by “the web” you mean the desktop web, then things are undeniably much better than they used to be. But what about the mobile web? Opera just shifted to WebKit precisely because the vision of Tim and the Web Standards Project is not a reality there. Did they notice that happening?

They later go on to almost say the opposite:

The job’s not over, but instead of being the work of a small activist group, it’s a job for tens of thousands of developers who care about ensuring that the web remains a free, open, interoperable, and accessible competitor to native apps and closed eco-systems.

When was it not, in the end, up to developers? It has always been up to them – and the WaSP helped them. Seemingly no more.

I also saw this news on the same day that Lawrence Mandel posted a call for help with the numerous problems we are having due to people coding mobile websites which assume “Android”. That needs to change, and you can help. Is Mozilla now the flag bearer for web standards? Former WaSPers, join us and help out :-)

Investment Spam?

Today I received the following (company name changed to protect the guilty):

Hi Robert [sic],

Yoyodyne Partners is a technology buy-out fund managed by an experienced team of investors and entrepreneurs. Through committed capital and a network of strategic resources and investor relationships, Yoyodyne has the capability to build long-term value and growth for acquired businesses, thereby providing attractive exit opportunities for software company founders, shareholders and divestitures.

When it’s convenient for you, I would like to learn more about Mozilla Corporation. Please give me a call or send me an email to set it up.

Thank you,

Fred Flintstone, Partner
Yoyodyne Partners
Tech investors and Entrepreneurs
BigCity|AnotherBigCity|AThirdBigCityButStillNotSanFrancisco
555-123-1234
fflintstone@yoyodyne.com
www.yoyodyne.com

Are investors really so desperate to find companies that they’ve resorted to research-free spam? Five minutes of research would be enough to understand why Mozilla Corporation is not available for sale…

MITM Boxes Reduce Network Security Even More Than They Are Designed To

The Tor project recently discovered that Cyberoam, a manufacturer of Man-In-The-Middle boxes with SSL interception capability, has been embedding the same root certificate in all of its boxes.

Background: SSL is not supposed to be interceptable. The only way to do it is for the intercepting box to be the endpoint of the SSL session and then, after inspecting the traffic, to send the information to the client over a different SSL session. Now that we have explicitly banned trusted CAs from facilitating this after the Trustwave incident, the box should not be able to obtain a trusted-by-default certificate for the target site. Instead, it generates a cert which chains up to the box’s own embedded root. Therefore, any user of a network whose owners wish to use such a box to inspect SSL traffic will have been asked to import whichever root certificate the box uses into their trusted root store, in order to avoid getting security warnings – the very warnings which would otherwise correctly tell you that your communications are being intercepted.

If each box used a different root certificate, this would not be a big problem. (Well, apart from the general issue of having to permit your employer or school to intercept your secure communications.) However, as noted above, Cyberoam uses the same root in all the boxes it manufactures. This root reuse means that organisations which have tried to use Cyberoam boxes to punch a small hole in their security, for ostensibly reasonable purposes, have actually punched a rather larger one.

If you have trusted this root, your communications could potentially be silently intercepted by anyone who owns a Cyberoam box, not just the legitimate owners of the network you were using. This is true whether you are on that network or elsewhere (e.g. if you take your phone or laptop to another location). Furthermore, anyone who purchases a Cyberoam box can try to extract the root (the boxes may have physical security in place, but that’s just a speedbump), and then they don’t even need a Cyberoam box to MITM you.
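To make the failure mode concrete, here is a toy model of the client’s trust decision. Real chain validation checks much more (signatures, validity periods and so on), but the essential problem survives all of it:

```typescript
// Toy model of the trust decision: does the chain end in a root
// that sits in the local trust store?
interface Cert {
  subject: string;
  issuer: string;
}

function chainsToTrustedRoot(chain: Cert[], anchors: Set<string>): boolean {
  const root = chain[chain.length - 1];
  return anchors.has(root.subject);
}

// Imported once, to stop the warnings on the office network...
const anchors = new Set(["Cyberoam Shared Root"]);

// ...but a forged chain minted by ANY Cyberoam box -- or by anyone
// who has extracted the shared private key -- ends in the same root:
const forged: Cert[] = [
  { subject: "www.example-bank.com", issuer: "Cyberoam Shared Root" },
  { subject: "Cyberoam Shared Root", issuer: "Cyberoam Shared Root" },
];

console.log(chainsToTrustedRoot(forged, anchors)); // true, wherever you go
```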

From reading their online docs, this problem seems also to occur with similar devices from SonicWALL (PDF; page 2) and FortiGate. (Thanks to a commenter on the Tor blog for noticing this.) I suspect that many vendors use this insecure configuration by default.

The Cyberoam default root certificate is not trusted by the Mozilla root store – Cyberoam is not a CA – and we do not plan to take action at this time. However, this is another important lesson in the unintended consequences of intentionally breaking the Internet’s security model: messing with the security infrastructure breaks things, in unexpected and risky ways. Don’t do it.