The Self-Delusion of Evil

People who demonstrate evil see the world as they want to see it rather than how it actually is. To maintain their version of reality, they must scapegoat others and project their own faults onto them. They must attack any and all who jeopardize their self image. All of this means that those who demonstrate evil are entirely incapable of true empathy and can be utterly destructive in their relationship with others in the name of self-preservation.

Christie Koehler (summarising part of the message of People of the Lie, by M. Scott Peck)

Accepting Zimbra Meeting Invitations

I use Google Calendar, and am very happy with it (my wife and I can share our calendars with each other, which is helpful). For mail I use Thunderbird, with the excellent “Google Calendar Tab” extension. All my Mozilla mail forwards to my own mailserver, and is not stored at Mozilla.

I regularly get meeting invites from Mozilla employees, which are sent out using Mozilla’s Zimbra installation. They come as a plain text summary plus an attached .ics file. How do I make the meeting owner happy by indicating that I am attending?

There’s no link in the email to click to say “I’m coming”. I can’t log into Zimbra, find the message and click “Attending”, because copies of my email are not stored at Mozilla. I could change that, but it seems like a sledgehammer to crack a nut, and anyway, it would be a pain to have to log in and find the mail. Also, surely it’s a regular occurrence that people not using Mozilla’s Zimbra get invited to Mozilla meetings – what do they do?

I don’t want to switch to using Lightning, and I’m not sure it would solve the problem even if I did.

Is this even possible? Does anyone know?

As a bonus, it would be awesome if I could double-click on the .ics attachment and have it passed to Google Calendar and added to my calendar. Do we yet have the technology for that sort of thing?
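
In the absence of a proper integration, here is a rough sketch (in Python, using google-api-python-client) of what handing an event to Google Calendar could look like. The already-authorised service object and the add_to_google_calendar helper are my own assumptions for illustration, not part of any existing tool:

    # Hypothetical sketch: push a parsed event into Google Calendar.
    # Assumes "service" is an already-authorised Calendar API client built
    # with google-api-python-client; field names follow the v3 event resource.
    def add_to_google_calendar(service, summary, start, end):
        event = {
            "summary": summary,
            "start": {"dateTime": start.isoformat()},  # timezone-aware datetimes
            "end": {"dateTime": end.isoformat()},
        }
        return service.events().insert(calendarId="primary", body=event).execute()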

It’s even worse when receiving invites from Exchange, which does happen occasionally when I’m on a call run by Microsoft people. That _just_ comes as an iCalendar file, and the date and time are not in the plain text, so I have to base64-decode it manually and then read the source and do the timezone maths to work out when the meeting is! I think Lightning handles these better, but as I said, I don’t want to use Lightning…
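
For reference, here is a minimal Python sketch of the “base64-decode and do the timezone maths” step. It assumes the attachment decodes to an iCalendar blob with a UTC DTSTART; invites that rely on VTIMEZONE definitions would need more handling:

    # Rough sketch: pull the start time out of a base64-encoded iCalendar
    # attachment and convert it to local time. Assumes a UTC DTSTART.
    import base64
    import re
    from datetime import datetime, timezone

    def meeting_start(encoded_ics):
        ics = base64.b64decode(encoded_ics).decode("utf-8", errors="replace")
        match = re.search(r"^DTSTART[^:]*:(\d{8}T\d{6})Z?", ics, re.MULTILINE)
        if not match:
            raise ValueError("no DTSTART found in iCalendar data")
        start = datetime.strptime(match.group(1), "%Y%m%dT%H%M%S")
        return start.replace(tzinfo=timezone.utc).astimezone()  # local time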

Bugzilla API 1.1 Released

I am proud to announce the release of version 1.1 of the Bugzilla REST API. This release has performance improvements to reduce the load on Bugzilla, and other things helpful to Bugzilla admins.

I notice a lot of clients are still using BzAPI 0.9; that version will go away in 2 weeks, on 19th March. There is a limit to the number of old versions we can support on the server, particularly as the older ones put a larger load on Bugzilla. Please use either the /1.1 or the /latest endpoint.
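
For most clients the change is just the version segment of the URL. Here is a minimal sketch in Python, assuming the usual api-dev.bugzilla.mozilla.org host (adjust for your own installation) and using bug 35 purely as an example:

    # Sketch of a BzAPI request against the /latest endpoint.
    import requests

    BASE = "https://api-dev.bugzilla.mozilla.org/latest"  # or .../1.1

    bug = requests.get(BASE + "/bug/35").json()
    print(bug.get("summary"), bug.get("status"))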

File bugs | Feedback and discussion

The Pregnancy Predictor

Does it really matter that companies, both online and in real life, profile you based on your purchasing and surfing habits? After all, it means you get ads and offers more targeted to you, and that can only be a good thing, right?

An angry man went into a Target outside of Minneapolis, demanding to talk to a manager:

“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.

On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

Footnote: Target’s revenues have grown from $44 billion in 2002 to $67 billion in 2010. Company president Gregg Steinhafel has boasted to investors about the company’s “heightened focus on items and categories that appeal to specific guest segments such as mom and baby.”

Official MITM Mode in Firefox?

I had a mad idea last week, which I shared with the NSS team. The fact is that some companies want to monitor everything going into and out of their network. My view is that, as it’s their network, they have a legal right to do so, and it’s OK with me morally too as long as everyone using the network is aware of it.

However, the current SSL trust model makes this MITMing of all connections very difficult (which is a good thing, in many ways). Companies such as BlueCoat sell boxes which will MITM SSL connections and log the data, but browsers will complain that the auto-generated certs presented are not trusted. Companies are supposed to deploy their own root to all endpoints – but this is a massive administrative hassle, particularly for mobile devices. As we have found out anew recently, this creates an incentive for trusted CAs to sell trusted intermediate certificates to these big companies. However, such certificates could potentially be abused to silently MITM anyone.

So my mad idea was that Firefox should have one cert in the root store for which the private key was published. However, when an SSL connection occurred which chained up to that root, the browser would bring up an irremovable red infobar which said: “Your connection is not private – all data transferred is being monitored by X”, where X was the O field from the intermediate cert being used. (We would require the use of exactly one intermediate.) If the O field was empty, it would say “by Unknown Attackers”, or something equally scary.

This week I found Phillip Hallam-Baker of Comodo proposing something very similar on the “therightkey” mailing list:

What I find wrong with the MITM proxies is that they offer a
completely transparent mechanism. The user is not notified that they
are being logged. I think that is a broken approach because the whole
point of accountability controls is that people behave differently
when they know they are being watched.

I don’t mean just changing the color of the address bar either. I
would want to see something like the following:

0) The intercept capability is turned on in the browser, this would be
done using a separate tool and lock the browser to a specific
intercept cert root.

1) User attempts to connect to https://www.example.com
2) Browser throws up splash screen for 5secs stating ‘Your connection
has been intercepted’
3) Business as usual.

The splash screen would appear once per session with a new host and
reset periodically.

It should show the interception cert being used as well.

Phil’s point 0 rather defeats the object – if you had to reconfigure the browser, then companies would just add their own root. But if it were built in by default, his point 0 would not be necessary. He is right that you’d need a splash screen or confirmation step – we can’t send initial data or cookies or anything until we know the user is aware they are being MITMed and has given permission to continue.
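
To make the mechanism concrete, here is a small sketch (Python, using the cryptography package purely for illustration) of the check the browser would perform: if the chain ends at the special published root, pull the O field from the intermediate and build the banner text. The fingerprint constant is of course a placeholder:

    # Hypothetical sketch of the "monitored connection" banner logic.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.x509.oid import NameOID

    PUBLISHED_MITM_ROOT_SHA256 = bytes.fromhex("00" * 32)  # placeholder fingerprint

    def mitm_banner(chain_pem):
        """chain_pem: PEM certs ordered [leaf, intermediate, root].
        Returns the warning string, or None for a normal chain."""
        certs = [x509.load_pem_x509_certificate(pem) for pem in chain_pem]
        root, intermediate = certs[-1], certs[-2]
        if root.fingerprint(hashes.SHA256()) != PUBLISHED_MITM_ROOT_SHA256:
            return None  # not the published root; no banner needed
        org = intermediate.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
        who = org[0].value if org else "Unknown Attackers"
        return ("Your connection is not private - all data transferred "
                "is being monitored by " + who)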

What do people think?

Marketing

Although most open source developers would probably hate to admit it, marketing works. A good marketing campaign can create buzz around an open source product, even to the point where hardheaded coders find themselves having vaguely positive thoughts about the software for reasons they can’t quite put their finger on. It is not my place here to dissect the arms-race dynamics of marketing in general. Any corporation involved in free software will eventually find itself considering how to market themselves, the software, or their relationship to the software.

— Karl Fogel, Producing Open Source Software

Establishing Trust Online

No, this post is not about certificates :-))

We want to make Mozilla as open a project as possible, which means that ideally there would be no parts of what we do which were closed to input from particular sections of the community. Question: how does Mozilla acquire sufficient trust in a potential community member that we could let them work in sensitive areas? Sensitive areas might include ones where they were working with confidential data belonging to users or employees, or working with partners under NDA, either temporary or permanent. We would not want someone untrustworthy in such a position.

Here is a (probably incomplete) list of ways to establish trust between a truster and a trust-ee:

  • A) Recommendation from 3rd party already trusted by truster
  • B) Trust-ee putting something at risk (deposit)
  • C) Legal contract with penalties
  • D) Demonstration of bona fides (e.g. by being faithful in small things)
  • E) Gut instinct
  • F) Trust-ee revealing verified identity information
  • G) Default to trust; remove trust if trust broken

When Mozilla employs someone, we have sufficient trust in them because of B) (their job is the thing at risk if they violate trust), C), F) and perhaps a little of A). How do we go about establishing similar trust with someone we don’t employ?

Here are some comments on each:

  • A) doesn’t scale well to a globally-distributed organization, where we regularly get new people who know no Mozillians in real life.
  • B) This is a difficult thing to ask of new community members. What options are there? Money? Something else?
  • C) IT went for this one, but it might be too heavyweight for some. (Of course, it might be required by law in some cases.)
  • D) This is how things work normally; we are looking for a way to speed this process up.
  • E) This works right up until it doesn’t…
  • F) We could investigate this; obtaining such identity proof might involve a time and/or money cost for the contributor.
  • G) Possible in some circumstances, but not the difficult ones. Perhaps involves an overly-rosy view of human nature.

Thoughts and further comments?

Gerv

Mozilla Projects and GPLed Code

I had two emails about this yesterday, so I thought I’d stick a post on Planet to clarify.

The short version: GPLed code may not be included in software that Mozilla ships.

The long version:

Three major goals of the Mozilla licensing policy are:

  1. legal simplicity for downstream users;
  2. the ability for as many people as possible to use our code in their products; and
  3. striking a balance between maintaining copyleft for our code, and the right of those downstream users to combine our code with proprietary code.

The practical result of this is that we act to preserve the right of anyone taking all or part of a codebase from us to use it under the terms of the MPL (currently mostly version 1.1, soon to be version 2), or in projects under the LGPL or the GPL, at their option. That means that all the constituent parts have to be under either the MPL 1.1 tri-license, or the MPL 2 without the incompatibility clause, or a simpler subset of those terms, such as Apache, BSD or MIT.

If any part was solely under the GPL, then people wanting to use MPL terms for the entire application could not do so – only GPL terms would be available, unless the GPL-only part was removed. It would also no longer be possible to release binary versions of e.g. Firefox under the MPL, as Mozilla does today.

If any part was solely under the LGPL, then people would not be able to use MPL terms for that part. This is possibly less of a problem, particularly if the software is a clearly-defined and self-contained library. There is some talk of changing the policy to permit LGPLed libraries to be included, but it’s at an early stage and no decision has been made yet. We would need to do some legal work to find out what the additional compliance requirements were on us and on users of our code, and whether it was possible to clearly identify what was part of the LGPLed library and what was not (both in general and in any particular case).

If an author of some code has placed it under the GPL, then that means that they want it to be used only in GPLed projects (and Mozilla projects are not GPLed projects). We need to respect that licensing decision that the author has made. There is a load of code out there which is unavailable for use by us at Mozilla. The fact that you can read the source, or use it in other projects doesn’t change that.

The release of the new MPL 2 does not change any of this.

So, if you want to use some code which is under the GPL in a Mozilla project codebase, you have a few options:

  • You can ask the licensing team to contact the author or authors of the code and ask them to make it available under the MPL or another licence which fits with our licensing policy. Software authors are often receptive to polite requests of this type. We have done this before for large pieces of code, e.g. cairo.
  • You can find another library implementing the same functionality. For many standard libraries, there is a GPL version and a BSD version.
  • You can rewrite the code (just as you would have had to do if you hadn’t found the library or if it had been proprietary or otherwise not open source).

Note also that I talk about “code that Mozilla ships”. We have a small number of internal tools, such as build tools, which were sourced from elsewhere and are GPLed. So finding GPLed code in a Mozilla repo is not necessarily a violation of this policy. However, “software that we ship” includes anything we create ourselves as an open source project, even if it’s primarily for our own internal use (e.g. the AMO codebase). And this policy applies to all code being created under the auspices of the Mozilla project, whether the code is stored in a Mozilla-hosted repo or somewhere else, like GitHub.

Approaches to Malware in Software Ecosystems

Today, any software ecosystem, whether it’s software for an OS, addons for a browser, or apps for a phone, has to consider the possibility of malware.

If one wants to deal with the problem of malware, I see the following solutions:

  1. Have a single point of software distribution for your platform that you control, e.g. the Apple App Store for iOS. This does not entirely eliminate the possibility of malware, but does make it less likely, and does enable you to take quick action when it’s discovered. Depending on the policies and entry requirements of such a distribution point, of course, this may lead to unacceptable restrictions on user or developer freedom.
  2. Have software which attempts to detect and stop malware, e.g. virus scanners. This puts you into an arms race with the malware authors, who come up with amazing things like self-modifying polymorphic code. And they will write code to disable or work around your code. There are a lot more of them than you, and they can make a lot of money if they succeed.
  3. Have a reputation system, e.g. by requiring all code to be signed by the developer or a reviewer, and have a blacklist of “bad developers” or bad/stolen signing keys (a sketch of this check follows the list). This gives developers a key management problem, and also potentially the expense and hassle of obtaining an identifying certificate.
  4. Rely on your users to be smart, and hope that people don’t fall for the enticements to download the malware. This approach is the one taken by Ubuntu – even though anyone could make a .deb of malware and start to promote it via spam or compromised websites, it’s very rare. I suggest that this is due to the smaller and better-informed market which uses Linux, and perhaps a diversity of processors meaning that compiled code doesn’t run everywhere.
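
To illustrate option 3, here is a minimal Python sketch of the install-time check, with a hypothetical blacklist of revoked signer-key fingerprints; a real system would verify a full certificate chain rather than a bare public key:

    # Sketch of approach 3: verify the package signature and reject
    # blacklisted signing keys. Uses the "cryptography" package for RSA.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    REVOKED_SIGNERS = set()  # hypothetical SHA-256 fingerprints of bad/stolen keys

    def may_install(package, signature, signer_key_der):
        if hashlib.sha256(signer_key_der).hexdigest() in REVOKED_SIGNERS:
            return False  # blacklisted developer or stolen key
        public_key = serialization.load_der_public_key(signer_key_der)
        try:
            public_key.verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False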

Unless I’ve missed one (tell me :-), any other solution will be a hybrid of components of the above. For example, on Android, there is by default a single point of software distribution for a given Android device (the Android Market or another vendor’s market), but you can check a preference to allow the installation of non-Market applications and, when you do, it’s up to you to avoid installing something malicious. (Let’s leave CarrierIQ out of the discussion for now!) So that’s a hybrid of 1 and 4.

Question for discussion: which solution is best for Firefox add-ons?

Currently, although there is a website for downloading add-ons, we allow non-AMO add-ons with just an “are you sure” prompt to guard against nagware. We do have the capability for code signing, but no-one uses it, and no-one notices whether anyone is using it, because there are no significant penalties for not using it. So it seems to me that we are effectively using solution 4.

Voting Resolves Very Little

The hardest thing about voting is determining when to do it. In general, taking a vote should be very rare—a last resort for when all other options have failed. Don’t think of voting as a great way to resolve debates. It isn’t. It ends discussion, and thereby ends creative thinking about the problem. As long as discussion continues, there is the possibility that someone will come up with a new solution everyone likes. This happens surprisingly often: a lively debate can produce a new way of thinking about the problem, and lead to a proposal that eventually satisfies everyone. Even when no new proposal arises, it’s still usually better to broker a compromise than to hold a vote. After a compromise, everyone is a little bit unhappy, whereas after a vote, some people are unhappy while others are happy. From a political standpoint, the former situation is preferable: at least each person can feel he extracted a price for his unhappiness. He may be dissatisfied, but so is everyone else.

— Karl Fogel, Producing Open Source Software

Distributed Working: How To Make It Better

Over the summer, an embedded anthropologist called Claire Rudolph, from the University of California at Berkeley, attended many Mozilla meetings and interviewed 15 Mozilla employees and 2 volunteers. She and her supervisor then did a 30-minute presentation of their work, which includes suggestions for making distributed work more effective and inclusive. (Thanks to Atul Varma for sorting out the publishing of this video.) Recommended viewing for anyone who is remote or who works with remoties (i.e. everyone).

API Prefixing: Making a Data-Driven Decision – Help Wanted

Background: API prefixing is when we give new DOM functions a name beginning with “moz” to show they are experimental, e.g. mozDrawText on <canvas>. There is some discussion as to whether this is actually a helpful thing to do. We want to get some data to help us decide.

What we need is a list of the finished APIs we’ve added in the past (say) 3 years, the date (A) they went in prefixed, the date (C) they were unprefixed, and then (and this is the tricky part) the date (B) we could have unprefixed them if we’d had perfect foresight – i.e. the date the last incompatible change went in.

The question then is: is B usually close to A, close to C, or does it range about from API to API?
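
To make the comparison concrete, here is a tiny sketch of how the collected dates could be summarised; the API name and dates below are made up purely for illustration:

    # Sketch: for each API, work out how far through the prefixed period
    # (A -> C) the last incompatible change (B) landed.
    from datetime import date

    # name: (A = prefixed, B = last incompatible change, C = unprefixed)
    apis = {
        "mozExampleAPI": (date(2009, 3, 1), date(2009, 6, 1), date(2011, 9, 1)),
    }

    for name, (a, b, c) in apis.items():
        fraction = (b - a) / (c - a)  # 0% = close to A, 100% = close to C
        print(name, "last breaking change at", format(fraction, ".0%"), "of the prefixed period")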

We’d love someone to gather data on this, to help us make a decision on whether API prefixing is a good idea. To do this work, you just need to be able to use Bugzilla, read English, and understand the concepts in this message. If you have some time, drop me an email and I’ll help you get started.

Oops…

Note to self: when you have two posts open in WordPress, one that you want to post and one which is a scratchpad of content for future posts and not for publication, check carefully before pressing “Publish”.

A Level Playing Field

Subversion was started in 2000 by CollabNet, which has been the project’s primary funder since its inception, paying the salaries of several developers. Soon after the project began, we hired another developer, Mike Pilato, to join the effort. By then, coding had already started. Although Subversion was still very much in the early stages, it already had a development community with a set of basic ground rules.

Mike’s arrival raised an interesting question. Subversion already had a policy about how a new developer gets commit access. First, he submits some patches to the development mailing list. After enough patches have gone by for the other committers to see that the new contributor knows what he’s doing, someone proposes that he just commit directly. Assuming the committers agree, one of them mails the new developer and offers him direct commit access to the project’s repository.

CollabNet had hired Mike specifically to work on Subversion. Among those who already knew him, there was no doubt about his coding skills or his readiness to work on the project. Furthermore, the volunteer developers had a very good relationship with the CollabNet employees, and most likely would not have objected if we’d just given Mike commit access the day he was hired. But we knew we’d be setting a precedent. If we granted Mike commit access by fiat, we’d be saying that CollabNet had the right to ignore project guidelines, simply because it was the primary funder. While the damage from this would not necessarily be immediately apparent, it would gradually result in the non-salaried developers feeling disenfranchised. Other people have to earn their commit access—CollabNet just buys it.

So Mike agreed to start out his employment at CollabNet like any other volunteer developer, without commit access. He sent patches to the public mailing list, where they could be, and were, reviewed by everyone. We also said on the list that we were doing things this way deliberately, so there could be no missing the point. After a couple of weeks of solid activity by Mike, someone (I can’t remember if it was a CollabNet developer or not) proposed him for commit access, and he was accepted, as we knew he would be.

That kind of consistency gets you a credibility that money could never buy. And credibility is a valuable currency to have in technical discussions: it’s immunization against having one’s motives questioned later. In the heat of argument, people will sometimes look for non-technical ways to win the battle. The project’s primary funder, because of its deep involvement and obvious concern over the directions the project takes, presents a wider target than most. By being scrupulous to observe all project guidelines right from the start, the funder makes itself the same size as everyone else.

— Karl Fogel, Producing Open Source Software