Six months ago, the Google Summer of Code finished up, and the ten Mozilla-related projects submitted their final reports. I’ve just spent some time looking into the current status of all ten projects, and thought I would share those results with you.
The ten projects were:
- BugXula (Bugzilla XUL front end)
- Cockatoo (Thunderbird SIP client)
- EventLogger (test framework)
- Hindi l10n
- Firepuddle (Bittorrent for Firefox)
- Latvian l10n
- MultiExI (install multiple extensions)
- Muzzled (graphical theme builder for XUL)
- Thai l10n
- Vietnamese l10n
My original plan was to have a table highlighting the similarities and differences in current status, but it’s hardly worth it. None of the ten projects shows any sign whatsoever of work having continued after the SoC deadline. Not one. No development-related mailing list traffic, no releases, nothing. It’s as if the people vanished off the face of the earth on September 2nd.
Of the ten projects, as far as I can tell only one (Firefox in Hindi) installs in the latest stable build of the relevant product, and that’s not because it’s been updated since the release. I suspect it’s merely due to a loose XPI maxVersion setting.
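For context on that remark: extensions of that era declared their compatibility range in an install.rdf manifest, and a generous `em:maxVersion` would let an untouched XPI keep installing in builds the author never tested. A minimal sketch (the extension ID and version numbers here are illustrative, not taken from the actual project):

```xml
<?xml version="1.0"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>example-extension@example.org</em:id>
    <em:version>0.1</em:version>
    <em:targetApplication>
      <Description>
        <!-- Firefox's application ID -->
        <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
        <em:minVersion>1.0</em:minVersion>
        <!-- A loose maxVersion like this is what lets an abandoned
             XPI install in releases the author never saw -->
        <em:maxVersion>5.0</em:maxVersion>
      </Description>
    </em:targetApplication>
  </Description>
</RDF>
```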
The four localisation projects may have some excuse; unfortunately, it turned out that the evaluators chose four projects which were all duplicates of existing nascent l10n efforts. However, the SoCcers concerned were switched to translating other products – and those translations have not been maintained either. In one case (Hindi), the original group has very recently acquired CVS checkin rights, but for the other three languages, neither the SoC team nor the pre-existing group seems to have made progress towards becoming an official, reliable source of a localisation in that language.
I read recently that Venture Capital firms evaluate the team behind a company much more closely than they evaluate their product idea. As it happens, as far as I know, none of the SoCcers chosen were well-known in the Mozilla community. (Of course, I don’t know how many well-known community members applied.) This may be relevant; we’d probably need data from other projects to know for certain. But perhaps the SoC form should contain a “why should we pick you” section as well as a “why should we pick your project” section.
A couple of the projects (“Bittorrent client for Firefox”, “Graphical theme builder for XUL”) were far too large for a single person working in their spare time over the summer. So it’s unsurprising that the SoCcers either got discouraged, or produced incomplete work. Evaluators have a responsibility to size projects, and assess whether they are suitable for the SoC scheme. Not every idea is.
I believe that the money was paid to all the SoCcers except for one; that person did not produce any code at all. All the people chosen wrote a final report. Several of them give roadmaps and future plans for their product, and promise that work will continue after the project is finished. However, in no case has this happened – for whatever reason (I believe, for example, that one project owner became a father). Still, it’s interesting that once the money is paid, the drive seems to evaporate.
The difficulty for the mentors is that Google discouraged partial payments – it was all or nothing. So if the person had done some work, they were left with the difficult decision of over- or under-rewarding the person. I suspect that, in several cases, they erred on the side of generosity – after all, it wasn’t their money they were spending! Which perhaps is fair enough, given the setup.
I don’t want this post to be seen as bashing either SoCcers or mentors; they were offered a deal, and they took it. That’s fair enough. After all, I was a mentor – and there was nothing to force the SoCcers to work on after the period had finished. But if we want the Summer of Code to be useful in the future, it’s reasonable to look at our experience and their behaviour, and see if the goals were met and, if not, how to change things.
So what can we conclude? Here are some suggestions:
- Evaluators need to be very careful about which projects they approve, and carefully consider their relationship to other ongoing efforts.
- Evaluators should evaluate people as much as, or more than, projects, and give preference to people who are already committed to being part of the community.
- Evaluators should be careful not to choose projects which are too large, ill-defined or overly subjective.
- Mentors should be permitted to pay out partial amounts. This gives SoCcers a much greater incentive to do a good job.
- Mentors must be ruthless about paying out in proportion to the amount of available, working, useful and well-documented code.
- The success of the SoC needs to be measured over the long term, not just at the end of the coding period.
If anyone reads this who was involved in running the SoC for other projects, I would be interested to see similar analyses of their projects.