Is Open Source Too Open for its Own Good?
by Glyn Moody
While I was at linux.conf.au 2010 last month, I finally met Ted Ts'o, one of the most senior figures in the Linux world, and, like many of them, now working for Google. Indeed, few people go further back in the world of Linux than Ts'o: he made his first contribution to version 0.10 of the kernel, which came out in December 1991, and he also set up the first site in the US that carried the Linux kernel and related software.
Before working on Linux, Ts'o had tried out Minix, but found it unsatisfactory. As he told me when I interviewed him ten years ago:
I had looked at Minix, and was not impressed. It was extremely cumbersome - it was a teaching OS, that much was obvious. It was not designed to necessarily be clean, it was there to demonstrate how you might do a microkernel, message-passing-like system, but it never actually had any of the advantages that you might have of a message-passing system in terms of it being completely single threaded. Professor Tanenbaum had absolutely no interest in getting it to work on the 386, which meant that the patches were available, but the patches couldn't be distributed with the core Minix system because of the copyright issue. I just didn't consider it a particularly satisfactory system.
Linus, by contrast, welcomed contributions:
He usually just said something like, yeah, that's a good patch, I'll be accepting it, and I would get to see the patches integrated in the next release, in fairly short order.
This ready acceptance of patches was, of course, a key element of Linux's success. But to make the most of those patches, a system had to be developed to manage them, and here Ts'o played an important role in its evolution:
What gradually started happening was more people were sending bug reports or requests for help as opposed to actual patches. And I would just simply be the one to handle the replies. This all happened on a public mailing list, so people would see that when so and so asked for help, I would be the one who sent in the reply with the bug fix and patch. And so what generally ended up happening was other people would start deferring things to me, and start sending patches to me because they knew I was working actively in that area anyway.
After a certain point, people would send patches to Linus for say the serial driver, or the tty area, and he would send the patch to me and say: what do you think of it?
Thus were born the “trusted lieutenants”, the people to whom Linus was happy to delegate decision-making about particular areas. That was important, because it meant that the Linux development process could scale beyond what Linus himself could personally supervise.
As it happens, when I met Ts'o last month, we talked about precisely this issue of trust. Prefacing his comments with the standard “I'm not speaking for Google” that all Googlers seem programmed to utter before casual conversation, Ts'o reflected on the recent computer break-in at Google, and the fact that some suggested it had been down to backdoors in code.
Whether or not that was the case, he pointed out that there is a growing danger that open source might become a tempting vector for such attacks as it becomes more widely deployed, especially among governments and global enterprises. The fact that anyone, anywhere, can in theory provide patches makes this more problematic.
Hitherto, there has been an unspoken faith that people submitting patches can be trusted because they are generally known and have a track record, just as Ts'o did back in the early days of Linux. But as the number of patches increases, and they come from more and more contributors about whom less and less is known, so the risk increases that they contain undeclared extra features that third parties might find useful at some later date.
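The kernel community's partial answer to this provenance problem is the Developer's Certificate of Origin: every patch must carry a Signed-off-by: line naming someone who takes responsibility for it. As a rough sketch (the patch text and developer name here are invented for illustration), the kind of trivial check a maintainer's tooling might run looks like this:

```shell
# Create a toy patch file carrying a sign-off line, as the kernel's
# Developer's Certificate of Origin requires (names are hypothetical).
cat > example.patch <<'EOF'
Fix off-by-one in serial driver buffer handling

Signed-off-by: Jane Developer <jane@example.com>
EOF

# A maintainer's script might reject any submission lacking a sign-off.
if grep -q '^Signed-off-by:' example.patch; then
    echo "patch carries a sign-off"
else
    echo "patch is missing a sign-off" >&2
fi
```

Of course, a sign-off only records who claims responsibility; it does nothing to prove the patch is free of deliberately hidden behaviour, which is exactly the gap Ts'o is worried about.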
This issue of trust and its breach is not a purely theoretical problem. The recent scare about infected Firefox add-ons showed that it's simply not possible to catch everything, even fairly obvious stuff. If even infected add-ons can slip through, imagine how much harder it will be spotting subtle backdoors in otherwise useful patches.
And there's another major problem that threatens to destabilise the system of trust that underlies the open source development model in a different way. What happens when some of the authorities that are supposed to vouch for third parties are themselves possibly suspect? That, again, is a situation that has already cropped up:
It seems that, at the end of October, Mozilla approved the addition of the China Internet Network Information Center (CNNIC) as a root certification authority, meaning that Firefox will accept CNNIC-signed certificates as valid and fully trusted. CNNIC is said to be controlled by the Chinese government and is alleged to be heavily involved in spying on Chinese citizens; numerous people are concerned that it will use its root CA position to facilitate man-in-the-middle attacks. Unfortunately, most of these concerns were not raised during the discussion period, making the removal of CNNIC - if warranted - harder.
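The reason a single questionable root matters so much is structural: a TLS client trusts every root CA it ships with equally, and validation succeeds if any one of them signed the server's chain. There is no built-in way to say "trust this CA only for these domains". A minimal Python sketch, using only the standard library, shows how many roots a default client context treats as fully equivalent:

```python
import ssl

# Build the default client context: it loads every root CA the
# operating system trusts, all with equal authority.
ctx = ssl.create_default_context()

# cert_store_stats() reports how many CA certificates were loaded.
stats = ctx.cert_store_stats()
print(f"CA certificates loaded: {stats['x509_ca']}")

# Certificate validation succeeds if ANY of these roots signed the
# server's chain -- the client cannot scope a root to particular
# domains. So adding one compromised or coerced root undermines the
# authentication of every HTTPS site, not just sites in its region.
```

This is why the debate over any single root CA addition is really a debate about the whole trust model, not just one organisation.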
Now, as the comments to that post point out, this may be an unfair characterisation of the certification authority involved, but even if it's not true in this case, it may well be true in a future case, and is certainly an issue that needs to be addressed. Of course, this isn't a problem faced by open source alone, but it makes the whole area of trust even trickier to navigate.
So what should be done? Is it inevitable that trapdoors will be (or maybe even already are) hidden away in free software? Do we need formal systems for vetting people who contribute patches? Wouldn't such systems destroy a key strength of open source? Is open source doomed to be betrayed by its own openness?