All communities develop rituals over time. One of the enduring linux-kernel rituals is the regular heated discussion on development processes and kernel quality. To an outside observer, these events can give the impression that the whole enterprise is about to come crashing down. But the reality is a lot like the New Year celebrations the author was privileged enough to see in Beijing: vast amounts of smoke and noise, but everybody gets back to work as usual the next day.
Beyond that, though, discussions of this nature have real value. Any group which is concerned about issues like quality must, on occasion, take a step back and evaluate the situation. Even if there are no immediate outcomes, the ideas raised often reverberate over the following months, sometimes leading to real improvements.
The immediate inspiration for this round of discussion was broken systems resulting from the 2.6.26 merge window. This development cycle has had a rougher start than some, with more than the usual number of patches causing boot failures and other sorts of inconvenient behavior. That led to some back-and-forth between developers on how patches should be handled. Broken patches are unfortunate, but one thing is worth noting here: these problems were caught and fixed even before the 2.6.26-rc1 kernel release was made. The problems which set off this round of discussion are not bugs which will affect Linux users.
But, beyond any doubt, there will be other bugs which are slower to surface and slower to be fixed. The prevalence of these bugs has led to a number of calls to slow down the development process in one way or another. To that end, it is worth noting that the process has slowed down somewhat, with the 2.6.26 merge window bringing in far fewer changesets than were seen for 2.6.24 or 2.6.25. Whether this slower pace will continue into future development cycles, or whether it is simply a lull after two exceptionally busy cycles, remains to be seen.
But, if the process does not slow down on its own, there are developers who would like to find a way to force it to happen. Some have argued for simply throttling the process by, for example, limiting new features in each development cycle to specific subsystems of the kernel. There has also been talk of picking the subsystems with the worst regression counts and excluding new features from those subsystems until things improve. The fact of the matter, though, is that throttling is unlikely to help the situation.
Slowing down merging does not keep developers from developing; it just keeps their code out of the tree. An extreme example can be found in the 2.4 kernel: the merging of new code was heavily throttled for a long time. The distributors responded by merging new developments themselves, because their users were demanding them. So a lot of kernels which went under the name "2.4" were far removed from anything which could be downloaded from kernel.org. That way lies fragmentation - and almost certainly lower quality as well.
Linus actually takes this reasoning a step further, arguing that quickly merging patches leads to better quality:
[M]y personal belief is that the best way to raise quality of code is to distribute it. Yes, as patches for discussion, but even more so as a part of a cohesive whole - as _merged_ patches! The thing is, the quality of individual patches isn't what matters! What matters is the quality of the end result. And people are going to be a lot more involved in looking at, testing, and working with code that is merged, rather than code that isn't.