This might appear to be a stretched analogy, but it struck me as obvious when I tried playing the Lego Racers board game with my 3-year-old this weekend. The game is ranked pretty low on Boardgamegeek, and deservedly so. The promise and premise are great: use Lego cars to race around a track and pick up new pieces to modify the powers of your car… sounds like great fun, right? But it is not, and that’s where my analogy with the age of software comes in.
Lego Racers is a very buggy game. It takes almost no playing to get into a situation that is not covered by the rules, which would seem to indicate that play testing was not part of the design process. It seems that the designers made the same mistake as many programmers do: explore the obvious and primary path of execution, without thinking about what could go differently or go wrong. For something as simple as this game, that is simply sloppy. For something as complex as, say, an operating system or a telephone switch, it is more understandable.
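The happy-path mistake is easy to illustrate in code. Here is a minimal, hypothetical sketch (the function and its rules are my own invention, not from any real game), contrasting a version that only handles the obvious case with one that states what happens in the corner cases:

```python
def split_points(total_points, num_players):
    """Happy-path version: correct only for well-formed input.

    Crashes with ZeroDivisionError when num_players == 0, and
    silently accepts nonsensical negative scores.
    """
    return total_points / num_players


def split_points_defensive(total_points, num_players):
    """Defensive version: the unusual cases are considered and ruled out."""
    if num_players <= 0:
        raise ValueError("need at least one player")
    if total_points < 0:
        raise ValueError("points cannot be negative")
    return total_points / num_players
```

The defensive version is not cleverer, it has just had its "what could go wrong?" questions asked up front instead of being discovered by a user (or a 3-year-old) later.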
This is where the age of software comes in: the more a particular piece of software has been used, the more different cases will have been explored, and the more errors, mistakes, and simple design holes will have been fixed. Steve Gibson of grc.com, on his and Leo Laporte’s Security Now podcast, often says that a completely new piece of software cannot be said to be secure, since there is no evidence to support that. It might well have been developed, in principle, with lots of security in mind. But until proven in the real world against real adversaries, there is no support for calling it secure.
It is also the case that no amount of internal testing can provide full coverage of all the cases that will appear in actual use at real customer and user sites. It is a matter of volume, but also a matter of sheer inspiration and creativity. Someone with a real problem to solve will use the tools they have in any way they can imagine… and your own developers cannot be expected to have the same imagination as a user population many times their own size. That’s why beta testing, customer early access, and iterative development are so important: only then will all the possible ways of using something be explored. It often turns out that users feel that your software can do something that you never quite thought it would — and that you sometimes have to insert specific limitations into the documentation saying, “sorry, you cannot do that (for some not initially obvious but deep technical reason)”.
This also puts an interesting perspective on new, creative software. Any new software entering a market will not have support for all possible users and all possible use cases. If there is an established, older piece of software in the same domain, the new software will tend to solve fewer problems and handle fewer odd boundary conditions. The new software will typically be designed to solve some part of the problem better (or cheaper) than existing software (otherwise, its existence is hard to motivate), but initially it will not have the breadth and depth of coverage that a decade-old package will have, simply from the older package having been subject to users and their creativity and requirements for a long time.
That sounds like an awfully academic argument, so here is one concrete example: the Linux operating system is now catching up to the old heavyweights like Solaris and AIX in terms of scalability, robustness, and features. Solaris still seems to scale better to really large numbers of cores and processes, but compared to where Linux used to be in the pre-2.6 kernel versions, the situation is vastly improved. Doing multiprocessing like that well simply seems to take calendar time; more users alone does not help. You need the grind of having to transition through a few different generations of hardware of different types, with different trends being judged important along the way. Similarly, the real-time operating systems now becoming SMP-aware will not scale as well as Linux does, at least not when judged by shared-memory flat designs. They do have their areas of merit, but they simply will not accumulate the same kind of shared-memory experience until a few years have passed. On the other hand, in the domains where predictability, control, and performance count, they are far superior to general-purpose operating systems like Linux and Solaris, since they have been accumulating far more experience there. The same fact is evident in military history: time and again, history shows how “seasoned troops” have a quality that no amount of quantity, or training quality of fresh troops, can match. Troops and equipment have to prove themselves in real battles with real enemies before their true quality can be assessed and their full potential realized.
To sum up, it seems to me that while cool new software is exciting to write and exciting to use, for heavy-duty real-world use you want seasoned, well-aged software that has been proven and tested in a wide variety of real-world cases over a significant period of time. Nothing beats experience, at least as long as the software system is maintained well enough that it evolves in a way that keeps it open to future evolution. It does happen that old software gets worse with age… but there are many examples that are like fine wines and just tend to get better with time.
As a final aside, we also own some other Lego-branded children’s games, and while they are not the most complex games in existence, they are at least consistent and work without a problem. So the Lego brand itself need not be avoided.