I just read a fairly interesting book about the British Spitfire fighter plane of World War 2. The war bits were fairly boring, actually, but the development story was all the more interesting. I find it fascinating to read about how aviation engineers in the 1930s experimented and guessed their way from the slow, unwieldy biplanes of World War 1 and the 1920s to the sleek, very fast aircraft of 1940 and beyond. It is a story that also has something to tell us about contemporary software development and optimization.
The Spitfire development started with an excellent basic architecture for the aircraft, including the wing shape, the long nose, and the low pilot position. This was a good basic design that lasted until the early jet era, and that was still competitive in 1945 (with lots of upgrades to engines, armament, and equipment along the way). What is truly fascinating is the detail work that went into turning that good design into a practical fighter aircraft.
Especially the performance, as measured in simple top speed. Here, the engineers fought a constant battle between the requirements of armament, electronics, and engines and the need for as clean and streamlined an outline as possible. It was a matter of constant attention to little details: adding things, testing their effect, redesigning or scrapping features. It is very similar to how we develop software today, where adding features might help some users, but often at the price of more complexity, longer critical paths, and lower absolute performance.
For example, early prototypes had a simple skid instead of a full tail wheel, as the skid was more streamlined and cheaper to build. But flying from a hard-surface airfield required a wheel, and in the end the Spitfire had to use a retractable rear wheel, which was not in the original design (the RAF had decided to upgrade their airfields, and this requirement was introduced fairly late in the process). It cost some weight and complexity, but increased top speed by several miles per hour. Once again, we see the same kind of pattern in software development: you can gain performance at the cost of complexity somewhere else. Advanced optimizations tend to rely on quite complicated techniques to make the common case fast, while simpler implementations offer lower performance but shorter development time.
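In software, this tradeoff often takes the shape of a fast path for the common case. A minimal sketch, assuming a hypothetical expensive address translation (the function names and the simple unbounded cache are illustrative, not taken from any real simulator):

```python
def make_translator(slow_translate):
    """Wrap an expensive translation with a small cache.

    The cache is extra complexity (memory use, potential invalidation
    bugs), but it makes the common case -- repeated lookups of the
    same address -- fast.
    """
    cache = {}

    def translate(addr):
        hit = cache.get(addr)
        if hit is not None:            # fast path: cache hit
            return hit
        result = slow_translate(addr)  # slow path: full computation
        cache[addr] = result
        return result

    return translate
```

The simpler implementation would just call `slow_translate` every time: less code and shorter development time, at the cost of lower performance.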
Simulation was also used in a very clever way. In one series of experiments, the question was asked whether simpler, cheaper rivets with domed heads could be used instead of complicated flush rivets. To check this, they glued split peas to a prototype aircraft to simulate various configurations, and in the end concluded that it was fine to have domed rivets on most of the body, but that the wings absolutely required flush rivets. Very ingenious experimentation, I think. And a story that should be familiar to anyone who has done some optimization work on real-world software: some things that seem “necessary and right” actually do not have the expected benefit for the cost (flush rivets on the body), while others are crucial (flush rivets on the wings). You typically do not know until you have tried. Just guessing is usually a bad guide (I just read an article at Embedded.com about the misguided attempt to establish an “Embedded C++ subset” in the mid-1990s that is a perfect example of this).
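The software analogue of gluing peas to a prototype is cheap measurement: instead of guessing which variant matters, time both. A purely illustrative sketch using Python's standard timeit module (the two variants here merely stand in for the “flush” and “domed” choices; nothing is specific to any real codebase):

```python
import timeit

setup = "data = list(range(1000))"

# Two candidate implementations of the same operation.
builtin = timeit.timeit("sum(data)", setup=setup, number=1000)
loop = timeit.timeit("s = 0\nfor x in data: s += x", setup=setup, number=1000)

# The measurement, not intuition, tells you whether the difference
# is worth acting on, and where in the code it matters.
print(f"builtin sum: {builtin:.4f}s, explicit loop: {loop:.4f}s")
```

Two timings on representative data settle in seconds what an argument from first principles might get wrong.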
Other examples of the battle for speed were that adding radio aerials reduced speed by a few mph, as did the addition of extra cooling air intakes for stronger engines. On the other hand, a stronger engine also increased speed, so it was a good tradeoff in the end. There was a short-lived little air intake, added to provide driving air flow to cockpit electronics, that cost a few mph and was promptly removed. It is actually quite fascinating to see, across all aircraft of this era, how little bumps and protrusions can have a significant impact on speed and performance. It was a case of “death by a thousand cuts”: each little feature by itself can seem insignificant, but the total effect is dramatic. Here we also see a modern analogy in software optimization: while undergraduate courses teach you to identify “the big bottleneck” and “use a better algorithm”, most real-world software has no single primary bottleneck. There, improving little things all over the place will have a major aggregate impact. The TWiT podcast #167 has a discussion of how this is the case for Windows 7, where Microsoft made big strides in performance through a lot of small improvements.
Thus, in software you can also get “life by a thousand cuts”: by cutting out a thousand little pieces of overhead, you can make your software much more lively.
In my world of computer simulators and virtualization solutions, this is a very familiar scenario. There are sound basic architectures (and less sound ones), and for each type of architecture, the quality of implementation can make a marked difference in performance (and stability). I recently published a white paper on some of these aspects of Simics, which I think is a good example of a Spitfire-style design: a good basic architecture plus lots of detail work to really make performance shine.