I recently read the classic book The Soul of a New Machine by Tracy Kidder. Even though it describes a project to build a machine that launched more than 30 years ago, the story is still fresh and familiar. Corporate intrigue, managing difficult people, clever engineering, high pressure: all familiar ingredients in computing today, just as they were back then. With my interest in computer history and simulation, I was delighted to actually find a simulator in the story too! It was a cycle-accurate simulator of the design, programmed in 1979.
First, the background. The book describes the project that designed Data General’s first commercial 32-bit “supermini” computer, the Eclipse MV/8000. It was almost a skunk-works project, and was initially considered “insurance” against the failure of the “serious” 32-bit machine being developed in the Fountainhead Project (which did fail, in the sense that it was late to market and therefore never commercialized). The project code name was “Eagle”, and the eventual product was called the Data General Eclipse MV/8000.
The architecture was microcoded, which meant that a significant chunk of firmware had to be developed in parallel with the finalization of the hardware design. As part of the project, the team used two wire-wrapped prototype machines, which turned into a serious bottleneck for testing the microcode. This was foreseen, and the project did indeed consider using a simulator of the system as a way to get more (virtual) development machines. However, the optimistic time plan for the overall project indicated that the simulator would not be ready until after the machine was complete (page 162 in the paperback edition):
This time, Alsing insisted. They could not build Eagle in anything like a year if they had to debug all the microcode on prototypes. [...] Alsing wanted a program that would behave as a perfected Eagle, so that they could debug their microcode separately from the hardware.
West [overall manager for the project] said: “Go ahead. But I betchya it’ll all be over by the time you get it done.”
Simulators were not new at this time; according to the people interviewed in the book, they had been used at least since the late 1960s. The seasoned engineer and group manager Alsing thought it would take about 18 months to build a full simulator for Eagle… but he had a bright new programmer (Neal Firth) who thought he could do it in two months (the clear advantage of a lack of experience).
He didn’t know what he could not do. “I think after our little talk, Neal had the picture in his mind that he knew all about simulators now. It was no problem. He could do it over the weekend.”
Alsing also put an experienced programmer, Peck, on the job in parallel. Peck finished a “quick and dirty” simulator in six weeks, while Firth got his out in about four months – but that simulator had many more helpful features. It took another few months to really finish it up, but in six months they had a fully functional and very helpful simulator in place. That is one third of the experienced manager’s estimate, which kind of worries me. Do we always get that pessimistic as we gain experience? Is there a way to harness raw enthusiasm like this even in an established organization? Interesting questions.
In any case, the result of this was impressive. The book reads like the familiar sales pitch for contemporary virtual platforms today (page 165 in the paperback edition):
As it was in fact, the Microteam [the team doing the microcode firmware] could test their code right at their desks, via their own terminals. [...] They merely had to feed Trixie [the team's existing 16-bit work computer] the microcode they wanted to test, order up the simulator, and command it to run their code.
The simulator was also a great debug tool, in particular thanks to its ability to do record-replay debugging:
They could order the simulator to stop working at any point in a microprogram. The simulator could not tell the microcoders all by itself what was wrong with their code, but it arranged for the storage of all the necessary information about what had taken place while the code was running, and would play it all back upon demand. Thus, without having to invent ingenious approaches with logic analyzers, the team could examine each little step in their microprograms. They could find out what was going wrong in an instant, in many cases. In the Microteam’s small corner of the world, Firth’s [simulator program] was an heroic act.
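The record-replay idea Kidder describes can be sketched in a few lines of modern Python. Everything below – the tiny accumulator machine, the trace format, the `replay` method – is my own invented illustration of the general technique, not a description of how Firth’s program actually worked:

```python
# Illustrative sketch of record-replay debugging: record the machine
# state after every step of a run, then let the user inspect any point
# of the run after the fact, without re-running on real hardware.
# The miniature "machine" here is invented purely for illustration.

import copy

class TinyMachine:
    def __init__(self, program):
        self.program = program      # list of (op, arg) micro-steps
        self.pc = 0                 # program counter
        self.acc = 0                # single accumulator register
        self.trace = []             # recorded state after each step

    def step(self):
        op, arg = self.program[self.pc]
        if op == "add":
            self.acc += arg
        elif op == "mul":
            self.acc *= arg
        self.pc += 1
        # Record a full snapshot so the run can be replayed on demand.
        self.trace.append({"pc": self.pc, "acc": self.acc})

    def run(self):
        while self.pc < len(self.program):
            self.step()

    def replay(self, step_no):
        """Return the machine state as it was after step `step_no`."""
        return copy.deepcopy(self.trace[step_no])

machine = TinyMachine([("add", 2), ("mul", 3), ("add", 1)])
machine.run()
print(machine.replay(1))   # state after the second micro-step: {'pc': 2, 'acc': 6}
```

The point is the same one the Microteam enjoyed: once the whole run is captured, “each little step” can be examined at leisure instead of being chased live with a logic analyzer.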
The simulator was a pretty complex piece of software, since it had to accurately simulate all the simultaneous activity inside multiple Eagle circuit boards. There were all kinds of intricate interdependencies, all taking place within a single clock cycle. It really was a model of the electronics down to very small details, not just of some abstracted design. It was truly behavior-accurate, and it had to change as the hardware design was debugged and refined (which for the hardware designers meant changing the contents and wiring of the wire wrap boards).
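A common way to model this kind of “everything happens at once within a cycle” behavior in sequential software is a two-phase update: every unit computes its next state from the current state, and only then are all the new values committed together, the way hardware registers all change on a clock edge. A minimal sketch of that pattern (again my own illustration, with no claim about how Firth structured his code):

```python
# Two-phase clock-cycle update: all units read the *current* state,
# then all outputs are committed at once, mimicking hardware latches
# that change together on a clock edge. Purely illustrative.

def cycle(state):
    # Phase 1: compute next values from the still-unmodified state.
    nxt = {
        "a": state["b"] + 1,   # unit A reads B's old value
        "b": state["a"] + 1,   # unit B reads A's old value
    }
    # Phase 2: commit everything simultaneously.
    state.update(nxt)
    return state

s = {"a": 0, "b": 10}
cycle(s)
print(s)  # {'a': 11, 'b': 1}: A saw b==10 and B saw a==0, as real latches would
```

If the updates were applied one at a time instead, unit B would see unit A’s *new* value, which is exactly the kind of ordering bug a cycle-accurate model of simultaneous hardware activity has to avoid.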
The simulator is quoted as running about 100,000 (one hundred thousand) times slower than the real thing, which I found interesting: that is very similar to the slow-down that we see today for cycle-accurate simulators of current computers. It is interesting to see the same relationship hold 30 years ago.
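To put the 100,000× figure in perspective, a quick back-of-the-envelope calculation shows what it means for turnaround time:

```python
# What a 100,000x slow-down means in practice: one second of simulated
# target time costs roughly a day of host time.
slowdown = 100_000
target_seconds = 1.0
host_hours = target_seconds * slowdown / 3600
print(f"{host_hours:.1f} host hours per simulated second")  # 27.8
```

Which is why such simulators are used to run short stretches of microcode under a debugger, not to boot and soak-test whole workloads.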
If we look at the book itself, it is at times a riveting read. There is excitement and a deep story told by Kidder, despite the fact that it is all based on real events, real people, and a real machine. I also think you can read it as a management book – there are many discussions about technologists and how to lead them, and some revelations about the need to not tell the “troops” too much of what is going on…
Recommended, not just for the few pages spent on the simulator.