Today, when developing embedded control systems, it is standard practice to test control algorithms against some kind of “world model”, “plant model” or “environment simulator”.
Using a simulated control system, or a virtual platform running the actual control system code, connected to the world model lets you test the control system in a completely virtual and simulated environment (see, for example, my Trinity of Simulation blog post from a few years ago). This practice of simulating the environment for a control computer has a long history in the aerospace field in particular, and I have found that it goes back at least to the Apollo program.
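To make the idea concrete, here is a minimal sketch of the pattern: a control algorithm exercised against a software "plant model" in a closed loop, entirely in simulation. The controller, plant dynamics, and all parameters are illustrative inventions for this sketch, not taken from any real flight system.

```python
# Closed-loop test of a controller against a simulated plant.
# All names and dynamics here are illustrative, not from any real system.

def controller(setpoint, measurement, kp=2.0):
    """Proportional controller: command proportional to the tracking error."""
    return kp * (setpoint - measurement)

def plant_step(state, command, dt=0.01):
    """Plant model: a simple integrator (e.g. a rate command moving a
    position), advanced one time step with forward Euler integration."""
    return state + dt * command

state = 0.0
for _ in range(1000):  # simulated time steps, no real hardware involved
    command = controller(setpoint=1.0, measurement=state)
    state = plant_step(state, command)

print(f"final state: {state:.4f}")  # settles near the setpoint of 1.0
```

The same loop structure works whether "controller" is a Python function, a simulated control computer, or a virtual platform running the real binary; only the fidelity of the two halves changes.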
The Apollo Guidance Computer (AGC)
According to David Mindell's book Digital Apollo – Human and Machine in Spaceflight, the Apollo program appears to have been the first instance of simulating a real-time control computer together with its operational environment. During the design phase of the Apollo system in the early 1960s, far more functionality ended up in the central digital computer than anyone had foreseen at the start. Indeed, using a general-purpose digital computer for control was itself a novelty. Until that point, control systems had been essentially analog and special-purpose, but Apollo used a prototypically modern system design, with multiple sensors and actuators connected to a single central computer.
The AGC itself was a single, non-redundant computer. Current avionics practice is to use triple redundancy to mitigate both hardware and software problems (an approach also used in the Space Shuttle program that followed Apollo).
In the days of Apollo, the computers were still too big and heavy and power-hungry to have more than one on board. Instead, the AGC was built to a very exacting standard, using extreme hardware reliability rather than redundancy to ensure mission reliability. The hardware of the day did make this simpler than it is today, since circuits were not as sensitive to cosmic rays and electrical upsets. Memory was core memory, essentially immune to anything short of a wire cutter.
Still, the first few AGCs that flew were built in a modular fashion, on the assumption that astronauts would make repairs to the equipment in flight! In practice, this meant carrying what amounted to an entire spare computer, which was too costly in weight and space. Moreover, a Mercury flight had demonstrated that moisture from the cabin could cause short-circuits and electronics failures, which made it clear that the AGC had to be sealed.
Thus, the final AGC was a single sealed computer unit. There was one in the moon lander and one in the command module (main capsule).
In the end, as I understand it, there were a few backups to use if the AGC failed. Ground control was quite capable of tracking and flying the Apollo spacecraft – some early assumptions had been that the Soviets would try to jam ground-based signals to Apollo, but it became clear in the early 1960s that this was not a reasonable assumption in peacetime. Furthermore, the astronauts in the capsule could at least get some things done manually (even if hands-on piloting was not precise enough for reentry).
Software Testing in Apollo using Simulation
The decision to use a central digital guidance computer made software critical to mission success. The software effort never quite matched the hardware effort in manpower, but even so the MIT Instrumentation Laboratory (the prime contractor) employed some 400 software developers at the peak of the software effort – plus an unknown number at subcontractors.
The total size of the software was 36,864 16-bit words, or about 72 KiB. That is not a lot of room for a very challenging mission, and all kinds of tricks were needed to fit the mission software in there. The software ran on top of a novel dynamic priority-driven scheduler, whose exact behavior was impossible to predict offline.
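A priority-driven scheduler is easy to sketch. The snippet below is an illustration loosely in the spirit of the AGC "Executive" (the real AGC mechanism differed in many details): the highest-priority ready job always runs next, so the actual execution order depends on which job requests arrive at runtime – which is exactly what makes the schedule hard to predict offline. The job names are hypothetical.

```python
import heapq

# Illustrative priority-driven scheduling: whichever queued job has the
# highest priority runs next, regardless of arrival order.

def run(jobs):
    """Run queued (priority, name) jobs highest-priority-first;
    returns the order in which they executed."""
    # heapq is a min-heap, so negate the priority to pop the highest first.
    heap = [(-priority, name) for priority, name in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

print(run([(10, "attitude"), (30, "landing-radar"), (20, "display")]))
# runs landing-radar first, then display, then attitude
```

In a real system, new job requests arrive while others are running, so the only way to see the full behavior is to run the system – in simulation or on hardware.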
Thus, simulation and actual testing were necessary to explore the real behavior of the software and control system during the mission. They ended up simulating the AGC on mainframe computers, which were much more readily available than AGC hardware (Apollo had a big budget).
From David Mindell, Digital Apollo – Human and Machine in Spaceflight, MIT Press, 2008, page 148 (in the 2011 paperback edition):
The IL [the MIT Instrumentation Laboratory, the main contractor for the Apollo control system] had a number of mainframes, including a Honeywell 1800 and the new IBM 360, which ran simulations of the Apollo computers. Indeed, throughout the program the software ran in simulation for validation on these machines, as well as on Apollo hardware.
He notes that running in simulation provides more insight into the software, which is a recurring theme in the simulation of digital computer systems (see, for example, my blog posts about The Soul of a New Machine and various debugging exercises over the years). Running more slowly in exchange for added insight is a classic trade-off:
A digital simulator could analyze program operation in great detail, step by step, with a host of tracing and reporting tools, but at a comparatively low speed.
It gets really interesting when they had to bring in the environment to exercise the software:
The simulator included routines like UNIVERSE, LUNAR, and TERRAIN to model various environments, and even one called ASTRONAUT that simulated the human operator.
This being the early 1960s, analog components were also used to simulate the operational environment.
To complement these digital models, analog computers simulated the spacecraft’s dynamics, from center of mass and rocket thrust to parameters for structural bending and fuel slosh, connected to models of the AGC [Apollo Guidance Computer] that would run in real-time.
I.e., a real computer running real software connected to a simulated world. This kind of real-time world simulation is still being used today to exercise embedded software on the real control computers.
They also added a cockpit and operator in the loop:
Hybrid simulators mixed the two, and even included a sextant, inertial unit, and a full user interface, allowing engineers and astronauts to exercise the system from the front panel.
In the end, this meant that they could indeed simulate the entire mission – in order to check that software, hardware, user interfaces, and all other components would work together correctly.
Together, they amounted to building the Apollo spacecraft and traveling to the moon in a completely numerical, virtual environment, an electronic equivalent of the wind tunnels of an earlier era.
Full-Scale Flight Simulation
In addition to simulating the computer for the benefit of the software developers, the Apollo program also built several classic flight simulators: complete mockups of the lunar module and the command module, with projectors, audio, and motors providing a very realistic environment for training the astronauts for the mission. The Digital Apollo book states that the Apollo astronauts spent half of their training time in these simulators.
Lacking real-time graphics generation capabilities, they actually built a 1:2000 scale terrain model of the planned lunar landing sites and “flew” a camera over this to provide images for the lunar module windows!
It is impressive to realize how pervasive simulation, and computer-based simulation in particular, was in the Apollo program. I guess it proves once again that most fundamental ideas in computing were invented before 1970…
The book where I learned all this, Digital Apollo, is highly recommended. It is very interesting both from a history-of-technology perspective and from a user-interface and system-design perspective. The questions raised in the Apollo program about the role of the human in piloting a craft are still with us. With autonomous vehicles and advanced driver-assistance systems becoming more common, this is a question that will have a huge societal impact over the next few decades.