Simple Machine, Hard to Simulate

In the June 2010 issue of Communications of the ACM, as well as the April 2010 edition of ACM Queue, George Phillips discusses the development of a simulator for the graphics system of the 1977 Tandy/Radio Shack TRS-80 home computer. It is a very interesting read for anyone interested in simulation, as well as a good example of just why this kind of old hardware is much harder to simulate than more recent machines.

You really should read the article to get the full story. The short summary is that while the basic principle of the graphics display is very simple to simulate, the effect of rewriting the display contents while they are being drawn on the CRT is quite difficult to get right.

[Photo of a TRS-80 found online, for reference of what its graphics looked like.]

The hardware of the TRS-80 works pretty much like my old ZX Spectrum did: a bit of memory is used to hold the image data (a single buffer), and the video hardware simply reads this memory as the CRT scan goes by to display the right picture. There is no locking of the display memory during this time, so the processor can race the video hardware and modify the contents of a location in memory between scan lines. This is indeed done, and that is what makes simulating the machine much, much harder.
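
For concreteness, here is a minimal sketch of what a naive simulator of such a display could look like, assuming a 64x16 character display and a memory-mapped video RAM. The names, base address, and helper functions are my own illustration, not taken from the article:

```c
#include <stdint.h>

/* Hypothetical layout for illustration; not an exact TRS-80 memory map. */
#define DISPLAY_BASE 0x3C00   /* assumed start of video RAM */
#define COLS         64       /* character cells per row    */
#define ROWS         16       /* character rows             */

extern uint8_t memory[65536];                          /* memory shared by CPU and video */
extern void draw_cell(int row, int col, uint8_t cell); /* host-side rendering stub       */

/* Naive renderer: read the whole display memory once per frame.
 * Correct only if the program never touches it mid-frame. */
void render_frame(void)
{
    for (int row = 0; row < ROWS; row++) {
        for (int col = 0; col < COLS; col++) {
            uint8_t cell = memory[DISPLAY_BASE + row * COLS + col];
            draw_cell(row, col, cell);
        }
    }
}
```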

On the TRS-80, racing the scan makes it possible to (for example) increase the apparent vertical resolution of the display, since each graphics “pixel” actually consists of four vertical pixels that cannot be individually addressed. On the ZX Spectrum, you could increase the apparent vertical color resolution. On the Commodore 64, I think you could change the graphics palette during redraw to allow more colors to be displayed simultaneously.
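
To capture a trick like that, a simulator has to re-read display memory at least once per scan line instead of once per frame. A sketch of that idea, reusing the hypothetical names from the block above (LINES_PER_CELL is another assumption of mine):

```c
#define LINES_PER_CELL 12   /* assumed scan lines per character row */

extern void cpu_run_one_scanline(void);  /* advance the CPU by one scan line's worth of cycles */
extern void draw_cell_line(int row, int col, uint8_t cell, int line);

/* Scanline renderer: a cell rewritten by the CPU mid-frame shows old
 * data on its upper scan lines and new data on its lower ones, which
 * is the increased-apparent-resolution effect described above. */
void render_frame_by_scanline(void)
{
    for (int row = 0; row < ROWS; row++) {
        for (int line = 0; line < LINES_PER_CELL; line++) {
            for (int col = 0; col < COLS; col++) {
                uint8_t cell = memory[DISPLAY_BASE + row * COLS + col];
                draw_cell_line(row, col, cell, line);
            }
            cpu_run_one_scanline();   /* let the guest program race the beam */
        }
    }
}
```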

For a simulator, this is pure pain. Getting the right result from graphics code written for those machines essentially requires a very precise simulation of the timing of all processor instructions, as well as of the behavior of the video hardware. As noted in the article, you need to model the memory access contention that results from the processor writing to the display memory at the same time as the video hardware is reading it. You need to know the cycle count of each pixel, and the setup time between each row of pixels. To reproduce effects like dithering on a modern, perfectly stable LCD display, you have to apply filters to the basic bitmap, simulating the fuzziness of the cheap old TVs that these computers used to drive. If you want to simulate the horrible tricks used to make music on the ZX Spectrum’s 1-bit sound output (as I recall it, you made noise by flipping a bit with an OUT instruction on the Z80 CPU), you probably need to do a bit of analog waveform simulation.
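
The kind of main loop this implies has the CPU and the video beam sharing a single cycle counter. A sketch of that structure, where the function names, the contention stall, and the cycles-per-frame figure are placeholders of mine, not the article's:

```c
#include <stdint.h>

extern int  z80_step(void);             /* execute one instruction, return its exact cycle count */
extern void video_emit_pixels(uint64_t start_cycle, int ncycles); /* draw what the beam covers   */
extern int  write_hit_video_ram;        /* set by the memory system on a contended write         */

#define CYCLES_PER_FRAME 35000          /* placeholder; the real value follows from the clock rate */

void run_one_frame(void)
{
    uint64_t cycle = 0;
    while (cycle < CYCLES_PER_FRAME) {
        int n = z80_step();             /* instruction-accurate timing is mandatory here */
        if (write_hit_video_ram) {
            n += 1;                     /* crude stand-in for a CPU/video memory contention stall */
            write_hit_video_ram = 0;
        }
        video_emit_pixels(cycle, n);    /* beam position is derived purely from the cycle count */
        cycle += n;
    }
}
```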

Indeed, it seems to me that one enabler for today’s virtual platforms is that we have hardware that is not entangled in low-level timing like this. Instead, thanks to the variability of execution time in modern processors, hardware is asynchronous and tends to use interrupts or status bits in registers to report when a requested operation is complete. You no longer see code of the type “do X, wait precisely Y cycles, and then do Z”, and that really helps when changing the target system timing to enable fast simulation – which is what any fast virtual platform has to do, replacing a real processor with variable timing by an ISS with pretty simple instruction timing. It also means that hardware models can be simpler, since they do not model how the hardware achieves its work, only what that work is. To loop back to the article prompting this blog post, in a modern virtual platform, all you would simulate is the specified effect of setting graphics bytes in memory, not the incidental effect of how they are drawn onto a TV, scan line by scan line.
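
As a contrast to the sketches above, a purely functional display model in such a platform could be as simple as this. Again, the names are hypothetical, not any real product's API:

```c
#include <stdint.h>

#define DISPLAY_BASE 0x3C00
#define DISPLAY_SIZE (64 * 16)

static uint8_t video_ram[DISPLAY_SIZE];
extern void host_update_cell(int offset, uint8_t value);  /* refresh the host window */

/* Called by the memory system when the guest writes into video RAM:
 * the only thing modeled is the specified effect of the write -- no
 * beam position, no cycle counting. */
void display_write(uint16_t addr, uint8_t value)
{
    int offset = addr - DISPLAY_BASE;
    video_ram[offset] = value;
    host_update_cell(offset, value);
}
```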
