Timesharing System Design Concepts (1970)

When I recently turned 50, a friend of mine gave me a book that was about as old as me – Timesharing System Design Concepts, by Richard W. Watson. The copyright date is 1970, but the publishing date found online is 1971. The book covers the hardware support and software techniques needed to provide multiple users with simultaneous access to a computer system, typically via remote teletype terminals. It is a wonderful book, reflecting the state of computer technology around 1970. It shows both how many of the basic concepts we still use today were already in place back then, and at the same time just how far computing has evolved since.

True system design

The book deals with system design, from an age where designing a system from scratch was the norm. A commercial vendor or academic project would build a custom CPU and memory system, custom input-output units, and their own interconnect. They would create the operating system needed to support the hardware, as well as tools for compiling programs and editing source code. There is no talk of using off-the-shelf processors or other ready-to-use hardware. This is about designing from the ground up. Compatibility is not a concern – even though IBM had already figured out, in the design of the System/360 in the early 1960s, that software compatibility would be very important to commercial computing.

Things were simpler

Hardware was a lot simpler in 1970. Memory address lengths in the systems used as examples tend to be 16 bits. This quote about processor core design really made me laugh… definitely not true anymore (just look at the number of units and mechanisms in a modern processor core as exemplified by the recently unveiled Intel Golden Cove):

The CPU is designed using a fixed number of registers, decoding modules, arithmetic-logic modules and control modules. These modules are interconnected with a few well-defined data-transfer paths and control-signal paths…  Besides the high level of modularity, one should recognize in CPU design the small number of basic control and data-transfer mechanisms used.

Note the absence of mechanisms that are mandatory today, like pipelines, caches, memory management units, and translation look-aside buffers.

The book refers to some existing designs and uses them to illustrate concepts, most prominently the XDS-940, which at least partially came out of academic research. The machine was used by the prominent time-sharing vendor Tymshare, which you could say was one of the very early cloud computing pioneers – they sold computer access to companies that could not afford their own dedicated computers. The year being 1970, the Multics system is also used to show how some more complex ideas like paging can be implemented.

Design thinking

The introductory chapter has a section on the design process that you can read as either quite dated or fully modern.

First of all, we have the notion of simplicity and cohesiveness in the system design. The author is of the opinion that large teams lead to bad designs, and bad designs will end up having problems in implementation. I quote:

In fact, if the system is so complex that one man cannot understand the general functioning and interconnection of the major modules and submodules, there is danger that the design will slip out of control.

I think that this can still be applied to current systems, but with modules and submodules that are probably a million times bigger (in terms of transistor count, at least). By increasing the level of abstraction and the use of modularization and layering, we should strive to achieve the same level of understandability for the systems we design. The call for a system to be designed with economy in concepts is unfortunately something that is very rare today.

A core design-process idea is early critique and discussion of the design, before any implementation starts.

Experience also indicates that it is good practice not to begin implementation until the design has been criticized and documented. Managers and designers are often impatient to produce working programs, but usually the best way to speed up the total system development is to go slow in the critical design stages.

You could interpret this as “old waterfall” and discard the idea of up-front documentation. On the other hand, my personal experience very much agrees with this – if you jump into coding without a clear plan and idea of what to build and how it is architected, the result will most likely be a mess. Writing down a design and discussing it at a high level of abstraction will find the big issues a lot faster and at much less cost than just going ahead and writing code, only to discover the real problems later on. And of course, the right way to do this today is to use simulation to work out the architecture – current systems are too complex to be properly evaluated by hand and mind.

I found a very good quote that encompasses something I totally believe in:

If a design cannot be clearly explained, it is probably not understood.

In Swedish, this was brilliantly said by Esaias Tegnér in 1820: “det dunkelt sagda är det dunkelt tänkta” – basically, if the wording of an idea strikes you as muddled, the thinking behind it is likely to be muddled too. To explain something clearly, you have to clearly understand it. Forcing designers to really explain what they have done tends to result in new insights – and sometimes the finding of flaws. Communication is a key part of any design process.

Testing is important

I am a strong believer in the need to test systems, and I think testing is an undervalued part of the computing profession. The book provides another insightful quote:

Design of tests … is a difficult and demanding job and must be given as high a priority as the actual implementation if all submodules are to be integrated easily into the final system.

What can you do except agree? Good testing requires skill and cunning.

Old concepts

Memory management using page tables was already in use and well-known. But while we consider paging as the right solution (in most cases at least) today, back then there was still extensive use of things like segment registers and simple base relocation registers. It was still early days for multiple programs or users sharing a machine, and it took a couple of decades to figure out which solutions were the most practical.   

This quote is indicative of the relative novelty of the paging solution:

Dynamic relocation using base registers, which requires programs to be located in contiguous areas of main memory… If, however, programs and main memory could be broken into small units and the program pieces could be located in corresponding sized blocks anywhere in main memory, then the possibility exists of utilizing main memory more effectively. Paging is the name given to a set of techniques which enable such uniform memory fragmentation to be implemented.

Later in the book, paging is compared to base registers, and while it is considered to be more complex and expensive to implement, it is still worth it. It is absolutely the case that more hardware is needed for it to be efficient (it cannot be implemented purely in software), the software implementation is more complex, and it requires more memory. However:

In our view, the small extra cost of paging hardware seems worthwhile considering the extra flexibility offered the system programmer to experiment with CPU and memory allocation, and swapping algorithms.   
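To make the contrast concrete, here is a minimal sketch in C (my own, not from the book) of the two relocation schemes: a base register requires the program to occupy one contiguous region, while a page table lets fixed-size pieces land anywhere in main memory. The page size and frame numbers are made up purely for illustration.

/* Toy comparison of base-register relocation and paging. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 1024u   /* illustrative 1 KiB pages */
#define NUM_PAGES 16u     /* a small, 16-bit-era sized program */

/* Base-register relocation: one register, program must be contiguous. */
static uint32_t relocate_base(uint32_t base, uint32_t vaddr)
{
    return base + vaddr;
}

/* Paging: a page table records where each fixed-size page ended up. */
static uint32_t page_table[NUM_PAGES];   /* virtual page -> physical frame */

static uint32_t relocate_paged(uint32_t vaddr)
{
    uint32_t vpage  = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    return page_table[vpage] * PAGE_SIZE + offset;
}

int main(void)
{
    /* Scatter the program's pages over arbitrary (made-up) frames. */
    for (uint32_t p = 0; p < NUM_PAGES; p++)
        page_table[p] = (p * 7u + 3u) % 64u;

    uint32_t vaddr = 5u * PAGE_SIZE + 100u;   /* an address within page 5 */
    printf("base-register: %u\n", (unsigned)relocate_base(20480u, vaddr));
    printf("paged:         %u\n", (unsigned)relocate_paged(vaddr));
    return 0;
}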

Using swapping between fast memory and slower storage to emulate a large virtual memory space was also a technique already in use in the 1960s. Today, it feels like swapping is rather uncommon – I can remember the days when my Macs routinely used the disk to provide a virtual memory roughly double the size of the RAM. Swapping was a constant pain then. Today, it seems that most OSes are configured to basically only use RAM. Swapping in pages from disk is one way to manage the loading of programs and data, but using it to virtually expand main memory is much less common.

The separation of code into supervisor/system and user mode is presented, but not as a mandatory universal concept. Rather, it is another concept that makes sense, but that was not yet considered universally required. Indeed, when I read the book I often think that the simple concepts are the ones we still use today in deeply embedded systems, while the advanced concepts have become the baseline for our general-purpose machines.

Scheduling multiple tasks with different priorities, swapping out programs that are waiting for hardware, and prioritizing interactive sessions are all covered in the book. Once again, today we do this in rather more sophisticated ways, including scheduling to make the best use of heterogeneous processor cores, cache reuse, and non-uniform memory access times. None of these issues really existed in 1970, while today they are common.
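To give a feel for the flavor of such a policy – this is my own toy sketch, not an algorithm from the book – here is a selection routine that skips tasks blocked on hardware and boosts interactive terminal sessions:

/* Toy priority scheduler: skip blocked tasks, favor interactive ones. */
#include <stdio.h>

enum state { READY, BLOCKED_ON_IO };

struct task {
    const char *name;
    int         priority;      /* higher number = more important  */
    int         interactive;   /* 1 if tied to a terminal session */
    enum state  state;
};

/* Effective priority: interactive sessions get an arbitrary boost. */
static int effective(const struct task *t)
{
    return t->priority + (t->interactive ? 10 : 0);
}

static struct task *pick_next(struct task *tasks, int n)
{
    struct task *best = NULL;
    for (int i = 0; i < n; i++) {
        if (tasks[i].state == BLOCKED_ON_IO)
            continue;                    /* waiting for hardware, skip */
        if (!best || effective(&tasks[i]) > effective(best))
            best = &tasks[i];
    }
    return best;
}

int main(void)
{
    struct task tasks[] = {
        { "batch-compile", 5, 0, READY },
        { "editor",        3, 1, READY },
        { "tape-job",      9, 0, BLOCKED_ON_IO },
    };
    struct task *next = pick_next(tasks, 3);
    printf("run: %s\n", next ? next->name : "(idle)");
    return 0;
}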

Hardware accelerators were arguably more prominent in 1970 than they would become later in the 1970s and 1980s. In mainframe machines, it was and is standard practice to have offload engines to deal with input and output tasks. The book brings up a number of examples of I/O structures, including the use of programmable subsystems. For example, talking about the SCC-6700 computer system:

All I/O processors are identical […] These processors have a subset of the instruction set of the CPU, and they also can execute normal I/O instructions. Each I/O processor has its own memory, and there is an instruction in all processors to transfer a block of data from system main memory to an I/O processor memory.
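As a rough illustration of that structure – my own sketch, not code for the actual SCC-6700 – one can model the private I/O-processor memory and the block-transfer instruction like this:

/* Modelling an I/O processor with private memory plus the instruction
 * that copies a block of data from system main memory into it. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define MAIN_MEM_WORDS 4096
#define IOP_MEM_WORDS   256

static uint16_t main_mem[MAIN_MEM_WORDS];   /* shared system memory */

struct io_processor {
    uint16_t local_mem[IOP_MEM_WORDS];      /* private to this I/O engine */
};

/* The "transfer a block from main memory to I/O processor memory"
 * instruction, expressed as a function. */
static void block_transfer(struct io_processor *iop,
                           unsigned main_addr, unsigned iop_addr,
                           unsigned words)
{
    memcpy(&iop->local_mem[iop_addr], &main_mem[main_addr],
           words * sizeof(uint16_t));
}

int main(void)
{
    struct io_processor iop = { {0} };
    main_mem[100] = 0x1234;              /* data prepared by the main CPU */
    block_transfer(&iop, 100, 0, 8);     /* hand it over for I/O handling */
    printf("IOP word 0: 0x%04x\n", (unsigned)iop.local_mem[0]);
    return 0;
}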

What is also interesting is the mention of special management processors used to schedule user work on the main processors. In the years since, my picture is that scheduling has ended up being done by software on the main system processor cores, essentially self-scheduling. However, separating management and work is a common pattern for more specialized computing, and in particular inside complex accelerators today.

The book does talk about multiple processors working together, but I get the sense that the idea is typically to have different programs running on different processors as a way to increase overall capacity, not to spread a single program across multiple concurrent threads as a way to reduce compute time.

For file systems, the requirements list provided as a guideline for designers covers what you would consider the standard set of operations on files. Users should be able to share files with other users in a controlled way, including specifying who gets access to a file and the type of access (read, write, execute). Maybe somewhat surprisingly, backup is mentioned as a basic capability, mostly to protect against accidental file deletion or damage. There are also some requirements that are totally obvious today, such as file names and access paths being separated from the underlying hardware implementation of the storage.
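As a sketch of what such controlled sharing amounts to – my own illustration, the book states requirements rather than an implementation – consider a per-file list of users and the rights they have been granted:

/* Per-file access control: the owner plus an explicit list of grants. */
#include <stdio.h>
#include <string.h>

#define MAX_GRANTS 8

enum { ACC_READ = 1, ACC_WRITE = 2, ACC_EXECUTE = 4 };

struct grant {
    char user[16];
    int  rights;                    /* bitmask of ACC_* values */
};

struct file_entry {
    char         name[32];
    char         owner[16];
    struct grant grants[MAX_GRANTS];
    int          n_grants;
};

static int may_access(const struct file_entry *f, const char *user, int want)
{
    if (strcmp(f->owner, user) == 0)
        return 1;                               /* the owner may do anything */
    for (int i = 0; i < f->n_grants; i++)
        if (strcmp(f->grants[i].user, user) == 0)
            return (f->grants[i].rights & want) == want;
    return 0;                                   /* no grant, no access */
}

int main(void)
{
    struct file_entry f = { "payroll.dat", "alice", { { "bob", ACC_READ } }, 1 };
    printf("bob read:  %d\n", may_access(&f, "bob", ACC_READ));
    printf("bob write: %d\n", may_access(&f, "bob", ACC_WRITE));
    return 0;
}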

From these examples, we can see that most of the basic concepts in computer system design were known enough to go into an academic textbook in 1970 (I suppose this is another instance of “the 1970 rule”, albeit with less emphasis on the work of IBM than usual).

Some things have changed

There are also examples of techniques and concepts that have been removed from the mainstream of computing. For example, it was not at all a given that the same program could be run by multiple users at the same time on the same computer. Such code is called reentrant code or pure procedures, and it has to be written without then-common techniques like self-modifying code or storing modifiable local variables within the code of the function itself.
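A small example of the difference in modern C terms (my illustration, not from the book): the first routine keeps its working variable in one static location shared by all callers, so two users running it at the same time would interfere with each other, while the second is a pure procedure with all its state on the caller's stack.

#include <stdio.h>

/* Non-reentrant: a single shared copy of 'total' for every caller. */
static int sum_shared(const int *v, int n)
{
    static int total;                 /* shared, survives between calls */
    total = 0;
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}

/* Reentrant ("pure procedure"): all state in registers / on the stack. */
static int sum_pure(const int *v, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}

int main(void)
{
    int data[] = { 1, 2, 3 };
    printf("%d %d\n", sum_shared(data, 3), sum_pure(data, 3));
    return 0;
}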

The user interface considerations are rather primitive. It is considered pretty nice if a user is able to send in multiple-character commands from their teletype terminal to the remote system. When describing the implementation of such commands, the idea is basically that they are a way to call a function.

The command system collects a string of characters forming a command, decodes the command, and transfers to the routine for handling the decoded command. Any arguments for the command are passed to the routine also. The routine to handle the command may handle the command directly or create a process to perform the function.

Basically what you would expect from a real-time operating system like VxWorks. This was before Unix introduced the idea of commands as programs loaded from the file system.
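Here is a minimal sketch of such a command loop (my own, with made-up LIST and LOGOUT commands): collect a line, decode the verb against a table, and transfer to the handling routine with the remaining text as its argument.

/* A tiny command decoder in the spirit of the quoted description. */
#include <stdio.h>
#include <string.h>

static void cmd_list(const char *args)   { printf("listing %s\n", args); }
static void cmd_logout(const char *args) { (void)args; printf("bye\n");  }

struct command {
    const char *verb;
    void      (*handler)(const char *args);
};

static const struct command table[] = {
    { "LIST",   cmd_list   },
    { "LOGOUT", cmd_logout },
};

static void dispatch(char *line)
{
    char *verb = strtok(line, " \n");      /* decode the command verb   */
    char *args = strtok(NULL, "\n");       /* the rest is its arguments */
    if (!verb)
        return;
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].verb, verb) == 0) {
            table[i].handler(args ? args : "");   /* call the routine */
            return;
        }
    }
    printf("?%s\n", verb);                 /* unknown command */
}

int main(void)
{
    char line[] = "LIST MYFILE";
    dispatch(line);
    return 0;
}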

Security?

Maybe the most significant change in computing since the 1970s has come in the area of computer security. Back then, while computer security was being studied, it was not the most important problem for a typical system. The book never really discusses how to protect systems from attacks. The users were basically trusted, and there was no Internet over which arbitrary remote adversaries could reach the computer.

Still, something needs to be done to make sure that only the right users use the machine:

Because the remote terminals are anonymous…the user must present proper authentication. This identification is usually an account code and project or other name… System-resource use is metered during the session for later billing to the user’s account.

Nothing is said about how to handle passwords, how to store passwords in a way that cannot be reversed, multi-factor authentication, etc.

Note that the hardware and software systems of 1970 definitely used some protection mechanisms – user and supervisor modes were known concepts, for instance. The book mentions techniques like execute-only pages used to prevent user code from modifying operating-system code. It brings up the famously complex scheme employed in Multics, using multiple rings of different privileges. Access-right management via capabilities, even for memory, is also mentioned.
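A tiny sketch of the execute-only idea (mine, not the book's implementation): each page descriptor carries protection bits, and a user-mode access is checked against them, so a page holding operating-system code can be executable but neither readable nor writable by user programs.

/* Per-page protection bits checked on user-mode accesses. */
#include <stdio.h>

enum { P_READ = 1, P_WRITE = 2, P_EXEC = 4 };

struct page_desc {
    unsigned frame;      /* where the page sits in main memory       */
    unsigned prot;       /* allowed user-mode accesses (P_* bitmask) */
};

/* Returns 1 if a user-mode access of the given kind is permitted. */
static int access_ok(const struct page_desc *p, unsigned kind)
{
    return (p->prot & kind) == kind;
}

int main(void)
{
    struct page_desc os_code = { 42, P_EXEC };   /* execute-only page */
    printf("user exec:  %d\n", access_ok(&os_code, P_EXEC));
    printf("user write: %d\n", access_ok(&os_code, P_WRITE));
    return 0;
}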

When discussing files and file systems, consideration is given to allowing users to control and protect access to files. The system is not assumed to be totally open and all users trusted with everything.

Quality!

The writing in the book is of very high quality – the wording is precise, the sentences well-crafted, and it feels like the author and editors put in a lot of effort to polish the product. Creating a book in 1970 took a lot more effort in general than it does today, and this is probably reflected in the quality of the result. Just guessing.

The illustrations are really nice too. In some old papers and books, the pictures can be pretty atrocious, since drawing and reproducing them was honestly quite difficult. Here, they are crisp and clear, and must have been drawn using some kind of computer.

