Some more thoughts on how to program multicore machines that did not make it into my original posting from last week. Some of this was discussed at the multicore day, and other parts are things I have been thinking about for some time now.
One of the best ways to handle any hard problem is to make it “somebody else’s problem”. In computer science this is also known as abstraction, and it is a very useful principle for designing more productive programming languages and environments. Basically, the idea I am after is to let the programmer focus on the problem at hand, leaving somebody else to fill in the details and map the solution onto the execution substrate.
This has been the eternal striving of programming language designers. We created FORTRAN to get out of assembly language, we added object orientation to reduce the amount of explicit data juggling, and we designed PROLOG so that backtracking search would be automatic. We have SDL and UML to generate code from graphical descriptions, and the model-driven architecture trend in automotive software. The aim is always to remove implementation details from the solution description, making it possible to tackle bigger problems with less work.
In the particular case of multicore and parallel computers, we have languages like Erlang that let you express a threaded parallel program without dealing with locks, shared memory, or other quicksand. Tim Mattson called the idea behind Erlang “shared nothing”: there is no shared mutable state, only explicit message passing between threads. This style of programming is eminently suitable for certain classes of problems. It is probably hard to code a high-performance game engine or a database in Erlang, but a control system or a telecom switch is natural. The key property is that an Erlang program is normally massively threaded by its very design, since threads are to Erlang what objects are to C++, Java, and their ilk.
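To make the “shared nothing” idea concrete, here is a minimal sketch in Python (not Erlang) of the same style: a worker thread that owns no shared mutable state and communicates only through message queues, playing the role of Erlang mailboxes. The worker and its doubling task are my own invented example, not anything from Erlang itself.

```python
import threading
import queue

def worker(mailbox, results):
    """A 'process' in the shared-nothing style: it owns no shared state,
    only reads messages from its mailbox and replies via another queue."""
    while True:
        msg = mailbox.get()
        if msg is None:          # sentinel message: shut down
            break
        results.put(msg * 2)     # reply by message, not by shared memory

mailbox = queue.Queue()
results = queue.Queue()
t = threading.Thread(target=worker, args=(mailbox, results))
t.start()

for n in (1, 2, 3):
    mailbox.put(n)               # all communication is an explicit message
mailbox.put(None)
t.join()

replies = [results.get() for _ in range(3)]
print(replies)                   # [2, 4, 6]
```

Because no state is shared, there is nothing to lock; the only coordination points are the queues, which is exactly what a smart run-time system can schedule for you.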
What happens here is that the problem of how to schedule and coordinate the threads is handed off to the Erlang run-time system. This is a huge win, since that run-time system can be created once by a small group of people, thoroughly debugged thanks to its comparatively limited size, and then used by all Erlang systems all over the world, without changing the existing code. That is fairly amazing if you think about it. But Erlang is not really alone in applying the “smart runtime” principle to parallel problems.
The X10 language embodies the same idea, albeit with more features in the language itself: put most of the hairy stuff in a run-time system the programmer does not have to care about.
Game engines allow scripting the logic and tactics of individual bots or other non-player entities in a sequential fashion. This translates easily to a parallel program, since there are usually many such non-player entities in play at any particular time. The various entities can look at the world to make decisions, but that is easily done as read-only access, achieving the principle of no shared mutable state. The actions to be carried out can be queued and then managed in a coherent manner by the run-time engine. The engine itself is surely a hairy piece of work, but once it is done, the majority of the game design, coding, and tweaking can proceed without much concern for the parallel execution of the code.
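A toy Python sketch of this pattern, assuming a deliberately simplified “engine”: each bot script gets a read-only snapshot of the world and queues an action, and the engine later applies all queued actions in one coherent place. The world model, bot names, and move logic are all hypothetical.

```python
from queue import Queue

# Hypothetical mini engine: entities read a snapshot of the world,
# queue actions, and the engine applies them serially afterwards.
world = {"player_pos": 5}
actions = Queue()

def bot_script(name, pos, world_snapshot):
    """Sequential bot logic: read-only look at the world, then queue a move."""
    if pos < world_snapshot["player_pos"]:
        actions.put((name, "move", +1))   # chase the player
    else:
        actions.put((name, "move", -1))

bots = {"a": 2, "b": 9}
# Each script could run on its own core; none of them mutates shared state.
for name, pos in bots.items():
    bot_script(name, pos, dict(world))    # pass a copy as the snapshot

# The engine drains the queue and applies all actions in a coherent manner.
while not actions.empty():
    name, verb, delta = actions.get()
    bots[name] += delta

print(bots)   # {'a': 3, 'b': 8}
```

The scripts never write to `world` or to each other's state, so they can run in any order, or all at once, without races; only the action-applying step needs to be serialized.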
Another example is dataflow programs, which are natural in domains such as control and streaming data processing. The compiler and run-time system have enough knowledge and sufficiently few constraints to schedule individual small computation pieces on one or more processors, and to stage the communications between them. Just like Erlang or game engines, the key issue here is the absence of shared mutable data.
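As a sketch of the dataflow idea, assuming a very small pipeline of my own invention: each stage is a tiny computation connected to the next only by a queue, so a scheduler is free to place the stages on different cores without any locking of shared data.

```python
import threading
import queue

def stage(func, inbox, outbox):
    """One dataflow node: apply func to each item, pass results downstream."""
    while True:
        item = inbox.get()
        if item is None:        # propagate end-of-stream downstream
            outbox.put(None)
            break
        outbox.put(func(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x * 10, q2, q3)),
]
for t in threads:
    t.start()

for x in (1, 2, 3):
    q1.put(x)                   # feed the pipeline
q1.put(None)

out = []
while (item := q3.get()) is not None:
    out.append(item)
for t in threads:
    t.join()

print(out)                      # [20, 30, 40]
```

Each stage sees only its own queues, so the graph of stages and edges is all a run-time system needs in order to map the program onto however many cores happen to be available.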
I really think that this is the right way to tackle the era of multicore. Most problems can probably be expressed in a manner that is naturally parallel without involving explicit manipulation of shared data, and these programs can be parallelized using smart compilers and run-times rather than programmers’ blood, sweat, and tears.