Mark Nelson’s Multicore Non-Panic and Embedded Systems

I just found an interesting article from last summer about the actual non-imminence of the end of the computing world as we know it due to multicore. Written by Mark Nelson, the article makes some relevant and mostly correct claims, as long as we keep to the desktop land that he knows best. So here is a look at these claims in the context of embedded systems.

1. On the desktop, current few-way multicore solutions do seem to give immediate benefit thanks to the vast number of threads executing as background tasks and similar in a modern Windows installation.

2. A few more cores will be gobbled up by eye-candy work as Linux, Windows, and OS X keep fighting over which OS looks the best. And this means lots of easily parallelized threads.

3. Long-term, things look bleaker. How do we make use of a 32-way or even 128-way general-purpose machine?

In the embedded systems that I know and love, claim 1 certainly holds up in many cases. Control-plane applications in core network and telecom systems do feature piles of threads today, and can quite easily be scaled out onto a few cores using SMP. ARM has been advocating much the same for the mobile-phone workloads that today run on single ARM cores. Using an ARM multicore will work fine up to four cores, since there are ample threads to go around inside a modern phone. All you need is an SMP-capable OS, and that seems to be finally happening, with the last big RTOSes announcing SMP versions this fall.

Note that there is a different way of using initial multicores in the embedded world: consolidating what used to be several processors onto a single chip. Basically, a dualcore processor becomes a natural replacement for two singlecore processors, pretty much running the same workload. This scenario uses two (or more) different operating systems in AMP mode. In this way, it is quite likely that quite a few systems can take advantage of 2-, 3-, 4-, and maybe even 8-way systems without much work.

Claim 2 makes no sense in the embedded field. At least not in the sense that “your platform software provider will add end-user benefits that eat up more CPU, without requiring you to update your own code”. Maybe you could claim this for mobile phones, but mainstream mobile phone OSes like Symbian have not exactly been aggressive on this front. People don’t seem to be looking for eye candy of that kind in phones — currently at least (update: see the comments on this; the iPhone could be changing this tenet).

Claim 3 is applicable. At least for control-oriented applications that run on general-purpose shared-memory machines. Unless you count on a continuation of the consolidation trend: imagine a system where you combine more and more boards from a current rack onto a single chip, or add “more boards” by adding in more AMP operating system instances. It makes sense, since in many cases the actual applications feature ample parallelism that today is exploited by using multiple boards or discrete processing units working in close cooperation to handle the volumes of work present.

For media and radio-interface applications, it is quite easy to use “any” amount of parallelism. But that is more similar to the GPUs used in current PCs than to the main processor(s) being discussed here.

Long-term, PC/desktop/server computing and embedded computing do share the challenge of using many cores effectively. But the advantage of embedded computing is that most application domains are effectively parallel by nature, and “all” you have to do is find a way to move that parallelism onto a single chip.

His final statement is that:

Our industry press thrives on a good crisis. The switch to multicore processors has presented the brain trust with the opportunity to drum up a convincing one, and they haven’t let us down. Just try to take it with a grain of salt. The crises we’ve had in the past have mostly been resolved with boring, step-wise evolution, and this one will be no different. Maybe 15 or 20 years from now we’ll be writing code in some new transaction based language that spreads a program effortlessly across hundreds of cores. Or, more likely, we’ll still be writing code in C++, Java, and .Net, and we’ll have clever tools that accomplish the same result.

I think he is right about this, and that the end result will be a set of fairly ugly domain-specific frameworks that make parallel programming reasonably easy. Just like GUI coding frameworks popped up when GUIs were new, relieving you of the tediousness of writing all the plumbing code. But it took a few years to nail down what was to go into such frameworks and how they should be structured, and it will likely take a few years this time, too.

2 thoughts on “Mark Nelson’s Multicore Non-Panic and Embedded Systems”

  1. About claim 2 for embedded systems: I would actually say that this *does* apply for mobile phones. If anything, I think the hype around the iPhone testifies to that.

  2. The iPhone is interesting, you are right about that. It could actually be time for a revolution in eye-candy for these machines. And that could neatly use up a core or two in an ARM MP setup. So let’s give the phones some free cores.
