Schirrmeister has a nice observation on model-based development

Model-driven architecture (MDA), or model-based development, is an idea that to me comes from the automotive field. To me, it means that you use a tool capable of modeling both a computer control system and the environment being controlled, creating a simulation world where computer control and environment meet and where the characteristics of the controller can be ascertained quickly. The key is to not have to convert controller algorithms to concrete code, and to not have to run concrete code on concrete hardware against physical prototypes to test the controllers. Today, this seems to be applied in many fields where control systems are being created (automotive, aviation, robotics). The tools are math-based, like Matlab and LabVIEW, along with special programming environments based on UML and Statecharts.

What is interesting is that most of these tools are graphical in nature. And they do seem to work quite well, which is quite surprising given the otherwise poor record of graphical programming as opposed to text-based programming. There was a pile of graphical programming environments in the 1980s, none of which amounted to much. What survived and prospered were the good old text-based languages like C, C++, Java, Visual Basic, etc. In practice, it seems very hard to beat sequential text when it is time to actually get code working. More efficient programming seems to boil down to having to write less text and having text that is easier to write (for example, dynamic typing, rich libraries, garbage collection, and other modern language features that remove intellectual burdens from the programmer).

But graphics do seem to work for domain-specific cases (like control engineering or signal processing), especially for data-flow-style problems. And for abstract architecture work. So there has to be something to it… but what?

Continue reading “Schirrmeister has a nice observation on model-based development”

What’s the Obsession with C in EDA?

In early July, Cadence announced their new “C2S” C-to-silicon compiler. This event was marked with some excitement and blogging in the EDA space (SCDSource, EDN-Wilson, CDM-Martin, to give some links for more reading). At its core, I agree that what they are doing is fairly cool — taking an essentially hardware-unrelated sequential program in C and creating hardware from it. The kind of heavy technology that I have come to admire in the EDA space.
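To make the idea concrete, here is a hypothetical sketch (my own illustration, not anything taken from the Cadence tool or its documentation) of the kind of code such a flow starts from: plain sequential C with no notion of clocks, ports, or parallelism, from which the tool has to derive a datapath, a schedule, and pipelining on its own.

```cpp
// Hypothetical input to a C-to-silicon / high-level synthesis flow.
// Nothing in this code says "hardware": no clocks, no ports, no explicit
// concurrency. The tool decides how to unroll, pipeline, and share resources.
#define TAPS 8

int fir(const int coeff[TAPS], const int sample[TAPS]) {
    int acc = 0;
    for (int i = 0; i < TAPS; i++) {
        acc += coeff[i] * sample[i];   // candidate for unrolling into parallel MACs
    }
    return acc;
}
```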

But I have to ask: why start with C?

Continue reading “What’s the Obsession with C in EDA?”

Is SoC (was: ESL) all there is to virtual platforms?

SystemC TLM-2.0 has just been released, and on the heels of that everyone in the EDA world is announcing support in various forms: TLM-2.0-compliant models, tools that can run TLM-2.0 models, and existing modeling frameworks that are being updated to comply with the TLM-2.0 standard. All of this feeds a general feeling that the so-called Electronic System Level (ESL) design market (according to Frank Schirrmeister of Synopsys, the term was coined by Gary Smith) is finally reaching a level of maturity where there is hope of growing the market through standards. This is something that has to happen, but it seems to be getting hijacked by a certain part of the market addressing the needs of a certain set of users.
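For readers who have not looked at the standard yet, here is a minimal sketch (a fragment, not a complete platform, and assuming the standard tlm_utils convenience sockets) of what a blocking TLM-2.0 transaction looks like from the initiator side; the point of the standard is that any compliant target model can sit at the other end of the socket.

```cpp
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

// Sketch of a TLM-2.0 initiator issuing a single blocking write transaction.
struct Initiator : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Initiator> socket;

    SC_CTOR(Initiator) : socket("socket") {
        SC_THREAD(run);
    }

    void run() {
        unsigned char data[4] = {0xde, 0xad, 0xbe, 0xef};
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x1000);
        trans.set_data_ptr(data);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

        socket->b_transport(trans, delay);  // blocking call into whatever target is bound
    }
};
```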

There is more to virtual platforms than ESL. Much more. Remember the pure software people.

Edit: Maybe it is more correct to say “there is more to virtual platforms than SoC”, as several very smart comments on this post have pointed out. ESL is not necessarily tied to SoC; it is, in theory at least, a broader term. But currently, most tools retain an SoC focus.

Continue reading “Is SoC (was: ESL) all there is to virtual platforms?”

Real-time control when cores become free

A very interesting idea that has been bandied around for a while in manycore land is the notion that in the future, we will see a total inversion of today’s cost intuition for computers. Today, we are all versed in the idea that processor cores and cycles are quite precious, while memory is free. For best performance, you need to care about the cache system, but in the end, the goal is to keep those processor pipelines as busy as possible. Processors have traditionally been the most expensive part of a system, and ideas such as Integrated Modular Avionics were invented to make the best use of a resource perceived as rare and expensive…

But is that really always going to be true? Is it reasonable to think of CPU cores as being free but other resources as expensive? And what happens to program and system design then?
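One possible answer, as a hypothetical sketch only (read_sensor and set_actuator are made-up placeholders standing in for real I/O): if cores really are free, a hard real-time control task can simply own a core outright and busy-wait on its input, trading a “wasted” core for latency that is bounded by the loop body rather than by scheduler jitter.

```cpp
#include <thread>
#include <atomic>
#include <cstdio>

// Made-up placeholders standing in for memory-mapped sensor/actuator I/O.
static int sensor_value = 0;
int  read_sensor()       { return ++sensor_value; }
void set_actuator(int v) { (void)v; }

std::atomic<bool> running{true};

// The control task spins on a core of its own: no scheduler, no sharing.
void control_loop() {
    while (running.load(std::memory_order_relaxed)) {
        int input = read_sensor();
        set_actuator(input * 2);   // trivial stand-in for the control law
    }
}

int main() {
    std::thread dedicated(control_loop);   // in a real system, pin this to a core
    // ... the rest of the system runs on the remaining (cheap) cores ...
    running = false;
    dedicated.join();
    std::printf("loop iterations: %d\n", sensor_value);
}
```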

Continue reading “Real-time control when cores become free”

Heterogeneous vs homogeneous systems, revisited

I got another email from my friend with the thesis that processors will become ever more homogeneous as time goes on, while I believe in a relative heterogenization (is that a word?) of computer architecture, with many special-purpose accelerators and helper processors. This argument is put forward in a previous blog post. In this round, the arguments for homogenization come from the gaming world.

Continue reading “Heterogeneous vs homogeneous systems, revisited”

Dekker’s Algorithm Does not Work, as Expected

Sometimes it is very reassuring that certain things do not work when tested in practice, especially when you have been telling people that for a long time. In my talks about Debugging Multicore Systems at the Embedded Systems Conference Silicon Valley in 2006 and 2007, I had a fairly long discussion about relaxed or weak memory consistency models and their effect on parallel software when run on a truly concurrent machine. I used Dekker’s Algorithm as an example of code that works just fine on a single-processor machine with a multitasking operating system, but that fails to work on a dual-processor machine. Over Christmas, I finally did a practical test of just how easy it was to make it fail in reality, which turned out to showcase some interesting properties of various types and brands of hardware and software.
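For reference, here is a sketch (my reconstruction for illustration, not the exact test code) of the naive Dekker entry protocol written with plain volatile variables in C++. On hardware that can reorder the store to a thread’s own flag with the load of the other thread’s flag, which even x86 does without a fence, both threads can enter the critical section at once and updates get lost.

```cpp
#include <thread>
#include <cstdio>

// Naive Dekker's algorithm: intentionally no fences or atomics, so it can fail
// on any machine that reorders the flag store with the subsequent flag load.
volatile int flag[2] = {0, 0};  // flag[i] == 1: thread i wants to enter
volatile int turn = 0;          // which thread backs off on contention
int shared_counter = 0;         // "protected" by the protocol

void worker(int self) {
    int other = 1 - self;
    for (int i = 0; i < 100000; i++) {
        flag[self] = 1;
        while (flag[other]) {
            if (turn != self) {
                flag[self] = 0;
                while (turn != self) { /* busy-wait */ }
                flag[self] = 1;
            }
        }
        shared_counter++;        // critical section
        turn = other;
        flag[self] = 0;
    }
}

int main() {
    std::thread t0(worker, 0), t1(worker, 1);
    t0.join(); t1.join();
    // With working mutual exclusion this prints 200000; anything less means failure.
    std::printf("counter = %d (expected 200000)\n", shared_counter);
}
```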

Continue reading “Dekker’s Algorithm Does not Work, as Expected”

Book Review: Intel’s Multicore Programming Book

The book “Multicore Programming – Increasing Performance through Software Multithreading” by Shameem Akhter and Jason Roberts is part of a series of books put out by Intel in their multicore software push. In case you have not noticed, Intel currently has a huge market push where they give seminars, publish articles and books, and provide curricula to universities in order to get more parallel software in place. I read this book recently, and here is a short review.
Continue reading “Book Review: Intel’s Multicore Programming Book”

When Multicore makes Things Simpler, like IMA

Most of the time when talking about the impact of multicore processing on software, we complain that it makes the software more complicated because it has to cope with the additional complexities of parallelism. There are some cases, however, when moving to multicore hardware allows a software structure to be simplified. The case of Integrated Modular Avionics (IMA) and the honestly idiotic design of the ARINC 653 standard is one such case.
Continue reading “When Multicore makes Things Simpler, like IMA”

Øredev 2007

Just like in 2006, I went to the Øredev conference in Malmö and presented a workshop using Virtutech Simics. This year, I worked with Jonas Svennebring from Freescale, and we created a workshop around parallelizing network processing software to run on a multicore Freescale processor. The workshop went reasonably well, and the participants definitely learned something about what we were trying to get across, even though we did not have much time to actually complete the programming assignments.

Continue reading “Øredev 2007”

Homogeneous and Heterogeneous Multicore vs Programmers

An old colleague just sent me an email bringing up a discussion we had last year, where he was a strong proponent for the homogeneous model of a multiprocessor. The root of that discussion was the difference between the Xbox 360 and Playstation 3 processors. The Xbox 360 has a three-core, two-threads-per-core homogeneous PowerPC main processor called the Xenon (plus a graphics processor, obviously), while the PS3 has a Cell processor with a single two-threaded PowerPC core and seven SPEs, Synergistic Processing Elements (basically DSP-like SIMD machines).

In the game business, it is clear that the Xenon CPU is considered easier to code for. This means that even though the Cell processor clearly has higher theoretical raw performance, in practice the two machines are about equal in power, since it is harder to make use of the Cell. That seems to be a fact.

So here, homogeneous systems do appear to have it easier with programmers. However, I do not believe that this extends to all systems, all the time, everywhere.

Continue reading “Homogeneous and Heterogeneous Multicore vs Programmers”

SICS Multicore Day 2007 – More on Programming

Some more thoughts on how to program multicore machines that did not make it into my original posting from last week. Some of this was discussed at the multicore day, and other parts I have been thinking about for some time now.

One of the best ways to handle any hard problem is to make it “somebody else’s problem”. In computer science this is also known as abstraction, and it is a very useful principle for designing more productive programming languages and environments. Basically, the idea I am after is to let a programmer focus on the problem at hand, leaving somebody else to fill in the details and map the problem solution onto the execution substrate.
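As one concrete illustration of the idea (my own example, not something presented at the multicore day): with OpenMP, the programmer only asserts that the loop iterations are independent, and leaves the scheduling, the thread count, and the mapping onto cores to the compiler and runtime.

```cpp
#include <vector>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    std::vector<double> in(n, 1.0), out(n);

    // The pragma is the programmer's whole contribution to parallelization;
    // how the iterations are split across cores is somebody else's problem.
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        out[i] = in[i] * 2.0 + 1.0;
    }

    std::printf("out[0] = %f\n", out[0]);   // build with -fopenmp (or equivalent)
    return 0;
}
```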

Continue reading “SICS Multicore Day 2007 – More on Programming”