Article in CPCI and ATCA Systems on Virtual System Development

The article/editorial “Using virtual platforms to improve AdvancedTCA software development practice” is now up at CompactPCI and AdvancedTCA Systems, an online and print journal for the rack-based market. It is about our experience at Virtutech in using virtual platforms to drive system and software development for “pretty large” target systems, even those based on standard hardware.

And really, there is no such thing as a standard embedded system. Even if you use a standard backplane and buy off-the-shelf boards and cards to put in it, the combination of cards and added mezzanine cards makes each system quite unique. If you could use completely standard PC hardware for your system, with no custom additions or special IO units, the thing would in all likelihood not actually be an embedded system.

Continue reading “Article in CPCI and ATCA Systems on Virtual System Development”

What is Efficiency when Cores are Free?

More from the SICS multicore days 2008.

There were some interesting comments on how to define efficiency in a world of plentiful cores. The theme from my previous blog post, “Real-Time Control when Cores Become Free”, came up several times during the talks, panels, and discussions. It seems that this year, everybody agreed that we are heading toward 100s or 1000s of “self-respecting” cores on a single chip, and that with that kind of core count, it is not too important to keep them all busy at all times at any cost. As I stated earlier, cores and instructions are now free, while other aspects are the limiting factors, turning the classic optimization imperatives of computing on their heads. Operating systems will become more about space-sharing than time-sharing, and it might make sense to dedicate processing cores to the sole job of impersonating peripheral units or doing polling work. Operating systems can also be simplified when the job of time-sharing is taken away, even if communications and resource management may well bring in some interesting new issues.
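To make the space-sharing idea a little more concrete, here is a minimal C sketch (my own illustration, not something from the talks) of a core dedicated to impersonating a peripheral. The mailbox layout and the handler are hypothetical; the point is that the core does nothing but poll, which only makes sense when cores are plentiful:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical shared mailbox: an application core posts requests,
   and a core dedicated as a "software peripheral" polls for them. */
struct mailbox {
    _Atomic uint32_t cmd;   /* 0 = empty, nonzero = command code */
    uint32_t arg;           /* command argument                  */
    _Atomic uint32_t done;  /* set to 1 when the command is done */
};

/* Stub standing in for the actual device-emulation logic. */
static void handle_device_command(uint32_t cmd, uint32_t arg)
{
    (void)cmd; (void)arg;   /* perform the device's side effects */
}

/* Entry point for the dedicated core. No interrupts, no scheduler,
   no time-sharing: the core's only job is to spin and serve. */
void device_core_main(struct mailbox *mb)
{
    for (;;) {
        uint32_t cmd = atomic_load_explicit(&mb->cmd, memory_order_acquire);
        if (cmd == 0)
            continue;       /* busy-wait: the core is "free" anyway */
        handle_device_command(cmd, mb->arg);
        atomic_store_explicit(&mb->done, 1, memory_order_release);
        atomic_store_explicit(&mb->cmd, 0, memory_order_release);
    }
}
```

The busy-wait loop would be a mortal sin on a time-shared system, but with hundreds of cores on the die, burning one of them for predictable, interrupt-free device service may be a perfectly reasonable trade.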

So, what is efficiency in this kind of environment?

Continue reading “What is Efficiency when Cores are Free?”

What’s the Obsession with C in EDA?

In early July, Cadence announced their new “C2S” C-to-silicon compiler. This event was marked with some excitement and blogging in the EDA space (SCDSource, EDN-Wilson, CDM-Martin, to give some links for further reading). At its core, I agree that what they are doing is fairly cool — taking an essentially hardware-unrelated sequential program in C and creating hardware from it. It is the kind of heavy technology that I have come to admire in the EDA space.

But I have to ask: why start with C?

Continue reading “What’s the Obsession with C in EDA?”

EETimes Article on Multicore Debug

I have another short technical piece published, about multicore debug, at EETimes (and their network of related publications, like Embedded.com). It is a pretty short piece, and they cut out some bits to make it fit their format. Nothing new for fans of virtual platforms for software development: basically, we can use virtual platforms to reintroduce control over parallel and, for all practical purposes, chaotic hardware/software systems.

SCDSource Article: Combining Fast and Detailed Models

I have another opinion piece published over at SCDsource.com. The title, “Why virtual platforms need cycle-accurate models”, was their creation, not mine, and I think it is a little bit off from the main message of the piece. The follow-up discussion is also fairly interesting.

The key thing that I want to get across is that we need virtual platforms where we can spend most of our time executing in a fast, not-very-detailed mode to get the software somewhere interesting. Once we get to the interesting spot, we can switch to more detailed models to get detailed information about the software behavior, and especially its low-level timing. Getting to that point while running in detailed mode is impossible in practice, since it would simply take too much time.
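As a sketch of that workflow, assuming a hypothetical simulator-control API (the real Simics and GEMS interfaces look quite different), the positioning phase runs in fast mode and only the measurement window pays for detail:

```c
#include <stdint.h>

/* Hypothetical simulator-control API, stubbed so the sketch compiles.
   Real tools (Simics, GEMS, etc.) expose this quite differently. */
typedef enum { MODE_FAST_FUNCTIONAL, MODE_CYCLE_ACCURATE } sim_mode_t;

static sim_mode_t mode;
static void sim_set_mode(sim_mode_t m)             { mode = m; }
static void sim_run_until_breakpoint(uint64_t pc)  { (void)pc; }
static void sim_run_cycles(uint64_t n)             { (void)n; }
static uint64_t sim_read_stat(const char *counter) { (void)counter; return 0; }

uint64_t measure_hot_loop(uint64_t loop_entry_pc)
{
    /* Positioning: boot the OS and reach the interesting code in
       fast mode, collecting no timing detail along the way. */
    sim_set_mode(MODE_FAST_FUNCTIONAL);
    sim_run_until_breakpoint(loop_entry_pc);

    /* Measurement: switch to the detailed model and run a short
       window where caches and pipelines are actually simulated. */
    sim_set_mode(MODE_CYCLE_ACCURATE);
    sim_run_cycles(10000000);
    return sim_read_stat("l2.misses");
}
```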

This is something that computer architecture researchers have been doing for a very long time; just look at how toolsets like SimpleScalar, or Simics with the Wisconsin GEMS system, use a fast mode for “positioning” and more detailed execution for “measurement”. It is also now commercially available in the Simics Freescale QorIQ P4080 Hybrid virtual platform. Tensilica also has the ability to switch modes in its toolchain.

See an upcoming post for more on how to get at the cycle-accurate models – this was just to point out that the article is there, for symmetry with previous posts about my articles popping up in places.

Is SoC (was: ESL) all there is to virtual platforms?

SystemC TLM-2.0 has just been released, and on the heels of that, everyone in the EDA world is announcing various varieties of support: TLM-2.0-compliant models, tools that can run TLM-2.0 models, and existing modeling frameworks that are being updated to comply with the TLM-2.0 standard. All of this feeds a general feeling that the so-called Electronic System Level (ESL) design market (according to Frank Schirrmeister of Synopsys, the term was coined by Gary Smith) is finally reaching a level of maturity where there is hope of growing the market through standards. This is something that has to happen, but it seems to be getting hijacked by a certain part of the market, addressing the needs of a certain set of users.

There is more to virtual platforms than ESL. Much more. Remember the pure software people.

Edit: Maybe it is more correct to say “there is more to virtual platforms than SoC”, as several very smart comments on this post have said. ESL is not necessarily tied to SoC; it is, in theory at least, a broader term. But currently, most tools retain an SoC focus.

Continue reading “Is SoC (was: ESL) all there is to virtual platforms?”

Power Architecture Conference München 2008

On Tuesday next week, I will be presenting at the Power Architecture Conference (PAC) in München, Germany. The topics will be multicore debug using virtual hardware, and the new Simics Accelerator technology. Simics Accelerator in particular is pretty interesting technology.

It is a simple idea, using multiple host cores to run a virtual platform, with fairly amazing results. Now, using a single computer, we can run fairly incredible simulations that were the realm of pure fantasy just a few years ago. We also got a nice new little box to demonstrate it with, an eight-core Dell with 16 GB of RAM. With 64-bit Linux, this thing makes my Core 2 Duo laptop with 32-bit Vista look like yesteryear’s snail… and creates that giggling feeling that a really impressive new toy brings out in even the most grown-up boys. Booting a 16-machine network of PowerPC boards was so fast it was not demo-worthy. I think we have to up the ante to some 100 target machines to make it interesting, and I have no doubt that a combination of multithreading and idle-loop optimization will make that thing usefully interactive from the target command lines. There are many other wild things we could try on that demo box, once it gets back from the Power Architecture Conferences tour.
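The core idea can be sketched in a few lines of C (a conceptual illustration only, not how Simics Accelerator is actually implemented): partition the target system into cells that share no state, and give each cell its own host thread. The hard parts, such as keeping the cells synchronized in virtual time, are omitted here.

```c
#include <pthread.h>

#define NUM_CELLS 8  /* e.g., one cell per core on the eight-core box */

/* Each cell owns a disjoint set of target machines, so the cells can
   be simulated on separate host threads without fine-grained locking. */
struct sim_cell { int id; /* target machines, devices, memory, ... */ };

static void step_cell(struct sim_cell *cell)
{
    (void)cell;  /* advance this cell's targets by one time quantum */
}

static void *cell_thread(void *arg)
{
    struct sim_cell *cell = arg;
    for (int quantum = 0; quantum < 1000; quantum++)
        step_cell(cell);  /* barrier sync between quanta omitted */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_CELLS];
    struct sim_cell cells[NUM_CELLS];
    for (int i = 0; i < NUM_CELLS; i++) {
        cells[i].id = i;
        pthread_create(&threads[i], NULL, cell_thread, &cells[i]);
    }
    for (int i = 0; i < NUM_CELLS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```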

Real-time control when cores become free

A very interesting idea that has been bandied around for a while in manycore land is the notion that in the future, we will see a total inversion of today’s cost intuition for computers. Today, we are all versed in the idea that processor cores and processing time are quite precious, while memory is free. For best performance, you need to care about the cache system, but in the end, the goal is to keep those processor pipelines as busy as possible. Processors have traditionally been the most expensive part of a system, and ideas such as Integrated Modular Avionics were invented to make the best use of a resource perceived as rare and expensive…

But is that really always going to be true? Is it reasonable to think of CPU cores as being free, but other resources as expensive? And what happens to program and system design then?

Continue reading “Real-time control when cores become free”

Worst-Case Execution Time Survey Article in TECS

I just got another article published! In the April 2008 issue of the ACM Transactions on Embedded Computing Systems (TECS), we have an article called “The worst-case execution-time problem – overview of methods and survey of tools”. “We” is kind of an understatement: the article has fifteen authors from three continents, and presents an overview of the state of the field of WCET (Worst-Case Execution Time) analysis. The article was started back in 2005, submitted in 2006, accepted in January of 2007, and then finally appeared in 2008. It is probably my last shot in the WCET area, where I did my PhD thesis (please see my list of publications for an idea of what all of that is about).

You can find the article at the ACM portal, or in the MRTC publications database in Västerås.

ESC Silicon Valley 2008 — Report

Now ESC SV 2008 is over. I really enjoyed going to the show this year and presenting on simulation for embedded systems. The topic has to be heating up; I had some fifty people listen to the talk, which is really very good. I hope that they learnt how to build good transaction-level hardware models, and have some idea of how to apply this to their own projects. Hopefully, I can come back next year for ESC 2009 (update: this did not happen) and do it again (even though the recent travel trouble makes it a less attractive idea to fly back here right now…).

Continue reading “ESC Silicon Valley 2008 — Report”

When Multicore makes Things Simpler, like IMA

Most of the time when talking about the impact of multicore processing on software, we complain that it makes the software more complicated, because it has to cope with the additional complexities of parallelism. There are some cases, however, where moving to multicore hardware allows a software structure to be simplified. Integrated Modular Avionics (IMA) and the honestly idiotic design of the ARINC 653 standard is one such case.

Continue reading “When Multicore makes Things Simpler, like IMA”

Homogeneous and Heterogeneous Multicore vs Programmers

An old colleague just sent me an email bringing up a discussion we had last year, where he was a strong proponent of the homogeneous model of a multiprocessor. The root of that discussion was the difference between the Xbox 360 and Playstation 3 processors. The Xbox 360 has a three-core, two-threads-per-core homogeneous PowerPC main processor called the Xenon (plus a graphics processor, obviously), while the PS3 has a Cell processor with a single two-threaded PowerPC core and seven SPEs, Synergistic Processing Elements (basically DSP-like SIMD machines).

In the game business, it is clear that the Xenon CPU is considered easier to code for. This means that even though the Cell processor clearly has higher theoretical raw performance, in practice the two machines are about equal in power, since it is harder to make use of the Cell. That seems to be a fact.

So here, homogeneous systems do appear to be easier for programmers. However, I do not believe that this extends to all systems, all the time, everywhere.

Continue reading “Homogeneous and Heterogeneous Multicore vs Programmers”

Handbook of Real-Time and Embedded Systems

The “Handbook of Real-Time and Embedded Systems” (ToC, Amazon, CRC Press) is now out. My university research colleague and friend Andreas Ermedahl and I have written a chapter on worst-case execution time analysis. We talk some about the theories and techniques, but we try to discuss practical experience from actual industrial use. Static, dynamic, and hybrid techniques are all covered.

I just got my personal copy, and my first impression of the book overall is very positive. The contents seem quite practical to a large extent, not as academic as one might have feared. Do check it out if you are into the field. It is not a collection of research papers, but rather a set of instructive chapters informed by solid research and written with applications in mind.

AMP vs Virtualization

It just dawned on me recently (and it surely must have been obvious to those working with configuring AMP (Asymmetric Multiprocessing) systems) that in an AMP setup, the operating systems involved actually know about each other and have to account for the fact that they are sharing a single processor chip with other operating systems. So you cannot just take two single-core operating system images from an existing multiple-processor (local memory) solution, put them on a single chip, and expect things to just work. You do need to prepare the boot process and find a way to nicely share the common I/O devices, timers, accelerator engines, and other resources on the chip. This is materially different from a virtualized setup.
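As an illustration of what that preparation amounts to, here is a hypothetical, hand-rolled partition table (a real AMP configuration would use the board support package’s own format): both OS images are built against one agreed-upon static assignment of cores, memory, and devices.

```c
#include <stdint.h>

/* Hypothetical static AMP partition table, agreed on by both OS
   images at build time. Unlike in a virtualized setup, nothing
   enforces this split at run time: each OS must stay in its box. */
struct amp_partition {
    const char *os_name;
    uint32_t core_mask;      /* cores this image may run on       */
    uintptr_t ram_base;      /* private RAM window                */
    uintptr_t ram_size;
    int uart_id;             /* which of the chip's UARTs it owns */
    int owns_ethernet;       /* only one image may own the NIC    */
};

static const struct amp_partition partitions[] = {
    { "linux_control",  0x3, 0x00000000, 0x20000000, 0, 1 },
    { "rtos_dataplane", 0xc, 0x20000000, 0x20000000, 1, 0 },
};

/* A shared-memory region both images know about, since AMP operating
   systems must explicitly cooperate in order to communicate. */
#define AMP_SHMEM_BASE 0x40000000u
#define AMP_SHMEM_SIZE 0x00100000u
```

Note that nothing enforces this split at run time; that enforcement, along with the illusion of owning the whole chip, is exactly what a virtualized setup adds.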

Continue reading “AMP vs Virtualization”

SICS Multicore Day August 31

The SICS Multicore Day August 31 was a really great event! We had some fantastic speakers presenting the latest industry research view on multicores and how to program them. Marc Tremblay did the first presentation in Europe of Sun’s upcoming Rock processor. Tim Mattson from Intel tried hard to provoke the crowd, and Vijay Saraswat of IBM presented their X10 language. Erik Hagersten from Uppsala University provided a short scene-setting talk about how multicore is becoming the norm.

Continue reading “SICS Multicore Day August 31”