SystemC TLM-2.0 has just been released, and on the heels of that everyone in the EDA world is announcing some variety of support: TLM-2.0-compliant models, tools that can run TLM-2.0 models, and existing modeling frameworks being updated to comply with the new standard. All of this feeds a general feeling that the so-called Electronic System Level (ESL) design market (according to Frank Schirrmeister of Synopsys, the term was coined by Gary Smith) is finally reaching a level of maturity where there is hope of growing the market through standards. This has to happen, but it seems to be getting hijacked by one part of the market, addressing the needs of one particular set of users.
There is more to virtual platforms than ESL. Much more. Remember the pure software people.
Edit: Maybe it is more correct to say “there is more to virtual platforms than SoC”, as several very smart comments on this post have pointed out. ESL is not necessarily tied to SoC; in theory, at least, it is a broader term. But currently, most tools retain an SoC focus.
Reading the comments around TLM-2.0 and ESL (for example, from Frank Schirrmeister and Grant Martin), we only really get part of the picture.
The focus is still on hardware design, even if that hardware design now includes a whole lot of software (it is generally acknowledged that at 90nm and below, 50% or more of a modern SoC design effort is software). There is a flow from design to implementation, and a worry that models are “validated” or “cycle-accurate” against the hardware blocks they represent. This is all highly relevant and crucially important to commercial success if you are indeed constructing a new SoC from new IP blocks or reusing old ones.
But it misses a large piece in the form of the broader software development market. I am not talking about general PC and server programmers, who are not particularly likely to ever use a hardware-design-style virtual platform, but about everyone’s friend the embedded software programmer.
There is a very large body of work being performed in “pure software”, on systems built from standard chips bought from outside sources and put on customized boards (usually; sometimes even the boards are heavily standardized). High-value equipment manufacturers in telecom, datacom, military systems, automotive systems, etc. today mostly write software for standard chips that they buy from the SoC companies. And that is if they are using something that modern: discrete processors are still popular, as are boards similar to standard PCs. For these people, the idea of collecting models for IP blocks is silly. That is useful for the little FPGAs and ASICs used in various parts of the system as glue, or for truly value-critical parts. But most of the time, they want models of common parts that they have no part in designing.
Their problem is to “give me a model for an Intel Core 2 Duo on an i965 chipset with a Broadcom Ethernet Switch and a standard IDE FLASH disk”. Or “we have a board with a Freescale MPC8548 and some PPC755s and a custom ATM controller”. Or “we are using a rad-hard PowerPC from the 1990s on a standard space board”. Or “I want an ARM that can run Linux and talk over Ethernet to the real world”. Quite often, multiple such systems have to be combined into a network of networks and run many gigabytes of complex distributed and fault-tolerant software.
For such people, the software is the key value and the hardware just a necessary evil to run it. Sure, hardware is very, very important to end-system success: you have to have lots of performance, low power consumption, robustness in the field, etc. But it is something you build by combining entire chips or even subsystems, not something designed from IP blocks.
The main concern here is really execution speed, pure and simple. And secondarily, software debug and analysis support. Hardware design support is irrelevant. When developing and testing software, you need to be able to quickly rerun test cases and validate the software functionality. You want support for inspecting what the software is doing, and recording and replaying the system behavior so you can repeat bugs. Virtual platforms are immensely useful for software developers, if they can handle the volume of software that a modern system contains.
This means that different models and different tool styles are needed. Some example points where this is quite different from where I see the general ESL market going:
- Most of the time, users do not actually know or care about all the bits and pieces making up the hardware. They are only using a subset of the functionality of a subset of the blocks in the chips they have. Models can exploit this to get in place quickly, by basically being developed in a lazy manner (see the first sketch after this list).
- You need support for interconnects between chips and boards and systems, not just the internal memory bus on a chip. The chip is where the action begins, not the final goal of the exercise.
- You need absolute portability and well-defined semantics for models, irrespective of host type or architecture. A software use case can move from a Power Architecture host to x86 to SPARC, all on the same virtual platform.
- Models can just as well be created for very old hardware as for upcoming cool new hardware. Testing software on a system that contains hardware designed a decade ago or more is not unusual. Most large systems contain legacy hardware, and hardware that is still good enough for the job stays deployed in the field for a very long time after it was designed, manufactured, and sold.
- The best model writer is really an embedded software engineer, who can abstract to what software people are interested in; not someone with intimate hardware knowledge, as they tend to retain unnecessary details in the models.
- As long as models are fast enough to be useful, the language they are written in is irrelevant. You are going to be using the model, not developing it. Whether it is in C, C++, Java, some special-purpose modeling language, SystemC, Ada, assembler (heaven forbid), Fortran, Cobol, Lisp, Python, or Prolog does not matter one bit. As long as they can all be linked together in some form, all is nice and dandy. This is classic software thinking: use the language best suited for a particular task, and then combine all the pieces using some appropriate framework.
- The model has to work like the real hardware in terms of dynamism. Adding boards, removing boards, connecting and disconnecting networks, plugging in USB devices, and all other run-time hardware reconfiguration that is possible in the physical system. This is quite different from the fixed setup of a chip and maybe a board that you mostly see as the example in EDA-ESL land.
- The primary concern of large-scale software is correctness. Performance tuning is usually done by tweaking algorithms and balancing work across processors or nodes. But the main issue is getting very complex, large software stacks to work at all, not getting the last 10% of performance out of them. This implies that models do not need to think much about detailed timing correctness, as long as it is reasonable on average (you get a reasonable amount of work done between timer interrupts, for example). In any case, the virtual execution progress of the processors and system per virtual time unit can be tweaked to approximate the averages of the real system, or to provoke nasty cases that could occur in real hardware (see the second sketch after this list).
- Sometimes, for software development, you can get away with something “sufficiently similar” to the real thing. For example, the Google Android emulator is a pseudo piece of hardware that lets users cross-compile to ARM and write ARM code just like for a real phone. But the actual Linux BSP used will not run on any real hardware; it targets a pseudo-model created purely to support software development in this instance.
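To make the lazy-modeling point concrete, here is a rough sketch of what a lazily developed device model can look like. This is purely illustrative: the class name, register offsets, and behavior are invented for this example, not taken from any particular tool or chip. The idea is simply that registers the software stack actually touches get real behavior, while everything else logs the access and returns a harmless default until someone proves it is needed.

```cpp
#include <cstdint>
#include <cstdio>

// Sketch of a "lazy" device model: only the registers the software uses
// are implemented; unmodeled accesses are logged and given benign defaults.
// All names and offsets here are made up for illustration.
class LazyUart {
public:
    uint32_t read(uint32_t offset) {
        switch (offset) {
        case 0x00: return 0;          // data register: receive path stubbed out
        case 0x14: return 0x20;       // status register: always reports "TX empty"
        default:
            std::printf("LazyUart: unmodeled read  @ 0x%02x\n", offset);
            return 0;                 // benign default for everything else
        }
    }

    void write(uint32_t offset, uint32_t value) {
        switch (offset) {
        case 0x00:                    // data register: just print the transmitted byte
            std::putchar(static_cast<char>(value));
            break;
        default:
            std::printf("LazyUart: unmodeled write @ 0x%02x = 0x%x\n", offset, value);
            // accepted and ignored; real behavior is added only when software needs it
        }
    }
};
```

The log lines are the important part: they tell you exactly which corner of the hardware the software actually exercises, so the model can grow on demand instead of being complete up front.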
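And to illustrate the average-based timing point, a similarly hand-waved sketch of a processor model that executes a tunable instruction budget per quantum of virtual time. Again, the names and the MIPS figure are invented; the point is only that the timing knob lives in the model, where it can be set to match the averages of the real system or deliberately skewed to provoke nasty cases.

```cpp
#include <cstdint>

// Sketch of average-based timing: the CPU model runs a budget of
// instructions per quantum of virtual time instead of modeling cycles.
// Names and numbers are illustrative, not from any real simulator.
struct SimpleCpuModel {
    uint64_t virtual_time_ns = 0;
    uint64_t mips = 500;   // assumed average throughput, millions of instructions/s

    // Run one quantum of virtual time; returns the number of instructions retired.
    uint64_t run_quantum(uint64_t quantum_ns) {
        uint64_t budget = mips * quantum_ns / 1000;  // instructions in this quantum
        for (uint64_t i = 0; i < budget; ++i) {
            step_one_instruction();                  // purely functional execution
        }
        virtual_time_ns += quantum_ns;               // advance time in one coarse chunk
        return budget;
    }

    void step_one_instruction() { /* fetch, decode, execute; no cycle modeling */ }
};
```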
This is not to say that what is being done with SystemC and ESL is bad; it is very valuable and a clear step forward. But there is also a huge, different problem set in software development that virtual platforms can be a solution for. Solving this problem too requires scaling up, scaling out, and radical abstraction thinking in the design and use of virtual platforms. ESL is just one piece of the broader virtual platform usage picture.
Updated the above with a couple more bullet points.
Jakob
I think a better refinement to your note is that “there is more to Virtual Platforms than SoC (System-on-Chip)”. Because what you describe is a class of design that is STILL System-Level, and is still subsumed by the term “ESL”. If you read “ESL Design and Verification” by Bailey, Martin and Piziali (Elsevier 2007) you will see that ESL covers large systems of exactly the type and concerns you outline. However, you are right in that most of the “ESL tools” part of the EDA industry does not deal in the kinds of systems you are most concerned with – they are focused on SoCs. Virtutech is of course an exception! Perhaps as there is a wider understanding of the true scope of System Design, ESL and Virtual platforms, the scope of most of the commercial tools industry in ESL will broaden appropriately, or new members will arrive.
As someone who started in the computer industry and then worked for BNR/Nortel in telecommunications, I certainly agree with your concept that “there is a lot more to Systems, Horatio, than is dealt with in your SoC philosophy”!
I think you are absolutely right in making the SoC refinement to the note… and the frustration in the current wave of EDA/ESL tools is that they are still down at the chip level.
I think that the broadening is already happening. Companies like Virtutech that started out pretty far from SoC design (i.e., hardware design) are now nudging into the more detailed models arena, and companies traditionally in EDA are broadening the scope of their virtual platforms. SystemC is trying to be more appropriate for pure virtual platforms, and TLM-2.0 is the first step there.
Aloha!
(This is a bit of a personal rant I guess…)
Being EDA-conservative while trying to be on the edge, I agree with John Cooley that ESL is here. But SystemC is going out with a deafening silence. 😉
I don’t think the industry will run their shoes off to jump on the SystemC TLM-2.0 bandwagon. The reason for this is that everyone has already jumped onto SystemVerilog and other languages and tools.
We need to move up in abstraction while retaining control at the bottom. SystemC tries to provide both at the same time, and just like the spork and the screwhammer it is pretty lousy at both ends.
Jakob, Joachim:
I can only echo Grant’s comment that the correct refinement would be to replace ESL with SoC in your note. Over the last decade, it was really Virtutech and Synopsys/Virtio that focused on full systems, including the board peripherals. The other three players seem to be more SoC-focused.
On Joachim’s note that “everyone has already jumped onto SystemVerilog, and other languages and tools”: I guess the issue here is that the early adopters have been served with proprietary solutions, and further growth has stalled because the mainstream requires more standardization. All these other tools have been using proprietary mechanisms to stitch the models together; SystemC TLM-2.0 standardizes that and makes the models really interoperable while maintaining speed and flexibility. DMI brings you the fast memory accesses needed to keep processor models fast, and temporal decoupling lets components synchronize flexibly. So the bandwagon for SystemC TLM-2.0 is really around virtual platforms. The comparison with SystemVerilog is a moot point, as it is really great for hardware verification but not for modeling a virtual platform.
To Jakob’s note again: as long as the models themselves are fast enough, I do not care if they are done in Cobol…
I just updated the post to say “SoC” in the title. It is interesting how you can deal with this on the web — if this were a printed article a decade ago, we would have had some letters to the editor in the next issue, and then possibly a small note of discussion or correction in the issue after that.
Here, I could extensively rewrite the entire post to reflect the insights in the comments. But that also feels wrong, as it would destroy the original text that the comments were about. Losing the historical record, in essence. I am enough of an academic and journalist to feel that that is not right.
So the compromise is to add a clarifying note to the head of the article, modify the title some, and put in this long “change note”. I think we need to treat text as something slightly more holy than source code… the reading experience of an svn or cvs changelog is not the best.
/jakob