SystemC TLM-2.0 has just been released, and on the heels of that everyone in the EDA world is announcing various forms of support: TLM-2.0-compliant models, tools that can run TLM-2.0 models, and existing modeling frameworks being updated to comply with the TLM-2.0 standard. All of this feeds a general feeling that the so-called Electronic System Level (ESL) design market (according to Frank Schirrmeister of Synopsys, the term was coined by Gary Smith) is finally reaching a level of maturity where there is hope to grow the market through standards. This is something that has to happen, but it seems to be getting hijacked by a certain part of the market addressing the needs of a certain set of users.
There is more to virtual platforms than ESL. Much more. Remember the pure software people.
Edit: Maybe it is more correct to say “there is more to virtual platforms than SoC”, as several very smart comments on this post have said. ESL is not necessarily tied to SoC; it is, in theory at least, a broader term. But currently, most tools retain an SoC focus.
The focus is still on hardware design, even if that hardware design now includes a whole lot of software (it is generally acknowledged that 50% or more of a modern SoC design effort at 90nm and below is software). There is a flow from design to implementation, and a worry that models are “validated” or “cycle-accurate” versus the hardware blocks they represent. This is all highly relevant and crucially important to commercial success if you are indeed constructing a new SoC using new IP blocks or reusing old ones.
But it misses a large piece in the form of the broader software development market. I am not talking about general PC and server programmers who are not particularly likely to ever be using a hardware-design-style virtual platform, but everyone’s friend the embedded software programmer.
There is a very large body of work being performed in “pure software”, on systems built from standard chips bought from outside sources and put on customized boards (usually; sometimes even the boards are very heavily standardized). High-value equipment manufacturers in telecom, datacom, military systems, automotive systems, etc. are today mostly writing software to standard chips that they buy from the SoC companies, if they are using something that modern. Discrete processors are still popular, as are boards similar to standard PCs. For these people, the idea of collecting models for IP blocks is silly. That is useful for the little FPGAs and ASICs used in various parts of the system as glue or for truly value-critical parts. But most of the time, they want models for common parts that they have no part in designing.
Their problem is to “give me a model for an Intel Core 2 Duo on an i965 chipset with a Broadcom Ethernet Switch and a standard IDE FLASH disk”. Or “we have a board with a Freescale MPC8548 and some PPC755s and a custom ATM controller”. Or “we are using a rad-hard PowerPC from the 1990s on a standard space board”. Or “I want an ARM that can run Linux and talk over Ethernet to the real world”. Quite often, multiple such systems have to be combined into a network of networks and run many gigabytes of complex distributed and fault-tolerant software.
For such people, the software is the key value and the hardware just a necessary evil to run it. Sure, hardware is very very important to the end-system success: you have to have lots of performance, low power consumption, robustness in the field, etc. But it is something you build by combining entire chips or even subsystems, not something designed from IP blocks.
The main concern here is really execution speed, pure and simple. And secondarily, software debug and analysis support. Hardware design support is irrelevant. When developing and testing software, you need to be able to quickly rerun test cases and validate the software functionality. You want support for inspecting what the software is doing, and recording and replaying the system behavior so you can repeat bugs. Virtual platforms are immensely useful for software developers, if they can handle the volume of software that a modern system contains.
This means that different models and different tool styles are needed. Some example points where this is quite different from where I see the general ESL market going:
- Most of the time, users do not actually know or care about all the bits and pieces making up the hardware. They are only using a subset of the functionality of a subset of the blocks in the chips they have. Models can use this to quickly get in place, by basically being developed in a lazy manner.
- You need support for interconnects between chips and boards and systems, not just the internal memory bus on a chip. The chip is where the action begins, not the final goal of the exercise.
- You need absolute portability and specific semantics for models, irrespective of host type or architecture. A software test case can move from a Power Architecture to an x86 to a SPARC host, all on the same virtual platform.
- Models can just as well be created for very old hardware as for upcoming cool new hardware. Testing software on a system that contains hardware designed a decade ago or more is not unusual. Most large systems contain legacy hardware and hardware that is still good enough for the job and deployed in the field a very long time after it was designed, manufactured, and sold.
- The best model writer is really an embedded software engineer, who can abstract to what software people are interested in. Not someone with intimate hardware knowledge, as they tend to retain unnecessary details in the models.
- As long as models are fast enough to be useful, the language they are written in is irrelevant. You are going to be using the model, not developing it. Whether it is in C, C++, Java, some special-purpose modeling language, SystemC, Ada, assembler (heaven forbid), Fortran, Cobol, Lisp, Python, or Prolog does not matter one bit. As long as they can all be linked together in some form, all is nice and dandy. This is classic software thinking: use the language best suited for a particular task, and then just have all the pieces combine using some appropriate framework.
- The model has to work like the real hardware in terms of dynamism: adding boards, removing boards, connecting and disconnecting networks, plugging in USB devices, and all other run-time hardware configuration that is possible in the physical system. This is quite different from the fixed chip-plus-a-board setup that you mostly see as the example in EDA-ESL land.
- The primary concern of large-scale software is correctness. Performance tuning is usually done by tweaking algorithms and balancing work across processors or nodes. But the main issue is getting very complex large software stacks to work at all, not to get the last 10% of performance out of it. This implies that models do not need to think much about detailed timing correctness, as long as it is reasonable on average (you get a reasonable amount of work done between timer interrupts, for example). In any case, the virtual execution progress of the processors and system per virtual time unit can be tweaked to approximate the averages of the real system — or to provoke nasty cases that could occur in real hardware.
- Sometimes, for software development, you can get away with something “sufficiently similar” to the real thing. For example, the Google Android emulator is a pseudo piece of hardware that lets users cross-compile to ARM and use ARM coding just like for a real phone. But the actual Linux BSP used will not run on any real hardware; it is just a pseudo-model created to support software development in this instance.
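To make the lazy-modeling point above concrete, here is a hypothetical sketch (the `LazyDevice` class and the register offsets are invented for illustration, not taken from any real tool): a device model implements only the registers the target software actually touches, and logs accesses to everything else so the model can be filled in on demand.

```python
# Hypothetical sketch of lazy device modeling: only registers the target
# software actually touches are implemented; all other accesses are logged
# and answered with a benign default, so the model grows on demand.

class LazyDevice:
    """A memory-mapped device model with lazily implemented registers."""

    def __init__(self, name):
        self.name = name
        self.regs = {}             # offset -> (read_fn, write_fn)
        self.unimplemented = set() # offsets the software touched but we skipped

    def implement(self, offset, read_fn, write_fn):
        self.regs[offset] = (read_fn, write_fn)

    def read(self, offset):
        if offset in self.regs:
            return self.regs[offset][0]()
        # Unknown register: record and log it, return a harmless zero.
        self.unimplemented.add(offset)
        print(f"{self.name}: read of unimplemented register 0x{offset:x}")
        return 0

    def write(self, offset, value):
        if offset in self.regs:
            self.regs[offset][1](value)
        else:
            self.unimplemented.add(offset)
            print(f"{self.name}: write of unimplemented register 0x{offset:x}")

# Example: a UART model where only the status and transmit registers are
# implemented, because that is all the boot code uses.
uart = LazyDevice("uart0")
uart.implement(0x00, lambda: 0x01, lambda v: None)  # status: always ready
status = uart.read(0x00)   # implemented: returns the ready flag
uart.read(0x08)            # unimplemented: logged, returns 0
```

The `unimplemented` set doubles as a to-do list: after a software run, it tells the model writer exactly which registers are worth implementing next, which is the essence of getting a model in place quickly.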
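The point about tweaking virtual execution progress per virtual time unit can also be sketched. The `VirtualCpu` class below is invented for illustration: each processor gets a configurable instruction budget per slice of virtual time, so scaling that rate lets you approximate the averages of the real system, or deliberately skew it to provoke nasty timing cases.

```python
# Hypothetical sketch: processors that execute a configurable number of
# instructions per unit of virtual time. Tuning ips approximates the real
# system's average throughput, or provokes timing corner cases on purpose.

class VirtualCpu:
    def __init__(self, name, ips):
        self.name = name
        self.ips = ips        # simulated instructions per virtual second
        self.executed = 0     # total instructions executed so far

    def run_quantum(self, virtual_seconds):
        # Execute this CPU's instruction budget for the time slice.
        budget = int(self.ips * virtual_seconds)
        self.executed += budget
        return budget

# Two CPUs sharing one virtual timeline; changing either ips value explores
# different interleavings of the same software against the same timer ticks.
fast = VirtualCpu("cpu0", ips=1_000_000)
slow = VirtualCpu("cpu1", ips=250_000)
for _ in range(10):          # ten 10 ms timer ticks of virtual time
    fast.run_quantum(0.010)
    slow.run_quantum(0.010)
```

After the loop, `fast` has executed exactly four times as many instructions as `slow`; since timing only needs to be reasonable on average for functional software testing, that ratio is a knob rather than a fixed property of the model.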
This is not to say that what is being done with SystemC and ESL is bad; it is very valuable and a clear step forward. But there is also a huge, different problem set in software development that virtual platforms can be a solution for. Solving this problem too requires scaling up, scaling out, and radical abstraction thinking in the design and use of virtual platforms. ESL is just one piece of the broader virtual platform usage picture.