Now I am home again, and some days have passed since the IP08 panel discussion about software and hardware virtual platforms. This was an EDA hardware-oriented conference, and thus the audience was quite interested in how to tie things to hardware design. In any case, it was a fun panel, and Pierre Bricaud did a good job of moderating and keeping things interesting.
On Wednesday this week, I will take part in a panel discussion about virtual platforms and using them for software development, at the IP08 conference in Grenoble, France. We have a good crew, including Markus Willems from Synopsys, Peter Flake from ELDA, and Loic le Toumelin from TI (whom I have not met before).
Over at Taken for Granted, Grant Martin just did a very good write-up on the “accepted fact” that verification is seventy percent of a chip design effort. It is not exactly easy to prove this point, but is it really just an urban myth that has gained credibility by being repeated over and over again?
Go over there to see what he has to say.
Over the past few weeks there was an interesting exchange of blog posts, opinions, and ideas between Frank Schirrmeister of Synopsys and Ran Avinun of Cadence. It is about virtual platforms vs hardware emulation, and how to do low-power design “properly”. Quite an interesting exchange, and I think that Frank is a bit more right in his thinking about virtual platforms and how to use them. Read on for some comments on the exchange.
I keep looking out for interesting examples of parallel software, and there is a constant trickle of these. This past week I spotted a couple of new ones in the EDA field: SPICE simulation and chip timing analysis.
Cadence technical blogger Jason Andrews wrote a short piece a couple of days ago on his perception that host-based execution is becoming unnecessary thanks to fast virtual platforms. In “Is Host-Code Execution History”, he tells the story of a technique from a long time ago where a target program was executed directly on the host, and memory accesses were captured and passed to a Verilog simulator. The problem being solved was the lack of a simulator for the MIPS processor in use, and the solution was pretty fast and easy to use. Quite interesting, and well worth a read.
However, like all host-compiled execution (which I also like to call API-level simulation), it suffered from some problems, and virtual platforms today might offer the speed of host-compiled simulation without all the problems.
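To make the idea concrete, here is a minimal sketch (in C) of what host-code execution can look like. The bridge functions, macros, and register addresses are invented for illustration, not taken from Jason’s setup; the stubs just log the accesses where a real setup would forward them to the Verilog simulator.

/* Host-code execution sketch: target driver code compiled natively for
 * the host, with device-register accesses routed through a bridge that
 * would normally forward them to an HDL simulator. All names and
 * addresses here are made up for illustration. */
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the bridge into the hardware simulator (in a real setup
 * these would talk to the Verilog side over PLI/VPI or a socket). */
static uint32_t hw_sim_read(uint32_t addr)
{
    printf("read  0x%08x\n", (unsigned)addr);
    return 0x1u;  /* pretend the device is always ready */
}

static void hw_sim_write(uint32_t addr, uint32_t value)
{
    printf("write 0x%08x = 0x%08x\n", (unsigned)addr, (unsigned)value);
}

/* Device accesses in the target source go through macros, so the same
 * code can be compiled for the host or for the real target. */
#define REG_READ(addr)         hw_sim_read(addr)
#define REG_WRITE(addr, value) hw_sim_write((addr), (value))

#define UART_STATUS 0x10000000u  /* made-up register addresses */
#define UART_DATA   0x10000004u

static void uart_send(uint8_t byte)
{
    while ((REG_READ(UART_STATUS) & 0x1u) == 0)
        ;  /* wait until the simulated UART says it is ready */
    REG_WRITE(UART_DATA, byte);
}

int main(void)
{
    uart_send('A');
    return 0;
}

The appeal is obvious: the driver code runs at native host speed, and only the device accesses cross over into the slow hardware simulation.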
This is a short, maybe heretical, post on the topic of architecture exploration.
It just struck me the other day that the idea prevalent in chip design that you want to explore the design space of a certain design with great detail and precision might be the wrong way to go about things. It is in some sense similar to the idea of planned economies: you decide a priori what is important, and try to optimize the design to do just that. If you are right, it is brilliant. If you are wrong, it can be very wrong. That is scary, to say the least. It seems to assume that you have a pretty good idea of what you need to achieve, and that this is not likely to change over the lifetime of a design.
I just read an opinion-provoking piece, “Software developer attitudes: just get on with it” by Frank Schirrmeister, as well as the article “Life imitating art: Hardware development imitating software development” by Glenn Perry that he linked to. Both articles touch on the long-standing question of who does development “best” in computing. I have heard these arguments many times: software developers think that there is something mythical about hardware development that makes things work so much better with far fewer bugs, and hardware people look at the speed of development and the fanciful fireworks of coding that software engineers can do. It could be a case of the grass always looking greener on the other side… but there are some concrete things that are relevant here.
Model-driven architecture (MDA) or model-based development is an idea that to me comes from the automotive field. To me, it means that you use a tool that can model both a computer control system and the environment being controlled, creating a simulation world where computer control and environment meet and the characteristics of the controller can be ascertained quickly. The key is to not have to convert controller algorithms to concrete code, and to not have to run concrete code on concrete hardware against physical prototypes to test the controllers. Today, this seems to be applied to many fields where you are creating control systems (automotive, aviation, robotics). The tools are math-based, like Matlab and LabView, along with special programming environments based on UML and StateCharts.
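As a rough illustration of the controller-meets-environment idea, here is a tiny C sketch where a made-up proportional controller is stepped together with a made-up first-order plant model. The constants, names, and fixed time step are purely illustrative and not tied to any particular tool.

/* Sketch of co-simulating a control law with a model of the environment
 * it controls. The plant and controller here are trivial and invented. */
#include <stdio.h>

#define DT       0.01   /* simulation time step, seconds */
#define SETPOINT 1.0    /* desired plant output */

/* Controller model: simple proportional control, no target code involved. */
static double controller(double measured)
{
    const double kp = 2.0;
    return kp * (SETPOINT - measured);
}

/* Environment (plant) model: first-order lag, dy/dt = (u - y) / tau. */
static double plant_step(double y, double u)
{
    const double tau = 0.5;
    return y + DT * (u - y) / tau;
}

int main(void)
{
    double y = 0.0;  /* plant output */
    for (int step = 0; step < 500; step++) {
        double u = controller(y);  /* control law reads the plant... */
        y = plant_step(y, u);      /* ...and the plant reacts */
        if (step % 100 == 0)
            printf("t=%.2f y=%.3f\n", step * DT, y);
    }
    return 0;
}

The point is simply that the control law and the environment live in the same simulation loop, so the controller’s behavior can be checked long before any concrete code runs on concrete hardware.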
What is interesting is that most of these tools are graphical in nature. And they do seem to work quite well, which is quite surprising given the otherwise poor record of graphical programming compared to text-based programming. There were a pile of graphical programming environments in the 1980s, none of which amounted to much. What survived and prospered were the good old text-based languages like C, C++, Java, Visual Basic, etc. In practice, it seems very hard to beat sequential text when it is time to actually get code working. More efficient programming seems to boil down to writing less text and having text that is easier to write (for example, dynamic typing, rich libraries, garbage collection, and other modern language features that remove intellectual burdens from the programmer).
But graphics do seem to work for domain-specific cases (like control engineering or signal processing), especially for data-flow-style problems. And for abstract architecture work. So there has to be something to it… but what?
The book “Taxonomies for the Development and Verification of Digital Systems”, edited by Brian Bailey, Grant Martin, and Thomas Andersson, was published in 2005 by Springer Verlag. It is a legacy of the defunct VSIA, and presents an attempt to bring order to nomenclature and taxonomies in the chip design field (its scope is defined to be broader than that, but in essence the book is mostly about SoC design).
In a funny coincidence, I published an article at SCDSource.com about the need for cycle-accurate models for virtual platforms on the same day that ARM announced that they were selling their cycle-accurate simulators and associated tool chain to Carbon Design Systems. That makes one wonder where cycle accuracy is going, or whether it is a valid idea at all… is ARM right, or am I right, or are we both right since we are talking about different things?
Let’s look at this in more detail.
YouTube – Freescale QorIQ P4080 Hybrid Simulation is a video of a demo of the QorIQ P4080 hybrid simulation. Cool of Freescale to publish it like this; I think it is a very smart move!
Updated: Here is the video inline; let’s see if this works.