Notes from the IP 08 Panel

Now I am home again, and some days have passed since the IP 08 panel discussion about software and hardware virtual platforms. This was an EDA hardware-oriented conference, and thus the audience was quite interested in how to tie things to hardware design. In any case, it was a fun panel, and Pierre Bricaud did a good job of moderating and keeping things interesting.

The panel had a clear consensus, which nobody really challenged, that virtual platforms for software development are different in kind from virtual platforms for hardware development. Indeed, the taxonomy of “hardware virtual platforms” versus “software virtual platforms” was used frequently and proved quite appropriate.

A software virtual platform has to be fast, and its timing can be fairly approximate. Its main value, in this context, is that it can be created quickly and is useful for early software development and debug. Opinions differed, however, on how to produce them and where to go with them.
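To make the distinction concrete, here is a toy sketch (in Python, purely illustrative; the device, register layout, and names are invented for this post, not any vendor's API) of what a fast functional device model looks like: register reads and writes behave correctly from the software's point of view, while timing is only coarsely approximated.

```python
# Toy sketch of a "software virtual platform" style device model:
# functionally accurate register behavior, only approximate timing.
# The timer device and its register layout are hypothetical.

class TimerDevice:
    """A memory-mapped timer with a control and a count register."""
    CTRL, COUNT = 0x0, 0x4  # register offsets (invented layout)

    def __init__(self):
        self.regs = {self.CTRL: 0, self.COUNT: 0}

    def read(self, offset):
        # Software sees correct register contents...
        return self.regs[offset]

    def write(self, offset, value):
        # ...and correct side effects, but no cycle-level detail.
        if offset == self.CTRL and value & 1:
            self.regs[self.COUNT] = 0   # bit 0: reset the counter
        else:
            self.regs[offset] = value

    def tick(self, cycles):
        # Timing is approximate: the count advances in coarse steps
        # rather than being driven by a cycle-accurate clock.
        if self.regs[self.CTRL] & 2:    # bit 1: timer enabled
            self.regs[self.COUNT] += cycles
```

A driver under development can be debugged against a model like this long before RTL exists, which is exactly the "fast and approximate" trade-off the panel agreed on.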

  • Markus Willems from Synopsys had the position that they are produced in some appropriate way as a separate task from hardware development. SystemC was his language of choice.
  • Peter Flake proposed a methodology where you start by developing the software virtual platform and then refine it down towards more detailed models and finally hardware. He brought up Virtutech DML and SystemRDL, as examples of languages pointing in this direction.
  • Loic Le Toumelin considered the software virtual platform as something that is generated from a common design entry point, using some form of synthesis that can also generate the hardware and the hardware virtual platform.
  • I think my realistic position right now is that a software virtual platform is created as a separate item, but that we want to make this work as short and easy as possible and that in the future, the vision is similar to Peter Flake’s: start with a software virtual platform to define the hardware-software interface.

It was also interesting how opinions differed when we got to the detailed hardware-oriented virtual platforms: the ones that tend to work at the clock-cycle level and, in many cases, attempt to be cycle-accurate (CA).

  • Markus said that the only good way to build a CA model was to take the RTL and convert it, or run it in an FPGA prototype. He echoed the sentiments I wrote about in July, that ARM is getting out of cycle-accurate models and the general difficulty of creating such a model by hand.
  • Peter pointed out that you can have CA models before RTL, as a design tool. I strongly agree with this way of working; it is common in industry and definitely one way to go. However, for existing hardware, I agree that RTL-to-CA seems reasonable, even if the resulting models are painfully slow.
  • Loic wanted the CA model to come from the same source as the software VP, and was very keen on the two being in complete agreement on the semantics of the hardware.

The third major discussion was about the required accuracy and fidelity-to-hardware of a virtual platform. Even with a consensus that a software virtual platform has to be fast, with timing only approximated, it is clear that many people are uncomfortable with the idea of a platform that is not “exactly like the hardware”.

For some purposes, you do need complete fidelity to the hardware timing in a CA model. Loic definitely could not accept anything less when giving a customer a virtual platform, and some people in the audience echoed the same sentiment. Most, however, agreed that most software work can be done with simple timing, and that it does not matter all that much if there are some functionality bugs or omissions in the virtual platform. It is still far better than no platform at all!

What is clearly needed, at least for virtual platforms close to a hardware design process, is a way to check the software virtual platform and hardware virtual platform against the functionality and maybe timing of the final RTL, at least in the cases where you have the RTL, which is far from always in my world.

There were some other questions about software development tools support (of course you use the same debugger and compiler as with a physical platform) and other issues where the panel was mostly in agreement. I guess some of this also indicates that virtual platforms are not yet universally understood and that most people have not really had any experience with them.

Overall, this was a fun panel, and I hope the audience enjoyed it too and learnt something in the process.

3 thoughts on “Notes from the IP 08 Panel”

  1. Jakob, an excellent and useful summary of the panel. I wish I had been able to be there. The observations on how to derive the various kinds of virtual platforms point up again the virtue (pun intended) of deriving all the models and deliverables for a piece of IP – from fast functional VP kinds of simulation models through to the actual RTL implementation plus testbenches – through an automated configuration and generation process (as we do at Tensilica). Only by pre-planning the IP creation, configuration, generation and delivery process, and ensuring a highly automated flow for creating all deliverables, can one end up with simultaneous availability of at least the two levels of models one needs for virtual platforms, at the fast functional and cycle-accurate levels – and only in this way can one measure the fidelity between the levels. I am still surprised at the difficulty much larger companies have in adopting this strategy.
    Grant

  2. Thanks for the correction of the post.

    A comment-on-comment:

    deriving all the models and deliverables for a piece of IP – from fast functional VP kinds of simulation models through to the actual RTL implementation plus testbenches – through an automated configuration and generation process (as we do at Tensilica).

    I think that is an excellent point, but achieving this in practice for general hardware seems harder than when using a more focused tool such as the Tensilica system. Being “restricted” to that particular class of problem should make it much simpler than applying the approach to arbitrary hardware.

    Also, a second problem that the panel did not address (since this was mainly about new chip design, given the context of a chip design conference) is how to model all the stuff that is already out there… most of a new system is old hardware that already exists, and that also needs to be modeled to get a complete system to run software on. And there, good old manual programming of device models is hard to get around.
