Adding to Schirrmeister’s Virtual Platform Myth Busting

Frank Schirrmeister of Synopsys recently published a blog post called “Busting Virtual Platform Myths – Part 1: ‘Virtual Platforms are for application software only’”. In it, he refutes a claim by EVE that virtual platforms are useful only for application-level software development, arguing that they are mostly used for driver and OS development and citing some Synopsys Virtio Innovator examples of such uses. In his view, most application software is instead developed using host-compiled techniques. I want to add to this rebuttal: application software is surely a very important, and large, use case for virtual platforms.

The argument began with an EDA DesignLine article titled “Unified Verification for Hardware and Embedded Software Developers” by Lauro Rizzatti of EVE USA. In it, he makes the following claim:

While some may have achieved the scope of jump-starting software development, they only address application programs that do not require an accurate representation of the underlying hardware design. They fall short when testing the interaction of the embedded software with hardware, such as firmware, device drivers, operating systems and diagnostics. For this testing, embedded software developers need an accurate model of the hardware to validate their code, while hardware designers need fairly complete software to fully validate their application specific integrated circuit (ASIC) or SoC.

The interesting part here is really the claim that jump-starting only applies to application software, and that OSes and drivers require more detail than a fast virtual platform can supply. I do not quite agree with this. But let’s first see what Frank Schirrmeister said:

the majority of the software development on virtual platforms is spent on firmware, device drivers, operating system porting and diagnostics. And that is not – as one could assume – on cycle accurate models, but on functionally accurate models with only essential timing, the type of models called loosely timed (LT) in SystemC.

I totally agree with this. As is evident from many different public use cases, OS, BSP, and driver development is a major use of virtual platforms. For example, last summer, Freescale announced the QorIQ P4080 with pretty good software support in terms of Linux and VxWorks operating systems, as well as some middleware stacks, all developed on Simics using an even more timing-abstracted model of the hardware.
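
To make “loosely timed” concrete, here is a minimal sketch of what such a model can look like in SystemC with TLM-2.0. The lt_timer device, its single register, and its 10 ns latency are all invented for illustration; this is not taken from Simics, Innovator, or any P4080 model.

    // Minimal sketch of a loosely timed (LT) TLM-2.0 model: functionally
    // accurate register access with only an annotated delay, no cycles.
    // The lt_timer device and its 10 ns latency are hypothetical.
    #include <cstdint>
    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    struct lt_timer : sc_core::sc_module {
        tlm_utils::simple_target_socket<lt_timer> socket;
        uint32_t counter = 0;  // the device's only register

        SC_CTOR(lt_timer) : socket("socket") {
            socket.register_b_transport(this, &lt_timer::b_transport);
        }

        // Blocking transport: model *what* a read or write does, and add
        // a rough access latency to the annotated delay. That is all the
        // timing an LT model carries: no clocks, no pipeline, no cycles.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            if (trans.get_command() == tlm::TLM_READ_COMMAND)
                std::memcpy(trans.get_data_ptr(), &counter, sizeof counter);
            else if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
                std::memcpy(&counter, trans.get_data_ptr(), sizeof counter);
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // rough, not exact
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

Software running on top of such a model sees functionally correct device behavior without any cycle-level detail, which is exactly what makes LT models fast enough for OS and driver work.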

However, Frank then makes the following claim that I have a harder time with:

In contrast, application software is developed more often than not using completely hardware independent techniques, including cross compilation from the host development machine using development kits like Apple’s iPhone development kit.

This is to some extent true, but as time goes on, I think this type of development environment is going to be less useful. Traditionally, OS vendors have had tools like VxSim and OSE SoftKernel in place to help customers “run code on their desktop” while using the API of the operating system of choice. However, such solutions have many limitations in how close they can get to the target:

  • If you have any kind of third-party binary-only application, or want to use an existing binary component without lots of complex recompilation, you need a virtual platform running the underlying OS. You cannot squeeze that into a host-compiled API simulator.
  • You are not using the same compiler, code-generation settings, and build settings as you are for your actual target, and this can (read: will) introduce nasty compiler and code-generation issues; see the sketch after this list for an example.
  • It forces you to maintain an additional build variant for your code, which can be pretty expensive for a complex build.
  • You are not using the real OS scheduler, device drivers, and interrupt structure found on the target system. This can have a huge impact, especially for multithreaded multiprocessor systems.
  • The API simulator needs to be kept in sync with the real software stack, and customized in the same way for any particular target. This is hard to get right (even though it has been done).
  • The API simulator does not handle heterogeneous systems very well, such as chips or boards or racks mixing two or more different OS kernels in the same system (like a DSP and a main processor OS).
  • API simulation completely falls apart when the OS is no longer the lowest level of the software stack, but you also have a hypervisor layer underneath the OSes on your target system. An API simulator simply cannot represent this kind of case.
  • Using a virtual platform and the real target binaries also fits with the very important “fly what you test, test what you fly” principle of embedded software development.
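
To make the compiler-and-target bullet concrete, here is a small hypothetical C++ sketch (function names and register layout invented) of decoding a 16-bit big-endian device register. Host-compiled on a little-endian x86 desktop, the naive version silently returns the wrong value; built with the real target toolchain and run on the real or virtual target, it behaves as intended.

    // Hypothetical sketch of the host-vs-target trap: decoding a
    // 16-bit big-endian device register.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Naive decode: copies the raw bytes in host byte order. Correct
    // only if the host has the same endianness as the device. The
    // device is big-endian here, so a little-endian x86 host silently
    // gets 0x0201 instead of 0x0102.
    uint16_t error_count_naive(const uint8_t* regs) {
        uint16_t v;
        std::memcpy(&v, regs, sizeof v);
        return v;
    }

    // Portable decode: assembles the value explicitly, byte by byte,
    // so it is correct on any host and any target.
    uint16_t error_count_portable(const uint8_t* regs) {
        return static_cast<uint16_t>((regs[0] << 8) | regs[1]);
    }

    int main() {
        const uint8_t status_word[2] = {0x01, 0x02};  // big-endian 0x0102
        std::printf("naive: 0x%04x  portable: 0x%04x\n",
                    static_cast<unsigned>(error_count_naive(status_word)),
                    static_cast<unsigned>(error_count_portable(status_word)));
        return 0;
    }

An API-level simulator happily runs the naive version on the host; only testing the real target binaries, on real or virtual hardware, exposes the difference.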

For various subsets of these reasons, I see many users picking up virtual platforms as a way to streamline application development. For example, NASA recently selected a virtual platform based on Simics to develop the software for the new Orion spacecraft. That is going to be a complete software stack, not just the OS and drivers, which tend to be fairly off-the-shelf components for these kinds of systems. Most of the effort is on the application level, and the platform used is a virtual platform.

However, note that there are cases where a fast virtual platform like the ones we are discussing here is not sufficient to validate all aspects of the code. I think the main reason we see different viewpoints on this is that we are looking at very different types of software-hardware integration.

In a blog post I wrote last year on the dead-ness of cycle-accurate simulation, Grant Martin of Tensilica pointed out that some software desperately needs cycle-accuracy as it is intimately dependent on the timing of the hardware. This is certainly true for some aspects of drivers, and more so for the really early boot code.

Here, FPGA-based hardware-accelerated simulation of the actual design in VHDL or Verilog makes eminent sense as a way to get the details perfectly right. But that is only one part of a much greater system development puzzle, and it really only applies to very small subsystems, as it is kind of hard to fit much more than a single chip inside a hardware acceleration unit. Just as Frank Schirrmeister says, hardware-accelerated simulation is very important. The nice article on the IBM z10 development that I blogged about earlier says exactly that: for some parts of the validation, there is no way around using the actual hardware RTL design.

And in the end, you have to test the timing and analogue aspects of a design on physical hardware anyway. There should not be too many surprises at this stage if you have used all of the cool current tools right. But there surely will be some; even a VHDL simulation is a simulation, and not reality, after all.

2 thoughts on “Adding to Schirrmeister’s Virtual Platform Myth Busting”

  1. Jakob: I think we are in violent agreement. With respect to application software, the point I was trying to make was that often – like in Apple’s iPhone devkit – programming against a hardware abstraction level, i.e. their API, can work just fine. This largely depends on the quality of the API and how well maintained it is. The hardware abstraction layers provided in Apple’s iPhone, OMAP, DaVinci, nExperia and others make it possible to swap out the underlying hardware without necessarily changing the higher-level application software at all. So to refine the original statement, I still would claim that the higher up users get in application software development, the less dependent on hardware they want to be. However, I also agree with your assertion that virtual platforms still make sense at that level, especially to find issues like “will it pick up the call while I am IMing, watching a video and playing a game at the same time”. Best, Frank

  2. Frank Schirrmeister: Jakob: I think we are in violent agreement. […] So to refine the original statement, I still would claim that the higher up users get in application software development, the less dependent on hardware they want to be.

    My main problem with this is that you still have to provide a second implementation of that “hardware independent” layer; there is no real difference from any other API-level simulator. You still need to check the code on the real target, in its real environment. So all of the above still applies; there is no real difference between this and an OS-level API.
    The value of such a layer is really that you can port code easily, but I still think you need to test with the complete real stack for each target. Ease of porting does not equal portable correctness, unfortunately, for anything non-trivial, in my experience.

    /jakob
