Two Perspectives on Modeling

When I started learning about virtual platforms after joining Virtutech back in 2002, the guiding principle of our team was very much one of “model just enough to make the software happy – and no more”. This view was fairly uncontested at the time, and shared (implicitly or explicitly) by everybody developing virtual platforms from a software perspective. There is a second perspective, though, from the hardware design world. From that viewpoint, a model needs to be complete. Both views have their merits.

The Software Perspective

The modeling philosophy of tools like Simics (among which I count Qemu, IBM Mambo, IBM Cecsim, and innumerable efforts to simulate various old computers in order to run their software) takes the perspective of a software developer: as long as the software has something to run on that works, the completeness of the model compared to a real machine is fairly uninteresting. I described it like this in my Embedded Systems Conference 2008 talk on virtual platforms:

It is a bit tongue-in-cheek but captures the essential spirit.

When the software you are interested in running runs, the work is done. Any effort spent on too much depth or breadth is wasted, as it adds no real value to the end user. Obviously, this is not a matter of black-or-white, all or nothing. The definition of “enough” is very context-dependent.

In practice, this philosophy was adopted because the customers for the simulators (virtual platforms) were concerned with software development for standard chips that they were buying from outside parties. Such chips are rarely a perfect fit for any particular system, but tend to contain a superset of the functionality needed. In this way, the same chip can be used in many different systems, providing economies of scale for all parties involved.

This meant that the original specification for a virtual system would include some units that would not be modeled. In other units, only certain operation modes would be modeled (it is surprising just how many different ways conceptually simple things like Ethernet controllers or serial ports can be used). The final result would be a model like this:

The A, B, etc. boxes are various subunits or operation modes of the hardware units. The system is an example, and any resemblance to any real chip, living or dead, is not intentional. We will make use of all categories in the legend later.

The key point is that some parts of the chip are left for later. The model is sufficient to run the software and solve the customer’s problem.
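To make “just enough” concrete, here is a minimal sketch of what a partial device model can look like. This is plain Python, not actual Simics or DML code, and all the register names and the logging hook are made up for illustration: only the registers and the operation mode that the current software touches are implemented, and everything else logs a warning so that a gap is noticed the moment new software exercises it.

```python
# Hypothetical "just enough" serial-port model: only the polled,
# non-FIFO operation mode used by the current target OS is implemented.

class UartModel:
    TX = 0x00         # transmit register: implemented (the OS prints via it)
    STATUS = 0x04     # status register: implemented (polled before each send)
    FIFO_CTRL = 0x08  # FIFO operation mode: deliberately left unmodeled

    def __init__(self, log=print):
        self.log = log

    def read(self, offset):
        if offset == self.STATUS:
            return 0x1  # "transmitter ready" is all a polling driver needs
        self.log(f"uart: read of unimplemented register 0x{offset:02x}")
        return 0

    def write(self, offset, value):
        if offset == self.TX:
            print(chr(value & 0xff), end="")  # deliver character to console
        else:
            self.log(f"uart: write of unimplemented register 0x{offset:02x}")
```

The logging is the important part: it is what tells you, cheaply, exactly when “enough” has stopped being enough.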

When a second target system comes along using the same basic chip, it is usually necessary to fill in some of the gaps in the original model. Extensions could be driven by a different operating system that exercises the hardware units in a different way, an application that actually uses some previously unused features of the hardware, or upgrades to the software that make more aggressive use of advanced hardware operating modes.

The net result would be similar to this:

In this example, the new OS used a watchdog timer (W) that was not used by the previous OS. We are making use of more features in the Ethernet driver. The new application only uses a single CPU core to run, leaving the second core idle. After modeling the pink pieces, we are still far from a “complete” model – but we have made two customers happy and have (hopefully) hundreds of software developers hacking away delivering software.
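As a sketch of what such gap-filling can look like (again hypothetical Python; the time and scheduling callbacks stand in for whatever event API the simulator actually provides), the watchdog could be added as a new model that does just enough for the new OS: accept a timeout, accept kicks, and reset the machine when the kicks stop.

```python
# Hypothetical watchdog model added for the second customer: the new OS
# loads a timeout, kicks the watchdog periodically, and expects a machine
# reset if it ever stops kicking.

class WatchdogModel:
    LOAD = 0x00  # countdown start value
    KICK = 0x04  # writing anything here (re)arms the countdown

    def __init__(self, now, schedule_event, reset_machine):
        self.now = now                        # callback: current simulated time
        self.schedule_event = schedule_event  # callback: (delay, function)
        self.reset_machine = reset_machine
        self.timeout = 0
        self.deadline = None

    def write(self, offset, value):
        if offset == self.LOAD:
            self.timeout = value
        elif offset == self.KICK:
            self.deadline = self.now() + self.timeout
            self.schedule_event(self.timeout, self._expire)

    def _expire(self):
        # Fire only if the OS has not kicked the watchdog since this expiry
        # was armed; a later kick pushes the deadline further into the future.
        if self.deadline is not None and self.now() >= self.deadline:
            self.reset_machine()
```

Status registers, prescalers, and interrupt-before-reset modes stay unmodeled until some future user actually needs them.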

Working in this way, a model is quickly extended to meet each successive customer’s need. There are likely parts that never get modeled as they are never needed. That might indicate that they never got used in practice – or, more likely, that there are other users of the chip that never requested a virtual platform for it. Which brings us to the second view of modeling, the hardware perspective.

The Hardware Perspective

If you are a hardware vendor, you tend to have a different view of what a virtual platform is all about. You want to equip your customers and partners with a virtual chip, and you have to assume that at least someone will be using every feature and unit of your fantastic new chip. From a hardware vendor, users expect a virtual copy of the hardware – not just a useful subset.

For the first user above, the resulting system state would be this:

We have large parts which are modeled but not used. This represents a waste from the perspective of this user. But it is a waste that does not hurt the user (unused units do not slow the simulation down, since in a transaction-based simulator a device model only executes when the software actually accesses it). If we look across more uses, the waste is much less. In the second example use above, there is no need for any additional modeling:

Synthesis

So far, I have presented two different perspectives: the one that I have been living with for a long time, and its opposite. In a panel debate at a conference, this makes for an excellent topic. Both sides can claim to be right, and claim that the other side is incorrect, uninformed, dangerous, or just stupid. Great fun and a great show… but also a cause for severe misunderstanding and friction between proponents of the two modeling traditions, and maybe not the best way to move forward as an industry.

There is always room for compromise and synthesis. Let’s look at a stylized illustration of the modeling effort spent over time in these two approaches:

The red blobs represent effort, and the arrows show when different users get the platform they need. The bottom case corresponds to the hardware perspective on modeling, and the top to the software perspective.

In this example, users 1 and 2 get their models faster with the step-by-step modeling preferred by the software perspective. User 3 gets a model at the same time under either approach, and user 4 shows that once a complete model exists, there is no delay for additional modeling as new users sign up. What I want to show is that there is no absolute best-in-all-cases modeling strategy: it all depends on the circumstances.

The diagram is potentially a bit misleading… it is maybe not entirely reasonable to have the modeling effort for new hardware start at the same time as use-case-driven modeling. Since the hardware is new, there are probably no users for it yet. More likely, software-perspective use-case-driven modeling is applied when the hardware is already complete, sold, and designed into an OEM system. The hardware-perspective completeness-driven modeling is much more applicable in a presilicon setting, where the virtual platform is used to support design-ins and enable early software development.

Still, the two approaches are not completely incompatible. Even in a presilicon hardware-driven setting, it is often possible to start to deliver partial platforms early. Key customers tend to know what they need and do not need from a future hardware platform, and are often quite willing to get something started early even if not all pieces are there yet.

To port the core of an operating system, not all peripheral devices need to be in place. To create a virtual platform that can talk to a network, Ethernet is needed – but the support for Serial Rapid IO or PCIe for a rack backplane can be delivered later. In this way, an eventually complete hardware-perspective virtual platform can be delivered in increments that minimize software developer waiting.
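One way such incremental delivery could be structured (hypothetical Python again; a real platform would dispatch on bus addresses rather than unit names) is to make every unit of the eventual chip present in the platform from the first drop, but back the not-yet-needed ones with stubs that keep the memory map stable and make premature use immediately visible.

```python
# Hypothetical incremental platform: real models are registered drop by
# drop, and every other unit is a stub that logs any access. The platform
# is "complete" in shape from day one, even though not yet in function.

class StubDevice:
    def __init__(self, name):
        self.name = name

    def read(self, offset):
        print(f"[{self.name}] read of not-yet-modeled unit at 0x{offset:x}")
        return 0

    def write(self, offset, value):
        print(f"[{self.name}] write to not-yet-modeled unit at 0x{offset:x}")

MODELS = {}  # unit name -> model class, filled in as models get written

def register_model(name):
    def decorator(cls):
        MODELS[name] = cls
        return cls
    return decorator

@register_model("ethernet")
class EthernetModel:
    """First-drop model: enough for the OS network stack to come up."""

    def read(self, offset):
        return 0

    def write(self, offset, value):
        pass

def build_platform():
    # Ethernet is real in the first drop; Serial Rapid IO and PCIe arrive
    # in later drops simply by registering their models - this map and the
    # customers' setup scripts never have to change.
    units = ["uart", "ethernet", "srio", "pcie"]
    return {u: MODELS[u]() if u in MODELS else StubDevice(u) for u in units}
```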

2 thoughts on “Two Perspectives on Modeling”

  1. Grant

    A very interesting writeup and set of observations. I would just add that if a processor or processor family is done right (as we feel we do at Tensilica) – using an automated, configurable, extensible processor generation process based on an Architectural Description Language – then the generation of correct models, and a variety of them to suit both hardware and software developers – cycle accurate and instruction accurate (fast functional) – is much easier than the modelling efforts done for purely custom processor designs. Once you build such a process, it takes much less effort to modify it as the microarchitecture(s) evolve, and thus models come out when the next generation of processor(s) emerges without a noticeable time lag. Tensilica is not alone in this approach of course. And you need not limit the process to just the processor core alone – you can move to make all interfaces configurable and model production automated for more and more complex platforms. A little forethought, and a reasonable amount of planning and hard work, pay off in huge ways downstream, if you think of developing an automated process rather than just a single product.

  2. Jakob

    Thanks, Grant.

    The only problem I have with that approach is that you rarely build everything from scratch using good tools – as we all know. Also, it is not clear that the tools will generate just what you need: models that are fast enough or featured enough for a certain user.

    The tools issue also only applies when you are the silicon designer. More often than not, the modeler is the user of the silicon, which makes the software perspective more applicable.

    I wonder if any developers of simple things like serial ports and PCI controllers generate their models from tools. I rather think they just write them by hand once, and then keep them around. Note that I totally skipped the issue of reuse in the above post – quite often, you are modeling version N of an existing piece of hardware with models from version N-1 available, and as much as 90% of your job can easily already have been done for you.
