DVCon Europe 2018 / A Few Cool Papers

DVCon Europe took place in München, Bayern, Germany, on October 24 and 25, 2018. Here are some notes from the conference, including both general observations and some details on a few papers that were really quite interesting. This is not intended as an exhaustive replay, just my personal notes on what caught my attention.

DVCon Europe format

The conference gathered some 320 attendees, continuing the increase seen over the past few years. It is some distance from the 1000+ that attend DVCon in the US, but most people seem to agree that this is actually a rather good size, since it makes it possible to find time to talk to people.

The exhibition was sold out, with companies mostly targeting the verification and validation side of EDA, alongside the big-three EDA vendors (Mentor, Cadence, Synopsys).  Last year ARM was there too, showing off some cool ARM Fast Models, but this year they were missing.

The conference takes place across two days, in a format that has worked pretty well for the past few years:

  • Day 1 featured a keynote and a series of tutorials (90 minutes each), covering topics in verification (like UVM), validation, virtual platforms, and some new hot topics like machine learning.
  • Day 2 featured a keynote, a panel, and the paper sessions where authors present their papers (30 minutes each). To fit all of this in, the day started unusually early at 07.30, with the first keynote at 08.00… which was unfortunately very thinly attended.

The tutorials and paper sessions run four in parallel, but the presentations are sorted into topical areas, so you should be able to catch most of the papers on your favorite topic.

Virtual platforms

DVCon Europe is the best conference for Virtual Platform topics today, especially when combined with the SystemC Evolution Day (which was co-located with DVCon Europe for the second year running).

Automotive & VP.  Since the conference takes place in the automotive-heavy region of Bayern (and Germany in general), it is no surprise that the use of VPs in the automotive domain was a big topic. One key discussion is how to get virtual platforms into automotive supply chains, all the way from chip manufacturers through Tier-1 suppliers to the OEMs. What is needed? How should it be supplied? What kinds of models make sense – just fast virtual platforms, or also some kind of timing-accurate models?

On the more technical side, I think it will be necessary to build big federations of multiple simulators, including all the already-existing simulations that are in place for the mechanical and electrical aspects of a vehicle and its environment. The VPs for the computers in the car are but a small part of the overall system and will have to be integrated into the full-car-and-road-and-environment simulation setup featuring many different types of simulations.

Tools & VP. In a panel about VP in automotive, Ingo Feldner from Bosch brought up a small but very important point: that current tools that connect or should connect to a VP – such as debuggers and rest-of-car simulators will have to be adapted to work with VPs. In particular, to deal with virtual time instead of real time. This is somethings I recognize well – if you connect a testing tool that expects real-time interaction to a VP that runs at a fraction of real time, things will not work well. The best solution is indeed to make sure all simulations and connected tools work on a single shared virtual time, rather than trying to guess the real-time extend of virtual time deadlines.

Digital twin

Once we build these federations of simulators that cover everything from a vehicle to the road it is driving on, the surrounding environment, models of other traffic and pedestrians, V2V communications, and more… what do we end up with? A virtual prototype? A virtual platform? A simulator? Or something else?

There is a good chance this will end up being called a Digital Twin. This is a buzzword that has become more common in recent years, coming from the non-VP modeling community. It basically means that you have a system in digital form that you can feed with real-world inputs to understand behavior, diagnose problems, and predict future performance (see How Simulation Plays a Part in the Gartner Top Ten Tech Trends for 2018 for more on digital twins).

In the keynote by Stefan Jockusch from Siemens PLM, an even more ambitious vision of a digital-twin-based continuum was presented:

In this vision, the design, production (as in building physical products), and maintenance/deployment all flow together. The designers use virtual prototypes that include aspects like how the product fits into the production flow in the factory. Deployed systems collect data and send it back home to the manufacturer to help understand performance in the field. All of this is powered by digital simulation models of the product, the factory, the environment, etc.

Paper: Intelligent Virtual Platforms (“Valgrind in a VP”)

Ola Dahl, Mikael Eriksson, Udayan Prabir Sinha, and Daniel Haverås from Ericsson presented the paper “Intelligent Virtual Platforms”, where they implement Valgrind-style memory checking inside their in-house virtual platform for their in-house DSP.

Ola Dahl from Ericsson presenting the paper

Essentially, what they do is build a special variant of their VP that adds tracking bits to all memory locations and registers, and updates these bits as instructions are executed. This is the same fundamental idea as employed in Valgrind, but applied to a full-system simulation rather than just a single user program. Quoting the paper:

For each memory address, and for each register, we use validity bits (V-bits) for indication of validity. For each memory address, we use an accessibility bit (A-bit) for indication of accessibility. When software runs, V-bits and A-bits are updated. This is done, for each instruction executed, so that validity and accessibility are propagated.
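
As a toy illustration of the mechanism – my reading of the general idea, not code from the paper – here is how an instruction-set simulator might propagate such shadow state. The real tool tracks V-bits at a much finer granularity; this sketch keeps one validity flag per register and per memory byte:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct ShadowState {
    std::vector<bool> reg_valid;   // V-bit per register
    std::vector<bool> mem_valid;   // V-bit per memory byte
    std::vector<bool> mem_access;  // A-bit per memory byte

    ShadowState(std::size_t regs, std::size_t mem)
        : reg_valid(regs, false), mem_valid(mem, false),
          mem_access(mem, false) {}

    // add rd, rs1, rs2 -- the result is only as valid as its inputs
    void exec_add(int rd, int rs1, int rs2) {
        reg_valid[rd] = reg_valid[rs1] && reg_valid[rs2];
    }

    // load rd, [addr] -- reading inaccessible memory is an error; the
    // loaded register inherits the validity of the memory byte
    void exec_load(int rd, uint32_t addr) {
        if (!mem_access[addr]) report("load from inaccessible memory");
        reg_valid[rd] = mem_valid[addr];
    }

    // store [addr], rs -- the byte becomes as valid as the stored register
    void exec_store(uint32_t addr, int rs) {
        if (!mem_access[addr]) report("store to inaccessible memory");
        mem_valid[addr] = reg_valid[rs];
    }

    static void report(const char* what) {
        std::fprintf(stderr, "error: %s\n", what);
    }
};
```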

Since this is running an operating system and multiple threads on multiple cores, the VP also has to be aware of what the OS and runtime libraries are doing. The most obvious example is dealing with memory allocations.

Functionally, for the VP, a memory allocation is really just code running that sets some values in memory somewhere. There is nothing inherent in these operations that should make memory become valid or invalid to use, or pointers to the location correct or incorrect. To correctly update the tracking bits, the VP has to be aware of the higher-level operations of the target system, so that it can identify memory management operations and update the state according to the semantics of those operations. The tool also identifies actions like sending messages between tasks and separate cores, and starting new tasks (the stack of a new task has to be marked as uninitialized memory).
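
A sketch of what that OS awareness might look like, reusing the toy ShadowState above (the hook names are hypothetical – the real VP presumably detects these operations by recognizing the runtime system's entry points):

```cpp
// Called when the VP detects the target allocator returning a new block.
void on_target_malloc_return(ShadowState& s, uint32_t base, uint32_t size) {
    for (uint32_t a = base; a < base + size; ++a) {
        s.mem_access[a] = true;   // the block may now be accessed...
        s.mem_valid[a]  = false;  // ...but its contents start out uninitialized
    }
}

// Called when the VP detects the target freeing a block.
void on_target_free(ShadowState& s, uint32_t base, uint32_t size) {
    for (uint32_t a = base; a < base + size; ++a)
        s.mem_access[a] = false;  // any later access is a use-after-free
}
```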

The performance impact of this tracking is surprisingly low – 3 to 5x compared to the baseline simulator without tracking. This might be because the baseline is not very optimized to begin with; Valgrind typically incurs a 30x to 50x slowdown. I would suspect that if this were implemented in a heavily optimized simulator like Simics, the speed penalty would be higher.

The implementation overhead, on the other hand, is basically 100% – the implementation of each instruction in the instruction-set simulator has to be updated to propagate the state bits.

Implementing a solution like this as a general VP feature is a bit more work. In this particular case, the runtime system and the virtual platform come from the same organization, and the runtime system is the only one used on the target. This greatly simplifies the practicalities of building the solution, since users are not installing arbitrary operating systems on the target or using different compilers and runtime libraries. The fact that the target is a DSP without virtual memory and swapping also makes it easier – in a general solution for something like Linux, we would have to track memory pages as they are swapped out to disk and managed by the virtual memory system. Not to mention how hard this would be for a closed-source operating system, where you would have to work things out from binaries.

Even with these caveats, this was really cool to see working, and it is clearly a very useful industrial solution.

Paper: Enabling Visual Design Verification Analytics

Markus Borg from SiCS, Andreas Brytting from KTH, and Daniel Hansson from Verifyter presented a paper about how you could visualize the health of a code base. It was applied to RTL code for ASIC design, but in practice it should work for any code base. In addition, they presented a cool machine learning application that tries to find “risky” commits.

Basically, the tools from Verifyter (in Lund, Sweden) work on information from commits and continuous integration systems. The visualization looks at all files in a source code base and collects data on which commits caused test failures in the CI system. The assumption is that all code is continuously integrated and tested as it is committed. Given that, they used the Unity game engine to build a “cityscape” where each “building” is a file, and the buildings are grouped by folder. The height of a building shows how many commits have been made, and the color shows the overall health (green = no errors, yellow = some errors, red = only errors):

Daniel Hansson presenting his paper at DVCon, showing the visualization.

Some interesting observations from the talk:

  • If entire folders are all-green, something is broken – most likely the tests are too weak. Nobody is so good that all their code always works. Thus, what you want is lots of yellow.
  • When only a few commits are made, it typically indicates a piece of code that is most likely not being used at all.
  • For the yellow, a failure rate of something like 5% is healthy. Once it gets up toward 15%, it is time to start looking closely at what is going on (see the sketch after this list).
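
These rules of thumb can be captured in a trivial classifier – a sketch of mine, where only the 5% and 15% thresholds come from the talk:

```cpp
#include <cstddef>

enum class FileHealth { SuspiciouslyGreen, Healthy, Watch, Investigate };

// commits == 0 is lumped in with all-green here for simplicity; per the
// talk, rarely-touched files more likely indicate dead code.
FileHealth classify(std::size_t commits, std::size_t failing_commits) {
    if (commits == 0 || failing_commits == 0)
        return FileHealth::SuspiciouslyGreen;      // all green: tests too weak?
    double rate = double(failing_commits) / double(commits);
    if (rate <= 0.05) return FileHealth::Healthy;  // ~5% failures is normal
    if (rate <  0.15) return FileHealth::Watch;    // drifting upward
    return FileHealth::Investigate;                // 15%+: look closely
}
```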

In the talk, Daniel also presented the Verifyter PinDown tool. This is a tool that works with the code base to help identify broken commits and to do smart testing on the code that is most likely to break. This, in turn, is based on machine learning:

The machine learning here is applied to commits, to identify those likely to cause problems. This is not done by looking at the code itself – it is all based on information around the commits. The algorithm is trained on several years of commit history before being put into production use, and is then periodically updated to keep learning.

They have tried hundreds of features extracted from commit histories and found some interesting ones to look at in order to determine the risk of a particular commit (a toy scoring sketch follows the list):

  • Experience. Junior programmers tend to make more mistakes than senior programmers – and they identify junior/senior using information from the commits themselves. To avoid invading privacy, there is no dipping into HR databases; instead, an additional layer of learning classifies committers.
  • Time of commit. Commits at 17.00 on a Friday are usually broken – it is “commit and go home for the weekend”. Commits before lunch, on the other hand, are usually good, since they represent a task that has actually been completed.
  • Activity level. How often a piece of code is touched has implications for the expected quality of commits.
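
For illustration, the kinds of features described above could feed a risk score like the toy one below. The feature names and weights are mine; only the underlying ideas come from the talk, and a real system like PinDown trains on years of commit history rather than using hand-set weights:

```cpp
#include <algorithm>

struct CommitFeatures {
    double committer_experience;  // 0 = junior .. 1 = senior (learned from
                                  // commits, not from HR data)
    int    hour_of_day;           // 0..23, local time of the commit
    bool   friday;                // end-of-week commits trend broken
    double file_churn;            // 0..1, how often the touched files change
};

double risk_score(const CommitFeatures& f) {
    double risk = 0.0;
    risk += (1.0 - f.committer_experience) * 0.4;      // junior -> riskier
    if (f.friday && f.hour_of_day >= 16) risk += 0.3;  // "commit and go home"
    if (f.hour_of_day >= 10 && f.hour_of_day < 12)
        risk -= 0.1;                                   // pre-lunch: finished work
    risk += f.file_churn * 0.2;  // direction here is my assumption
    return std::clamp(risk, 0.0, 1.0);
}
```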

Actually, there is a whole research community on this topic – what can be discovered by looking at source code repository activity. Cool stuff. I really like the idea of looking for problems not in the code but in the processes and the people writing the code.

Simulink Register Maps

In the Mathworks booth, I learned that they have solved a problem that I noticed some years ago when considering how to include Matlab/Simulink algorithms that run on FPGAs in VPs: how to deal with the programming interfaces and other interfaces of an algorithm expressed in Matlab/Simulink and used for HDL/RTL generation.

As I understand it, you can now use blocks in Simulink to describe the register map of the system being developed. This means that a complete subsystem can be generated from Simulink and used much like a “hand-made” block would be, with drivers accessing banks of control registers that the user specifies and that can be shared with the driver code. Even more interesting, this can all be code-generated into SystemC TLM code for integration in a virtual platform.
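
As a sketch of the concept, generated TLM code for such a register bank might look something like the following. This is my illustration using standard TLM-2.0 blocking transport, not actual MathWorks-generated output, and the register names are made up:

```cpp
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct FilterRegs : sc_core::sc_module {
    tlm_utils::simple_target_socket<FilterRegs> socket;

    uint32_t ctrl   = 0;  // offset 0x00: control (bit 0 = enable), read/write
    uint32_t status = 0;  // offset 0x04: status, read-only
    uint32_t coeff  = 0;  // offset 0x08: algorithm coefficient, read/write

    SC_CTOR(FilterRegs) : socket("socket") {
        socket.register_b_transport(this, &FilterRegs::b_transport);
    }

    // Handle register reads and writes arriving over the TLM socket.
    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
        uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
        bool write = (trans.get_command() == tlm::TLM_WRITE_COMMAND);
        switch (trans.get_address()) {
        case 0x00: if (write) ctrl = *data;  else *data = ctrl;   break;
        case 0x04: if (!write) *data = status;  /* RO: writes ignored */ break;
        case 0x08: if (write) coeff = *data; else *data = coeff;  break;
        default:
            trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
            return;
        }
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```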

This solves a problem that I have seen a few times – how to keep a VP up-to-date with an accelerator in an FPGA that is being continuously evolved. Using an FPGA to run the RTL code is not very efficient for things like broad programmer deployment, automatic testing, and continuous integration. With this code-generation option, it seems that some really nice workflows could be built.

Desktop FPGA

Finally, in the Synopsys booth, I found a product concept that made me smile a bit. A “desktop FPGA” – a Synopsys HAPS system designed to survive the hostile environment of a typical software development office. You know, programmers are infamous for breaking stuff by spilling coffee and Jolt cola over their kit, leaving crumbs everywhere, and having no respect for ESD protection.

Sorry for the rather poor picture – it looked better on the phone

Basically, this is a mounting box for modular FPGA prototyping boards that looks less like a construction kit and is built to fit on a desk. It has some generally useful buttons and connections on the front to make access easier for software developers. It is still a good question how broadly such systems can be deployed, since the per-unit cost is still way higher than alternatives like virtual platforms.

Temporal Decoupling

I presented a paper on temporal decoupling, as I blogged about before: https://software.intel.com/en-us/blogs/2018/10/10/intel-talks-at-dvcon-europe. I like the DVCon tradition of giving speakers a traditional Bavarian cookie:

I still have the cookie from 2016 in my office… these things are cooked to last 🙂
