DVCon Europe 2021 – Testbenches, AI, and Open Source

Just like in 2020, the Design and Verification Conference (DVCon) Europe 2021 was a virtual conference. It took place from October 26 to 27, with the SystemC Evolution Day on October 28 (as usual). As has been the case in recent years, the verification side of the conference was significantly larger than the design side, which is common across the other DVCon conferences around the world. In this blog, I will go through my main observations from DVCon Europe and share some notes from some of the presentations.

Themes

This is my entirely subjective sense of what was hot this year. It was easier to pick out themes last year; this year's program was more sprawling and varied.

Testbenches – a lot of talk about testbenches! How to reuse them between software and hardware, how to build them more efficiently, how to reuse test cases across different scopes, etc. The SystemVerilog-based Universal Verification Methodology (UVM) way of writing and executing testbenches came up in many talks, along with the Accellera Portable Stimulus Standard (PSS). Always targeting RTL.

AI and ML – Artificial Intelligence and Machine Learning are hot everywhere. Several talks, a panel, and a tutorial discussed how to use AI techniques to improve the verification process itself. One aspect is to help developers sort through large sets of data when debugging test failures. Another is to use AI to improve how tests are run, for example by predicting which tests are more likely to find issues and thus should be run first (this was also mentioned at DVCon Europe 2020). AI can also be used to generate test cases in the first place, I guess as a way to go beyond classic directed random verification.
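To make the test-ordering idea concrete, here is a minimal sketch of one very simple (non-ML) version of it: rank regression tests by a recency-weighted historical failure rate and run the most failure-prone tests first. This is my own illustration, not code from any of the talks, and the test names and history data are made up.

```python
# Prioritize tests by how often they have failed recently (hypothetical data).
from collections import defaultdict

# history: list of (test_name, passed) tuples, oldest first
history = [
    ("uart_smoke", True), ("dma_burst", False), ("dma_burst", True),
    ("pcie_link", False), ("uart_smoke", True), ("pcie_link", False),
]

def failure_scores(history, decay=0.9):
    """Exponentially decayed failure rate per test: recent failures count more."""
    scores = defaultdict(float)
    weights = defaultdict(float)
    weight = 1.0
    for name, passed in reversed(history):      # newest runs first
        scores[name] += weight * (0.0 if passed else 1.0)
        weights[name] += weight
        weight *= decay
    return {name: scores[name] / weights[name] for name in scores}

def prioritized(tests, history):
    """Order tests so the ones most likely to fail (by history) run first."""
    scores = failure_scores(history)
    return sorted(tests, key=lambda t: scores.get(t, 0.5), reverse=True)

print(prioritized(["uart_smoke", "dma_burst", "pcie_link", "new_block"], history))
```

A real ML-based flow would use richer features (changed files, coverage overlap, test runtime) and a trained model instead of a simple failure rate, but the scheduling principle is the same.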

Shift-left. A favorite term for those of us working on virtual platforms. It can mean slightly different things to different people, but overall I would say it meant “doing verification and validation sooner”.

Software. How can software be part of the verification process (for hardware)? Software-hardware co-verification is necessary for modern systems where the software workloads really matter and can be very variable. For example, validating an AI inference accelerator really means running interesting AI workloads on it. Which means that you need to have quite a bit of software in place to do meaningful validation, and also to understand what it does with the hardware.

Open source. There was one keynote speech about open source, and several papers covered open-source solutions in the hardware verification space. There is a small but growing community of open-source EDA tools, in contrast to the traditionally rather closed business model of the field.

Keynote: Petra Färm: Speed Layers

Dr Petra Färm from Tolpagorni Product Management in Stockholm, Sweden, presented a different kind of keynote – more inspirational and thought-provoking about how we do our jobs than about technology per se. It was about product management and how you can structure your product and technology in order to both move fast and innovate – without destabilizing the core.

The talk featured no slides and just a few hand-drawn images for illustration. Mostly it was just Petra talking. There was a live Menti.com poll of the audience in the middle of the talk! Very impressive technique.

The idea behind the Speed Layers is to split the work on a product into three layers or streams that are designed to be independent and work at different rates.

1: Need for speed – exploration

  • Prototypes, ideas, testing, …
  • Customization, support tickets, sales calls, …
  • Short time frames, but also nothing that is considered a supported feature

2: Keep the edge  – edge

  • Core product management – the rhythm of the product – defined by market. 
  • Stay competitive with other products, add new supported features, …
  • At the edge, you should expect competitors to pick up new ideas and copy them.

3: Mind the core  – core

  • Foundation. Platform. Core.
  • Could be a technology, or an architecture, or a process, or something else.
  • It is really the secret sauce in your product, the part that is hard for competitors to match.
  • This takes longer to change, but is also going to last longer. 
  • The core has to be robust and not corrupted by the faster layers.

A key issue is how you manage transitions between the layers. To keep the edge and core clean, it is necessary to scrap and reimplement anything that comes from the speed layer. You should not implement speed features with actual productization in mind (that slows you down), and you should not allow heavy-on-technical-debt speed-layer work to leak into the edge or the core.

Petra provided some examples of the layers in action.

The financial system is a good example. The banks still have their classic, stable back-end accounting systems in place. But they have also managed to build much more agile and fast-changing front-ends, enabling online banking and other services. The back-end and front-end have been effectively decoupled, allowing fast innovation and letting the online offerings (the edge) develop much faster than the core.

Another example is the Dolby company. Their core beat is the eight years it takes to build a new generation of audio standards. But they still work at the edge: getting their technology into each new generation of TVs (about twice per year), launching new software versions and features, and shipping updates to their software. They also allow themselves to experiment in the speed layer, doing demos, minor events, and trials.

Keynote: Rashid Attar: 5G and AI for the Edge

The keynote by Rashid Attar from Qualcomm in the US was very heavy on buzzwords – but he also filled them with actual content. Rashid joined us from late-night San Diego, thank you!

Edge – we will see more AI/ML workloads being run at the edge. Privacy, efficiency, and latency all indicate that doing everything in a central cloud datacenter is sub-optimal. Instead, doing computation close to where the information is collected or actions need to be taken makes a lot of sense, as long as there is enough compute power at the edge.

Federated learning – part of the edge trend is that edge devices can work together to learn, without necessarily involving a central data center. This is once again a bit more privacy-preserving than shipping all sensor data and user information to a central server, while still giving many of the same benefits to end users.

5G is not for today’s apps – it is for industrial and other applications, really. 4G gives us what we need for internet-connected phones. 5G is more about the local loop.

5G networks, AI/ML, and edge computing together change how the world works. Improvements in AI can drive better 5G by making better decisions. AI benefits from 5G by allowing more data gathering. These two technology trends are mutually supporting.

“Boundless XR” – a Qualcomm marketing term. I take it to mean their work on split rendering, where an augmented reality (AR) application can render some parts of a scene in the headset or local device, but also offload some to a nearby gaming computer or even an edge-based compute server. 5G is a key enabler for short-range, high-bandwidth, low-latency communication within a room or house.

In the end, it is about building new hardware to power these new technologies. Building the right hardware requires understanding the software and algorithms. Qualcomm is not just building a modem or a mobile SoC, but has to look at the whole system and the applications to build the right hardware. Honestly, it was always like that, but the applications were easier to understand 20 years ago when mobiles were getting started. Even 10 years ago it was pretty easy to understand what was going on. Today, not so much.

Keynote: Satish Sundaredan: Take a Leap – Virtualization in Future Development

Satish Sundaredan from Elektrobit India provided the automotive-area keynote (since DVCon Europe is held in Germany, automotive has always figured as an important application area).

His talk was about virtualization in product development – a “virtual version of something” – which in his view can really be almost anything:

  • Hardware virtualization
  • Software compartmentalization – implemented using hardware virtualization
  • A virtual ECU to enable software – classic virtual platform use case
  • Digital twin – entire system virtualized, test security features very early, for example
  • Virtual vehicle/experience in a showroom – taking the digital twin into sales
  • Virtual work, where developers need access to something to run their code on – which can be implemented using a variety of the virtual techniques listed above.
  • Access to virtual models should be easy regardless of where you are – instead of going into an office, you should be able to access a test rig from home (Elektrobit had to have people going into the office during Covid, since the test rigs just could not be relocated to the developers’ homes and did not work over remote access).

He also shared a wonderful image of the kinds of hardware you want to have virtualized.

When discussing virtual platforms, he made some good points about requirements:

  • Want to do shift-left, as expected.
  • Need to have a setup that covers what is needed to run interesting test cases, like camera inputs to an AI system – not just basic processing using made-up inputs or unit tests.
  • A good model is to record sensor inputs from driving around in the real world. Such test cases should ideally be shared among vendors via some kind of cloud service.
  • VPs have to be built in collaboration with hardware vendors; it is not something users like his company can create on their own. True.
  • Collaboration across the ecosystem and supply chains is needed to build complete models.
  • Different levels of abstraction are needed to effectively cover different use cases.

Keynote: Andreas Riexinger: Software-Defined Future Mobility… Can Open Technologies Help?

Andreas Riexinger is from Bosch. He had some bad technical issues getting onto the Zoom call, and it ended with Martin Barnasconi (from the organizing committee) tunneling a Teams call with Andreas into Zoom. Which actually worked – an amazing recovery by Martin!

The talk was about how open-source can be used in automotive in particular, and specifically for autonomous driving and other future mobility solutions driven by software. Which software makes sense to keep in-house, and which software makes sense to develop in the open? 

In general, basic tools and common infrastructure are good fits for open source. There is no value, or even negative value, in developing such things yourself, at least if they just match what everybody else is doing.

He used the OpenADx project (https://openadx.eclipse.org/) as an example. This is a development-tool integration project under the Eclipse Foundation that aims to make it easier to build a complete toolchain from different pieces. Some parts are open source, some are commercial, and some will be in-house. It covers not just coding and compilation, but also how to build co-simulation setups that bring together many different simulators.

Andreas also talked a bit about co-simulation, and he was very clearly making the case that the goal is to tie together pretty much any kind of executable model. Coming from automotive, it looked like the most common solution he sees is FMI.
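As an illustration of what FMI-based co-simulation can look like when driven from a script, here is a minimal sketch using the open-source FMPy Python library. This is my own addition, not something shown in the keynote; the FMU file name and the output variable names are placeholders.

```python
# A minimal sketch of running an FMU from Python with FMPy (assumed setup).
from fmpy import dump, simulate_fmu

fmu = "brake_controller.fmu"   # hypothetical FMU exported by one of the tools

dump(fmu)                      # print the model description: variables, interfaces, etc.

# Run the FMU for 10 simulated seconds and record two (hypothetical) outputs.
result = simulate_fmu(fmu, stop_time=10.0, output=["wheel_speed", "brake_pressure"])

for row in result[:5]:         # structured array: time plus the requested outputs
    print(row)
```

In a larger co-simulation setup, several such FMUs (and other executable models) would be stepped together by a master algorithm rather than run one at a time.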

Tutorial: Python and SystemC: A dream team for building and analyzing Virtual Platforms

Presented as a non-sponsored tutorial by Eyck Jentzsch from MINRES, Thomas Haber from toem, and Rocco Jonack from MINRES – all consultants in the SystemC space.

The tutorial gave an overview of the development and deployment of VPs, as well as results analysis, using Python as a scripting language with SystemC. The main reason for using Python here is its flexibility in reconfiguration and its huge collection of libraries. Python can be used for structural construction, simulation control, and dynamic model parametrization.

The presentation included a recorded video showing a practical example of how to construct a top module in Python that connects some SystemC models, and how to control parameters like logging from a Python script. The second half covered how simulation results can be analyzed with the rich set of Python libraries.

It is also possible to use Python code as part of a SystemC model. They showed how you can use Python libraries like Dash to build a graphical dashboard for simulation results (the value proposition being to reuse existing Python facilities with SystemC simulations).
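As a rough idea of what such a dashboard can look like, here is a minimal Plotly Dash sketch that plots results from a simulation run. This is my own sketch rather than the tutorial’s code, and it assumes the results have been dumped to a hypothetical sim_results.csv file with time and value columns.

```python
# A minimal Dash dashboard for plotting simulation results (assumed CSV export).
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

df = pd.read_csv("sim_results.csv")   # hypothetical results dump from the VP run

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Virtual platform run results"),
    dcc.Graph(figure=px.line(df, x="time", y="value", title="Signal over time")),
])

if __name__ == "__main__":
    app.run(debug=True)   # serves the dashboard on a local web port
```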

The code was originally developed by MINRES, and it has been donated to and adopted by the Accellera SystemC common practices working group. Its current version is part of the public Accellera repository at https://github.com/accellera-official/PySysC (the MINRES git repo holds an older version, so watch out that you get to the right repo if you google for PySysC).

Overall, this use of Python in a virtual platform is quite similar to how it has been done in Simics from the beginning of the product. Python is simply a good choice for this kind of embedded scripting.

Paper: UVM for Embedded – Open-Source UVM

Paper presented by a team from CoVerify along with Ericsson. CoVerify builds open-source verification solutions. This used to be called DLANG; Embedded UVM is the new name. Everything is on GitHub and can be downloaded and executed from https://github.com/euvm/.

 What is Embedded UVM?

  • An implementation of the UVM concepts based on the “D” language  – instead of SystemVerilog
  • Creates binaries that can run on a standard OS, or as shared libraries.  Can run in multicore mode.  ABI-compatible with C/C++.
  • Implements the entire UVM “reg” package.
  • Cosimulation with Icarus Verilog for entirely open-source verification
  • Provides a constraint solver to allow evaluation of directed random tests from UVM. The SystemC-UVM solution does not include that crucial infrastructure. This seems to me to be one of the most complex parts of UVM to implement, and a key value for the EDA solutions.

How are testbenches translated from SystemVerilog? Manually. The D-language Embedded UVM code is claimed to be a “one-to-one” conversion from SystemVerilog, but you have to do it by hand – there is no automatic conversion.

Example use cases where Embedded UVM makes sense:

  • Verification of systems like an FPGA with software and hardware interacting with each other. Provide software developers with the ability to reuse UVM test cases from the hardware unit tests, as part of their software/hardware integration tests.
  • Open-source hardware development like RISC-V. Add tests to pull requests in open-source projects on GitHub. Everything needs to be open source, since a developer and the CI infrastructure need to run the same code. This makes EDA-vendor proprietary tools a non-starter. Statement: “SystemVerilog is entirely closed source.”

They also talked a bit about what you can do with their Reg package.

One interesting technology combo was using Python macros in Excel to talk to the testbench! Excel specifies the registers. Macros are associated with cells in Excel; click on a cell, and it sends a packet to the Embedded UVM environment, resulting in a hardware read or write! Essentially, Excel is used as a test execution environment. Wow.
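I do not have the authors’ actual macro code, but the basic mechanism could look something like the following Python sketch: a cell macro sends a small register-access packet to the testbench over a TCP socket and waits for the response. The host, port, and packet layout are all assumptions for illustration.

```python
# Hypothetical register-access helper that an Excel cell macro could call.
import socket
import struct

TB_HOST, TB_PORT = "localhost", 5000        # assumed testbench server endpoint

def register_access(address: int, data: int = 0, write: bool = False) -> int:
    """Send one read or write request to the testbench and return the read data."""
    # Assumed packet layout: 1 byte read/write flag, 4-byte address, 8-byte data.
    packet = struct.pack(">BIq", 1 if write else 0, address, data)
    with socket.create_connection((TB_HOST, TB_PORT)) as sock:
        sock.sendall(packet)
        response = sock.recv(8)             # testbench replies with 8 bytes of data
    return struct.unpack(">q", response)[0]

# Called from a cell macro: read the (hypothetical) status register at offset 0x4.
status = register_access(address=0x0004)
print(hex(status))
```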

Brave Presenter: Graphcore

The prize for the bravest presenter of the conference goes to Svet Hristozkov from Graphcore in the UK. While presenting his paper “No Country For Old Men – A Modern Take on Metrics Driven Verification”, he ran a live demo with code in multiple editor panes during his talk. Very impressive, and it all worked.

This presentation is an example of something that would be a lot harder to do in a physical conference, as it is much easier to have your development setup available with short latencies when presenting from your workplace.

The talk and paper had some interesting points to make. In particular, that the Graphcore team found it quite worthwhile to develop and maintain their own infrastructure for verification. While they do use EDA tools to run UVM-based testbenches on RTL, a large part of the test preparation and debugging, as well as results processing, is performed using their own tools. This provides them with more flexibility, better performance, and lower license costs. In particular, they gain performance by extracting raw monitor data from an EDA RTL simulator and then doing their own post-processing to turn it into actual coverage information. In a complete EDA tool, the coverage processing is done in the same process as the RTL simulation. In their approach, they could parallelize it.
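The paper is not about the tooling code itself, but the decoupled-coverage idea can be illustrated with a small Python sketch: the RTL simulator only dumps raw monitor transactions to files, and coverage is computed afterwards, in parallel, outside the simulator process. This is my own illustration, not Graphcore’s tooling; the file names and record format are hypothetical.

```python
# Post-process raw monitor dumps into coverage counts, in parallel (assumed format).
from multiprocessing import Pool
from collections import Counter
from pathlib import Path

def coverage_from_dump(dump_file: Path) -> Counter:
    """Count how often each (interface, event) pair was observed in one dump file."""
    bins = Counter()
    with dump_file.open() as f:
        for line in f:
            interface, event = line.strip().split(",")[:2]  # hypothetical CSV format
            bins[(interface, event)] += 1
    return bins

if __name__ == "__main__":
    dumps = sorted(Path("monitor_dumps").glob("*.csv"))  # one file per simulation run
    with Pool() as pool:
        partial = pool.map(coverage_from_dump, dumps)
    total = sum(partial, Counter())          # merge per-run results into one view
    print(f"{len(total)} coverage bins hit across {len(dumps)} runs")
```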

Interesting take on build vs buy.

Conference Systems

Like in 2020, the conference was managed using the Conflux virtual conference system. Talks were presented over Zoom. The conference used a Mozilla Hubs-based virtual environment for networking between sessions, as well as for the poster sessions.

Most of the presentations at DVCon Europe were live over Zoom, but some showed recorded videos only. In my opinion, the energy and engagement were clearly higher for the live sessions (which is something I have written about before). Attendees kind of sigh when something is “just recorded”.

Question-and-answer sessions took place in the Zoom chat after each presentation. The idea was that the discussion would continue in the virtual environment, but that just did not happen.

Example of the virtual environment during a keynote presentation session.

The virtual environment was the only system for the poster session, and at that point some 50 people joined in. The system itself worked very well, but it just did not catch on with the attendees. It was possible to view keynotes and panels in the virtual world as well, but only very few people attended that way.

A lonely poster in the virtual poster session. You can click on the poster to download it. For most posters, the poster presenter was also present in the virtual environment to answer questions.

Next Year – Back in Style (?)

For next year, the plan is clearly to make DVCon Europe an in-person event once again, while bringing along the best bits we have learned from two years of being virtual. Apparently, everyone else has the same idea, so booking conference venues might be a bit of an interesting exercise.
