
DVCon Europe 2025 took place on October 14 and 15, with SystemC Evolution Day on the 16th. I was there, running around talking to people, taking photos, and listening to presentations, just like I usually do. Great conference, great fun, great learning! The conference dinner was back by popular demand, and there were a lot of virtual platforms.

And as usual, there was just too much content crammed into too little time, too many people to talk to, and too many companies exhibiting. There is no way for me to do justice to it all. If you are missing something from this report, it is because I simply missed it.
The first day was tutorials, and the second day papers and the panel, following the same structure as in the past few years. There were 20 tutorials and 50 papers accepted for presentation, and about 350 attendees. The conference dinner was back, held at the end of the first day. And the second day just started all too early.

Themes
Virtual platforms were unusually well represented. There were enough papers on the topic to basically create a whole track on the papers day. Plus quite a few tutorials addressing various aspects of virtual platforms, from architecture studies to classic virtual platforms to RTL-virtual hybrids.
Open source. There was a keynote about open source in general from Amanda Brock, and several sessions dealing with open-source design tools. Open source seems particularly common for virtual platforms, but it definitely applies to “classic” EDA as well. Several universities are basing their hardware design classes entirely on open-source tools. A special case of open source is the QEMU simulator. It is used as a source of instruction-set simulators in many virtual platform setups where you don’t have your own instruction-set simulation tooling available.
There was quite a bit of talk about sharing simulation models. Maybe using open source, but also just being able to reuse binary commercial models across different simulation platforms. A related question is the old favorite of standardizing interfaces for virtual platforms, beyond SystemC TLM. Here we also had several different companies presenting different ideas on what could be done.
Model sharing ties into federated simulation. I had a paper on the topic, for example. There was a surprising number of papers using the FMI standard to hook things together, including running “level 4”-style simulators as part of “level 3” simulators and similar crossings of abstraction-level boundaries. FMI is just one technology, and there were papers dealing with quite a few different approaches to communication, time synchronization, and orchestration of combinations of simulators. SystemC Evolution Day had a session on the in-development Accellera Federated Simulation Standard (FSS).
Automotive was well represented. We had the keynote by Ralph Schleifer from CARIAD. Many attendees came from automotive chip providers, Tier-1 suppliers, and even the occasional OEM! I am also pretty sure I heard the ProStep iViP simulation levels mentioned many more times this DVCon than in the past. That could be due to my own perceptive biases having changed, as I now work closely with automotive.
Finally, you cannot escape AI. While not as pervasive as at this year’s DAC or last year’s DVCon Europe, it is still a hot topic. It was particularly evident when the AI-themed tutorials were overcrowded and standing-room only. The SystemC modeling challenge had several AI- and ML-based approaches in addition to classic approaches.
Keynote: Open Source, with Amanda Brock, OpenUK
Amanda Brock is a lawyer. She started working for Canonical in 2008 and has been in the open-source ecosystem ever since. Today, she is the CEO of OpenUK, https://openuk.uk/

Amanda went through a number of topics around how open source works today.
Open source is not just about code. OpenUK is working with three opens:
- Software
- Hardware
- Data
- And now we probably have AI – i.e., open-source AI models
Generations of open-source users. One very interesting observation was that there seem to be three generations of open-source adopters.
- Gen 1. The old true believers, who see OSS as a socio-political movement. Example software: Linux.
- Gen 2. Got into OSS for the technology. Example: Kubernetes.
- Gen 3. Being brought in by AI and quantum.
Commercial use of open source?
Today, 96% of all commercial code bases have dependencies on open source. Things were much more risk-averse when Amanda started in open source 18 years ago. Back then, it was quite easy to block open source in companies with a reference to the risk inherent in bringing it in. Today, that is not the case anymore.

Amanda referred to “The Open Source Definition”, https://en.wikipedia.org/wiki/The_Open_Source_Definition . In her telling, the most difficult points for companies to accept when contributing to open source are numbers 5 and 6:
- 5 – No discrimination against persons or groups.
- 6 – No discrimination against fields of endeavor, like commercial use.
These mean that the code a company contributes can be used by competitors for their own benefit. If you have a license that locks out certain users or usages, it is not open source. Instead, we see quite a few “public source” licenses in use.
Automotive is slowly moving towards more open-source solutions, which agrees with my observations from the past year.
The cloud is basically built on open-source software.
Pizza model.
Amanda offered a nice model to explain open source to management and especially to politicians. “The Pizza model”. Basically, open-source software is the base for a lot of different toppings – like AI, machine learning, cloud, internet, etc. All of these build on open source, but the applications themselves might not be software in the traditional sense.

EU Sovereignty.
- There is an interesting dynamic with some of the EU work on digital sovereignty. There have been noises made, apparently, about building software stacks that are all European. This is a problem for open source, where contributors can come from anywhere. Rejecting open-source projects due to US participation makes no sense.
- Initiatives like EuroStack have to accept that Open Source is international.
- Amanda also noted that US players have become more and more annoyed with the EU since 2018, as they perceive the EU's digital strivings to be anti-US.
Relicensing.
A rising number of companies are trying to re-license their code to get out of fully open-source legacies.
- Cloud companies using OSS, realizing huge value while returning little value to the software creators (see the open source definition discussion above; that is part of what you sign up for) – one example is Elasticsearch.
- Companies might just view open source as a great marketing strategy, to get people interested. MongoDB has mentioned this. It also ties into the discussion of “open washing”, where things are said to be open even if they really are not.
Amanda also provided a nice visual guide to the gradations of openness:

AI and open source.
- “OpenAI” stole the obvious name for a true open-source AI organization.
- The Llama-2 license is a good example of not-actually-open open source: the license has limitations saying that if you have enough users for a solution built on Llama, you have to get a commercial license from Meta. See points 5 and 6 above, again.
- China’s DeepSeek is the second most important contribution to open AI. It was freer than Llama, and the model was shared alongside really good documentation. The training data was still not shared, but it is much easier to build on than previous models.
What about hardware?
- CERN has an open hardware license for sharing hardware designs. https://cern-ohl.web.cern.ch/
- RISC-V is just a specification; it is not actually open hardware.
Journalist, Me?
One funny thing that happened during Amanda’s keynote was that she asked if I was a journalist, since I was busy taking photos with a conspicuously large camera and taking notes on a notepad. No, I am not, but writing a blog post like this does get close, doesn’t it?
Keynote: Automotive Simulation, with Ralph Schleifer, CARIAD
The second day’s keynote was provided by Ralph Schleifer from CARIAD (the VW software subsidiary). Ralph used to do virtual platforms in mobile comms at Ericsson and is now doing it for automotive. CARIAD sits at the same level in the VW group as the brands.

In a way, this was a continuation of the 2017 keynote by Berthold Hellenthal from Audi. What has happened since then? Which wishes and predictions turned out to be true? Even more interesting, Ralph found a 1999 presentation about simulation in automotive.
First of all, Ralph noted the well-known fact that car customers are changing – which means that the car companies also need to change the products they build and what they count as crucial product features. For example, in China, a car is seen as a digital system on wheels. Classic car aspects like horsepower are not as important. Software is a key part of this digitalization.
The software that CARIAD develops can be broken down into four stacks:
- Experience stacks
- Driver stacks
- Cloud stacks – of which not much was said, as they are quite different
- Motion stacks
The question for Ralph is just how to use simulation to aid and speed up the development of these stacks. The reasonable strategies for simulation will vary depending on the nature of the software.

Experience stacks.
- IVI systems, the digital in-car experience.
- Software-driven development/test/integration, mostly decoupled from hardware-specific features. Development focuses on user-facing applications.
- Android application simulation is a solved problem, and mostly hardware-independent. You “do not need a SystemC simulator of the hardware” to work on such apps.
- Connecting to the rest of the car is not a solved problem.
- Simulation needs APIs to get to the rest of the vehicle to access speed, state of windows, seat occupancy, …
Driver stacks.
- Automated driving. Driver support. Object detection.
- Key problem is algorithm development.
- Abstracts specific HW features, just like IVI. There is no need for RTL simulation of an image processor; trust the hardware to do its job right.
- Performance is critical – too-slow simulation does not help. The key is to get through actual data and check the results.
- For simulation, replaying scenarios through algorithms is a key feature.
- It does require SoC-level simulation for integration and for validating the eventual software – but not for the algorithm work itself. And since the code relies on pulling in sensor data from all over the car, it must be possible to check the connectivity and the software-sensor relationship.
Motion stacks.
- This is more like “classic” automotive software. Lower compute complexity. Focus on safety and security.
- Using RTOS and AUTOSAR. The trend is towards non-AUTOSAR real-time operating systems.
- Heterogeneous functions, due to the centralization of functions into zonal controllers.
- Development is driven by the hardware, and it is necessary to simulate the microcontrollers. The microcontrollers are not as hard to simulate as the HPC chips used for the experience and driver stacks.
- Timing-critical – control algorithms that work on 10 ms or shorter intervals.
- Need to simulate CAN, CAN-FD, Flexray, Ethernet, LIN, SENT, and other buses.
- A simulation slowdown of 100x is not a killer, since you are simulating a few seconds typically.
Overall, it has been hard to build a simulation environment that solves all the disparate use cases. Performance and model availability are particularly tricky.

Ralph also talked about the environment in which software is developed. Breaking it up into a number of dimensions:
- Classic OEM vs new player – how much legacy?
- Group of brands vs single brand – how many identities?
- Large number of variants vs a few variants? More variants make life harder. For example, in one case “80% of the vehicles worked fine” – but the other 20% had a configuration that broke the software.
- Degree of in-house SW development. It is easier to fix bugs if you have more of the software in house (or at least in source code).
- World-wide market vs focus on a few regional markets. There are different standards and rules in different markets, and it can be hard to find a single solution that works for all of them.
- Legacy and compatibility vs greenfield. If you have a lot of legacy, why change old things that work? But that does bring in complexities. To simulate a car, you would “need a simulation for some old wiper ECU”.
Then, time for the look back. Manfred Thanner found some 1999 slides on a “virtual garage” concept that did indeed mention “virtual ECU” and “network-level simulation”. That was more than 25 years ago – but those concepts are reality today.

In 2017 Berthold Hellenthal (Audi) made some important points:
- Need new processes across the whole value chain to create virtual vehicles. It is not just for the OEMs to do.
- The goal of simulation is higher development speed with lower risk.
- You also need a positive business case – how do you prove the ROI on the simulator? When looking to fund new simulation projects, Ralph hears a lot of “this is a good idea, BUT…”. The problem is, you have to invest before getting the ROI!
How did it go? Overall, translating semiconductor-industry practices to automotive did not work. The domains are too different, and virtual prototyping is a different problem in automotive compared to chip design. I can agree with that, even though I think the technological underpinnings are quite similar.
One thing that has happened since 2017 is that we are moving towards an established terminology, i.e., the 2020 ProStep iViP whitepaper defining level 0 to level 4 virtual ECUs. It did not look like the audience was all that familiar with it. Ralph also mentioned that the “level 4” that encompasses classic virtual platforms might need splitting up, allowing for some of the software to be changed. Agree on that one.
How does Automotive compare to mobile communications industry where Ralph used to work? One key difference is what is standardized. In automotive there are hardly any standards relevant to software development and ECU development. There are many standards for the vehicle. This makes model reuse and model integration harder than it should be.
As a contrast, back in 2015, he did a simulation of an Ericsson ASIC and tested the lowest levels of a data call from the ASIC to test equipment from Rohde & Schwarz. The ASIC and the test equipment were built independently from the same standard, and it just worked when you connected them, thanks to strong standards for how to exchange traffic. You cannot do the same in automotive today – which might be a bit of an unfair comparison, as telecommunication is all about compatibility in communications. The equivalent of a data call is maybe “wheels meet the road”…
When it comes to semiconductor development in Automotive, the results are mixed. Some OEMs are working on custom chips, while others have tried and decided not to. Ideally, one would like to move from “look at data sheets for chips and select the ones that match best” to “involve ourselves in hardware development”. But that is not easy to do.
- Mercedes-Benz – started a company in California to define chiplet-based hardware, with Athos.
- CARIAD & ST collaborate on development of a chip. It was started but did not result in anything.
Co-simulation or federated simulation is making progress. Ralph mentioned the Vector SIL Kit as one example (I presented a paper about SIL Kit, see below), and the new FMI version 3 standard with its bus simulation layer. There is also the Federated Simulation Standard being worked on in Accellera.
Simulation technology for the chip level is also moving forward. One particular example is the availability of Arm native execution – i.e., using Arm hosts to speed up Arm-target simulation. That did not exist in 2017 (but I have to remind the reader that the Intel Simics tool has had x86 host acceleration for x86 targets since 2005).
In the end, what is really needed is a simulation of a complete car. But that runs counter to the supply-chain model of automotive, which is very vertical and distributed. Whole-system integration comes late in the process, as only then are all the pieces available. However, that is not ideal. You want to test the network of ECUs early on, without applications – i.e., working at higher levels of abstraction where appropriate (as already mentioned).
Yet another aspect is the orchestration and automation of integrated simulations, and how to handle the complexity of big setups.
He feels the state today is that you can simulate a single slice, a single ECU. It is harder to do a complete car. He provided one concrete example of modeling just an SoC and a board, using a level 4 virtual platform.
- They modeled a board with smart fuses, connected to a standard ECU. The project focused on the smart fuses.
- Simulation is needed to feed in faults and similar conditions.
- The basic model just injects values to be read by the software.
- They moved to SystemC AMS to model electrical aspects more accurately, swapping between the abstract and the detailed model where it makes sense (see the sketch after this list for what such a model might look like).
- The SoC model came from an EDA vendor. It has interfaces to connect different things. But how does it connect to the fuse model? They would have liked standard board-level interfaces for the connections to the smart fuses that were not SoC-model-vendor-dependent (interoperability was a common thread throughout DVCon Europe 2025).
- Still, this was a successful project that was used to build a virtual HIL for a team starved for hardware test benches, using the same plant model and exactly the same test cases as the real-world test setup.
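To make the abstraction shift concrete: below is a minimal sketch of what a smart-fuse model could look like in the SystemC AMS TDF domain. The module name, ports, and instant-trip behavior are my own illustrative assumptions, not the actual model from the project.

```cpp
// Minimal sketch of a smart-fuse model in SystemC AMS (TDF domain).
// All names and the trip behavior are illustrative assumptions.
#include <systemc-ams>

SCA_TDF_MODULE(smart_fuse)
{
    sca_tdf::sca_in<double>  i_load;   // load current [A] from the plant model
    sca_tdf::sca_out<double> v_out;    // voltage passed on to the load [V]
    sca_tdf::sca_out<bool>   tripped;  // status read back by the SoC-side model

    double i_limit;   // trip threshold [A]
    double v_supply;  // supply voltage [V]
    bool   blown = false;

    void set_attributes() {
        set_timestep(10.0, sc_core::SC_US);  // electrical evaluation rate
    }

    void processing() {
        if (i_load.read() > i_limit)
            blown = true;  // simplistic instant trip; a real model would integrate I^2*t
        v_out.write(blown ? 0.0 : v_supply);
        tripped.write(blown);
    }

    smart_fuse(sc_core::sc_module_name nm, double limit = 10.0, double vs = 12.0)
        : i_limit(limit), v_supply(vs) {}
};
```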
Panel: Beyond the Chip: How Ecosystems Are Shaping the Future of System Design

Participants:
- Moussa Belkhiter, STMicroelectronics, MDRF Group. Vice-President, Central R&D.
- Andrew Dellow, Qualcomm. RISC-V ambassador. Director in the standards group at Qualcomm.
- Ralph Schleifer, CARIAD (return of the keynote speaker).
- Sara Vinco, Politecnico di Torino. Professor.
- Moderated by Axel Jahnke, Nokia (who gave a keynote at DVCon Europe in 2022).
The panel topic was “Beyond the Chip: How Ecosystems Are Shaping the Future of System Design”, which in hindsight turns out to be a bit too broad. But a couple of interesting observations did come out of it.
Overall, the panel became a discussion about models and modeling for complex systems built from parts from many different vendors and groups.
Academia really would like more complicated setups to do research on. Referring to the system simulations that Ralph talked about in his keynote: how could such setups be created or made available in academia? Not an easy question to answer.
The legal overhead of sharing models can dominate timelines. With piles of NDAs to sign, it can take months just to get all companies to agree to share even fairly basic information.

When it comes to models, a common theme was that it takes too much time to obtain them, often because they might have to be re-developed for each unique simulation or design environment. It might be easier to develop a new model than to try to share an existing one (see the previous point).
Ideally, the panelists would like to have open-source models that can simply be picked up and used. However, it is very unclear just how this could happen – in particular, how to deal with the tricky problem of hardware that is documented under NDAs. I guess it would work if the IP vendors provided the models, but so far that is very rare. You could work out some of the hardware function from open-source drivers, but such models will be incomplete. Purchasing departments could require models to be delivered, which I think makes more sense.
Maybe the solution is to go to entirely open-source hardware? If the hardware is open source, models would be too, and it would all work out – at least if the open-source hardware is well-maintained and companies pay for the work. Open source does not mean free (“free as in beer”), even though too many companies think it does.
Andrew Dellow talked about the possibility of open-source hardware distros analogous to Linux distros. Maybe that is a reality a decade from now? Something that companies maintain together?
Andrew also pointed out something interesting at the end: open source is a problem for getting systems certified. Being closed source supposedly gets you more points, even though open source in theory gives you the possibility of more scrutiny of the code.
Virtual Platforms
As already mentioned, virtual platforms were big this year at DVCon Europe. We had several tutorials and papers on the topic, and I will try to give a quick rundown of the highlights.
Architectural Modeling with Virtual Platforms
The first tutorial I attended was by Rocco Jonack from MinRes, currently consulting for Arteris, and Matthias Jung from the University of Würzburg. They talked about how to model an SoC including a Network-on-Chip (NoC) and a DRAM interface. The NoC was from Arteris and modeled using their NoC simulator. DRAMSys was used to model the DRAM subsystem, from the controller out to the DRAM modules.

The tutorial was about measuring the performance of the resulting system, which requires operating at the SystemC TLM AT (approximately-timed) level. Arteris provides architectural models of their NoCs for this purpose, and DRAMSys works best when fed traffic from an AT model. Trying to use a fast, simplistic system model with a precise model like DRAMSys is not likely to result in particularly useful numbers. Indeed, it is often better to put traffic on the NoC using dedicated traffic generators rather than running software on processor models – early on, you are not likely to have the final software anyway.
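For readers who have not built one: a dedicated traffic generator is just a TLM initiator producing synthetic transactions. Here is a minimal sketch; for brevity it uses the blocking transport call, while a real AT generator would use the non-blocking nb_transport phases to keep multiple requests in flight.

```cpp
// Minimal sketch of a dedicated TLM-2.0 traffic generator, as an
// alternative to running real software on a CPU model.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

struct traffic_gen : sc_core::sc_module
{
    tlm_utils::simple_initiator_socket<traffic_gen> socket;

    SC_CTOR(traffic_gen) : socket("socket") {
        SC_THREAD(run);
    }

    void run() {
        unsigned char buf[64];
        for (uint64_t addr = 0; addr < 0x10000; addr += sizeof(buf)) {
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(addr);        // e.g. a linear streaming pattern
            trans.set_data_ptr(buf);
            trans.set_data_length(sizeof(buf));
            trans.set_streaming_width(sizeof(buf));

            socket->b_transport(trans, delay);
            wait(delay);                    // honor timing returned by the target
            wait(100, sc_core::SC_NS);      // inter-transaction gap to tune load
        }
    }
};
```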
Matthias made a point that needs repeating: you need to know quite a bit about DDR and DDR address mappings to apply DRAMSys – and to design systems using DDR in the first place. You need to know how to set up memory interleaving rules and queuing in your DDR controller, how the DRAM modules are attached, and how banks and channels are distributed. This is a bit of a bother for researchers who just want a memory system to attach to their research simulator; you need to dig into more details than you might want. It is less of an issue if you are designing an actual chip, where you have concrete details. But “generic DDR studies” are hard to do.
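As a toy illustration of the kind of mapping knowledge Matthias was talking about, here is a made-up DDR address decoding. The bit assignments are purely illustrative; real controllers offer many configurable mappings, and choosing them well is exactly the hard part.

```cpp
// Toy example of DDR address decoding of the kind you need to understand
// to configure a tool like DRAMSys. Bit assignments are made up.
#include <cstdint>
#include <cstdio>

struct DramCoord { unsigned channel, bank, row, column; };

DramCoord decode(uint64_t addr)
{
    DramCoord c;
    c.channel = (addr >> 6)  & 0x1;    // bit 6: 2 channels, 64 B interleave
    c.column  = (addr >> 7)  & 0x3FF;  // bits 7..16: 1024 columns
    c.bank    = (addr >> 17) & 0xF;    // bits 17..20: 16 banks
    c.row     = (addr >> 21) & 0xFFFF; // bits 21..36: 64K rows
    return c;
}

int main()
{
    // Two addresses 64 B apart land on different channels -> parallelism.
    for (uint64_t a : {0x0ull, 0x40ull, 0x200000ull}) {
        DramCoord c = decode(a);
        std::printf("0x%llx -> ch %u bank %u row %u col %u\n",
                    (unsigned long long)a, c.channel, c.bank, c.row, c.column);
    }
}
```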
Rocco and Matthias also showed a variety of tools to capture and analyze performance and simulation data. DRAMSys comes with a tool that can show causal chains among memory operations, which is very useful for analyzing DDR bottlenecks specifically.
Testing and Fuzzing on Virtual Platforms
MachineWare, Arm, TraceTronic, and RWTH Aachen presented a joint tutorial that covered the MachineWare VCML modeling kit, the Arm Fast Models, and a couple of applications.

Mirroring what Ralph would say the next day in his keynote, simulation performance was called out as a key property of virtual platforms. In addition to the transaction-level abstraction that we all use, Daniel Owens from Arm talked about the breakthrough of Arm Native Execution (also mentioned above in this blog).
It should be noted that for architectural reference models like the Arm Fast Models, using native execution is not necessarily all that easy. If you want to simulate the precise details of an Arm v9.7A core, for example, and the hosts you have are on v9.0A, it is necessary to use classic instruction-set simulation techniques.
Matthias Berthold from TraceTronic came in from automotive to talk about using standard software-testing tools with virtual platforms. Using virtual platforms makes it possible to automate testing at scale – and using fault injection helps fulfil safety requirements. TraceTronic connected a virtual platform to their ecu.test tool using the FMI FMU interface, essentially loading the VP into ecu.test (in practice, the MachineWare implementation of FMI puts the simulator in a separate process, with a stub running as an FMU in the host simulator).
It should be noted that one effect of scalable and automated testing is that you get more test runs. Which means more test reports. Which means tools have to get better at handling large volumes of test reports – similar to what the previous tutorial said about analysis tools, actually. Furthermore, scaling testing requires compute resources, and if you are using cloud providers to run your test loads, scaling can get expensive really quickly. There is no free lunch.
Chiara Ghinami, a PhD student at RWTH Aachen, presented some work on fuzzing embedded code using a virtual platform. They attacked a CAN driver in the Zephyr network stack, using CAN frames as the payload to fuzz and collecting code coverage from the VP to drive an AFL++ fuzzing stack. Very familiar; I presented something similar in 2023.
A key question in fuzzing is how you reset the target system. In this case, they just restart the VP for each input. The tests were apparently short enough and the simulator fast enough that this was not a big problem. Using some kind of checkpointing would be nice, but as we know, that is an open problem for SystemC. Lukas mentioned that they are seeing some success with process-level save and restore.
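The restart-per-input scheme is simple enough to sketch. The vp_* API below is entirely hypothetical (the actual work used a MachineWare VP with AFL++-style coverage maps); it just shows the shape of the loop.

```cpp
// Sketch of a restart-per-input fuzzing loop. The vp_* API is hypothetical;
// coverage is assumed to be written by the VP into an AFL++-style
// shared-memory bitmap set up elsewhere.
#include <cstdint>
#include <cstddef>
#include <vector>

// Hypothetical virtual-platform API.
struct vp_sim;
vp_sim* vp_create(const char* platform_config);
void    vp_destroy(vp_sim*);
void    vp_inject_can_frame(vp_sim*, const uint8_t* data, size_t len);
bool    vp_run_until_idle(vp_sim*, uint64_t timeout_ns);

// One fuzzing iteration: fresh VP, one CAN frame as payload, run, tear down.
bool run_one_input(const std::vector<uint8_t>& input)
{
    vp_sim* vp = vp_create("zephyr_can_target.cfg"); // cold restart each time
    vp_inject_can_frame(vp, input.data(), input.size());
    bool ok = vp_run_until_idle(vp, 10'000'000 /* 10 ms of target time */);
    vp_destroy(vp);                                  // "reset" by reconstruction
    return ok; // a hang or crash here is exactly what the fuzzer looks for
}
```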
Remote GPU Simulator

Przemysław Mikluszka, from Imagination Technologies in Poland, presented a cool idea. They are working with the Imagination GPU simulator, which runs “too slow” on a developer’s own machine. Therefore, they want to run the GPU simulator on a remote server but keep the rest of the simulation on the local machine.
The starting point is an existing API that they have used to integrate their simulator with QEMU and gem5 in the past. Changing the API is not feasible, as there is a lot of tooling that depends on it. A particular aspect of the API is that the memory of the simulated device is handled outside of the model, and the model is provided with a memory access API. This is a critical problem for remoting the GPU simulator, as it makes memory accesses very often.
They turned their GPU simulator into a network server. Adding the network in the middle… has a huge impact on performance. With the client and server on the same machine, runs took 4x longer.
With a remote server, across a real-world network with 60 ms latency: a total performance disaster, with runs taking many times longer. That is a very long network latency – in my SIL Kit experiments, 10 ms was enough to tank performance. They tested 60, 90, and 120 ms network latencies and saw a linear increase in overhead! That is also interesting, as you might have expected super-linear overhead.

The solution is to implement what is in effect a memory cache on the GPU server side, transferring blocks of memory back and forth instead of doing individual memory accesses across the network. Essentially, they implement a coherent cache in software, with prefetching based on their knowledge of the GPU and its driver. With this, performance improved, but local runs are still faster.
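The block-transfer idea can be sketched as a simple software cache in front of the remote memory link. All names are illustrative, not the actual Imagination API; the real implementation adds coherence handling and GPU-aware prefetching on top of something like this.

```cpp
// Sketch of a server-side block cache: instead of one network round-trip
// per device memory access, fetch whole blocks and write back dirty ones.
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

static constexpr size_t BLOCK = 4096;

struct remote_mem {   // the network link back to the client-side memory
    virtual void fetch(uint64_t base, uint8_t* dst, size_t len) = 0;
    virtual void flush(uint64_t base, const uint8_t* src, size_t len) = 0;
};

class block_cache {
    struct line { std::vector<uint8_t> data; bool dirty = false; };
    std::unordered_map<uint64_t, line> lines_;   // keyed by block base address
    remote_mem& link_;

    line& get(uint64_t addr) {
        uint64_t base = addr & ~(uint64_t)(BLOCK - 1);
        auto it = lines_.find(base);
        if (it == lines_.end()) {                // miss: one bulk transfer
            line l; l.data.resize(BLOCK);
            link_.fetch(base, l.data.data(), BLOCK);
            it = lines_.emplace(base, std::move(l)).first;
        }
        return it->second;
    }

public:
    explicit block_cache(remote_mem& link) : link_(link) {}

    uint32_t read32(uint64_t addr) {
        line& l = get(addr);
        uint32_t v;
        std::memcpy(&v, &l.data[addr % BLOCK], sizeof v);
        return v;
    }
    void write32(uint64_t addr, uint32_t v) {
        line& l = get(addr);
        std::memcpy(&l.data[addr % BLOCK], &v, sizeof v);
        l.dirty = true;                          // flushed later, in bulk
    }
    void writeback_all() {
        for (auto& [base, l] : lines_)
            if (l.dirty) { link_.flush(base, l.data.data(), BLOCK); l.dirty = false; }
    }
};
```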
FMI in SystemC
Sara Vinco, from the Politecnico di Torino, presented work performed together with Dumarey Softtronix (who provided the problem and project funding). They implemented a way to export a SystemC virtual platform as an FMI FMU, for use inside other simulators – in particular, inside L3 simulators.

They claimed that FMI is rarely used in VPs – I do not think that is true; that feature has been around for a decade in VLAB. However, it is hard to tell what is going on in industry.
The FMU they create contains the target SystemC TLM device with a single blocking target socket (expanding to more sockets is a future project). Their system automatically adds a TLM initiator that drives the TLM target when data arrives from the outside. And then there is an FMU wrapper around the initiator + target, which is loaded into another simulator.
The SystemC subsystem is run for a certain amount of simulated time each time the FMU is invoked from the containing simulator. This was discussed in the question session – this strategy might run into trouble if the included SystemC model uses wait().
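The basic scheme is easy to sketch: each FMU communication step maps onto a bounded sc_start() call. The function below mimics an FMI 3.0 do-step entry point; the FMU boilerplate is omitted, and the structure is my guess at the design rather than the authors' actual code.

```cpp
// Sketch of "advance SystemC by one communication step per FMU call".
// The entry point mimics FMI 3.0's fmi3DoStep; instantiation and the
// variable get/set plumbing are omitted.
#include <systemc>

struct vp_top;                 // the generated initiator + TLM target subsystem
extern vp_top* g_top;          // built during FMU instantiation (not shown)

extern "C" int fmuDoStep(double currentTime, double stepSize)
{
    // Map the FMU communication step onto SystemC simulated time.
    sc_core::sc_time slice(stepSize, sc_core::SC_SEC);

    // Inputs set via the FMU variable interface have already been pushed
    // into g_top, which turns them into b_transport calls on the target.
    sc_core::sc_start(slice);

    // If the model calls wait() for longer than the slice, time alignment
    // between the FMU master and SystemC can drift - the issue raised in
    // the Q&A session.
    return 0; // fmi3OK
}
```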
They tested it together with another FMU in the same simulation. They still want to test what happens if you mix RTL and SystemC inside the same FMU. Note that they have not tested integrating an ISS into the FMU.
Reusing Models across Simulators
Torsten Hermann, Aumovio, presented a paper on how they want to reuse the same board-level component simulators across different L4 and L3 simulators. This is a recurring problem that was discussed many times at the conference, from different perspectives.

This particular work is about simulating a single ECU, not a whole vehicle. They want to validate the embedded software, not the hardware. To do that, they need the board components outside the main SoC or controller on the ECU, with the board components talking to a plant model. The key problem is seeing how the software interacts with the surrounding world. For example: software programming an MCU PWM to a certain duty cycle and seeing how this drives a motor to wipe windows or open the trunk. They want to keep the application software as-is, and the plant model as-is, with different simulation systems in between the two.
The software can be run on either L3 or L4. This is the key problem, as those levels of abstraction work quite differently from each other. Also, all simulators have their own way of hooking up more models. Thus, they are designing a framework to hook up simulator-independent reusable models to any L3 or L4 simulator they might use.

Their solution is to go down to the pin-level interface of the hardware components, which is considered common to L3 and L4. Thus, their model is that peripherals communicate via named pins. A pin has a name, a direction, and a signal type. For the board level, there is a wiring description showing how devices are connected. Then, you have an adapter from the board-level framework to any specific simulator used to run the software.
There was a long discussion about the particulars of the semantic model for the board components (partially after the end of the session). Essentially, the “wiring harness” defines its own simulation semantics.
The current design is that the pins of all models are mapped to boundary variables. New values are written into these boundary variables. Once all inputs are set, the system calls all modules in a fixed order. The order is determined by a dataflow analysis, starting from SoC output ports around back to SoC input ports. Activity is triggered either from the SoC model or the plant model.
This means that there is no spontaneous activity from the devices on the board – especially no activity triggered by time or posted events. The authors estimate that they can find 80% of all bugs without timing/event posting on the PCB side. Whether to add timing to board models is an active area of discussion.
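My understanding of the scheme, as a sketch (names are illustrative, not the framework's actual API): pins become named boundary variables, and evaluation is a single fixed-order sweep over the modules.

```cpp
// Sketch of the boundary-variable scheme: pins map to named variables,
// and modules are evaluated once, in a fixed dataflow-derived order.
#include <functional>
#include <map>
#include <string>
#include <vector>

struct board {
    std::map<std::string, double> pins;          // boundary variables, by pin name

    struct module {
        std::string name;
        std::function<void(board&)> evaluate;    // reads input pins, writes outputs
    };
    std::vector<module> eval_order;              // fixed order from dataflow analysis

    // Called when the SoC model (or plant model) has written its outputs.
    void propagate() {
        for (auto& m : eval_order)               // no events, no spontaneous activity
            m.evaluate(*this);
    }
};

// Example: a driver stage between an SoC PWM output pin and the plant model.
void wire_example(board& b)
{
    b.eval_order.push_back({"motor_driver", [](board& bd) {
        double duty = bd.pins["soc.pwm0.duty"];       // input from the SoC side
        bd.pins["plant.motor.voltage"] = 12.0 * duty; // output toward the plant
    }});
}
```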
Modeling a RISC-V Radar System
Michael Atzmuller from Infineon presented this paper, written in collaboration with the Johannes Kepler University in Linz.
They are working on the control (“sequencing”) part of a radar chip. The sequencing turns on power amplifiers for specific antennas, etc. – over 200 different control signals that need to be controlled very accurately in time.
Their old design uses a specialized sequencer that reads sequences of operations from RAM – in essence, a minimal processor with a highly specialized instruction set that is very limited in terms of control flow. Operations are converted into FIFO entries that contain timestamps plus values to be written, and those FIFOs drive the actual hardware. But it has limited expressive power, and no ecosystem for debug etc. To update the available operations, they have to modify the hardware, since everything is very specialized.

In this work, they tested a new solution where they replace the custom processor with a RISC-V-based one, hooking up the FIFOs from the existing design to CSRs in the RISC-V core.
The benefit of RISC-V for the product is that it makes it much easier to implement new features, since they now have a fully general processor instead of the limited custom instruction set. Furthermore, they can implement functionality using C or C++, with standard tools and debuggers. It makes it easier to write code.
In terms of modeling: the RISC-V core already exists in RTL, including the customization to talk to the hardware backend. They took the RTL for the core and converted it to SystemC using Verilator. They then ran this with a SystemC model of the rest of the system to validate functionality and test new functions. The details were a bit scarce, to be honest. But the key point is the combination of processor RTL for absolute accuracy with a test platform that is also very timing-accurate.
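For those who have not seen the flow: "verilator --sc" turns RTL into a SystemC module that can be instantiated next to handwritten models. A rough sketch of what such a testbench could look like, with hypothetical module and port names (not Infineon's actual design):

```cpp
// General shape of the Verilator RTL-to-SystemC flow. Vsequencer_core is
// assumed to be generated by "verilator --sc sequencer_core.v".
#include <systemc>
#include "Vsequencer_core.h"   // generated by Verilator from the RTL

int sc_main(int argc, char* argv[])
{
    sc_core::sc_clock clk("clk", 10, sc_core::SC_NS);
    sc_core::sc_signal<bool> rst_n;
    sc_core::sc_signal<uint32_t> fifo_data;   // timestamp+value entries to CSRs

    Vsequencer_core core("core");              // cycle-accurate verilated core
    core.clk(clk);
    core.rst_n(rst_n);
    core.fifo_data(fifo_data);
    // ... bind the remaining pins to the SystemC model of the radar backend

    rst_n.write(false);
    sc_core::sc_start(100, sc_core::SC_NS);    // hold reset
    rst_n.write(true);
    sc_core::sc_start(1, sc_core::SC_MS);      // run the timing-accurate test
    return 0;
}
```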
Virtual Platform Networking with SIL Kit
I had a paper at DVCon myself, about how we simulate networks between VLAB virtual platforms using the open-source Vector SIL Kit library.

Varying the SIL Kit time-sync time and simulator time quanta revealed some interesting facts about the relationship between network behavior, software behavior, and simulator performance. In essence, if the network latencies seen by the software become too long, the software will tend to change its behavior. Changed software behavior can result in a simulation running for a much shorter – or longer – time.

Thus, what looks like success in improving simulation performance can come from the software not doing the same thing. And if a simulation suddenly takes a very long time, it might be because the software changed its behavior rather than from the tuning of simulation parameters. The time to finish a run is not necessarily correlated with simulation performance if the software behavior changes.
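For reference, this is roughly what the tuning knob looks like in code, based on the public SIL Kit C++ API (simplified, with error handling and the actual VP coupling omitted; the 1 ms step is just an example value):

```cpp
// Roughly how the SIL Kit time-sync step is set, per the public C++ API.
#include <chrono>
#include "silkit/SilKit.hpp"
#include "silkit/services/orchestration/all.hpp"

using namespace std::chrono_literals;

int main()
{
    auto config = SilKit::Config::ParticipantConfigurationFromString("{}");
    auto participant = SilKit::CreateParticipant(config, "VirtualPlatform1",
                                                 "silkit://localhost:8500");
    auto* lifecycle = participant->CreateLifecycleService(
        {SilKit::Services::Orchestration::OperationMode::Coordinated});
    auto* timeSync = lifecycle->CreateTimeSyncService();

    // The step size is the knob discussed in the paper: each callback,
    // the virtual platform is advanced by one step of simulated time.
    timeSync->SetSimulationStepHandler(
        [](std::chrono::nanoseconds now, std::chrono::nanoseconds step) {
            // run_vp_for(step);  // hand the quantum to the VP scheduler (not shown)
        },
        1ms);

    lifecycle->StartLifecycle().wait(); // run until the system shuts down
    return 0;
}
```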
The full paper will appear on https://dvcon-proceedings.org/ in early 2026 (I think).
Exhibition
For the first time in many years, I had a product being exhibited on the floor: VLAB slides were shown in the Cadence booth. Good feeling.

Odds and Ends
In one of the tutorials, it was mentioned that the new Synopsys ZeBu 200 series is based on the “AMD Versal Premium VP1902” FPGA. This is a device optimized for emulation and prototyping applications! Not something I expected to happen, but it does make sense, as the market for such technology appears to be growing – including home-brew solutions.

While sampling the proceedings, I came across a paper from Undo that made a novel point about debug. Too bad I missed the session, but you can only be in one place at a time. Undo sells a time-traveling debug tool for solving tricky bugs. The paper showed how this can be applied to debugging hardware – when said hardware is designed using high-level synthesis from C and C++ code! Of course you can, but I had never thought about that benefit of HLS before. The source code is in “standard” programming languages and thus gets better tool support than the more niche classic RTL languages.
Give-Aways
I always like walking the exhibition to see what kind of clever or tasty give-aways companies are supplying, and this DVCon was no exception. Note that this only covers the companies I talked to. Here are my personal favorites:

AMIQ had a clever slogan on their puzzle give-away: “Untangle tricky problems”. AMIQ also sponsored the conference dinner, where they gave away luggage tags to all attendees. Actually a very useful item, and I took a few left-over ones home to use!

The best give-away I took home was a t-shirt from ChipRight. Nice functional material, and I like the way the print used two colors. Having printed some hoodies for the dinner, I now know how non-trivial it is to get good multicolor prints… The warm hats were a nice touch too, given that we are heading towards winter.

Right next to them, Lubis EDA were giving away nice little gummi bears, and gym towels, which looked really nice. A bit of a trend here, together with the exercise-focused t-shirts from ChipRight.
Conference Dinner
The conference dinner was back by popular demand.

We introduced the dinner in 2023 as part of the tenth-anniversary celebration for DVCon Europe. It was not part of the program in 2024, which led to negative feedback from attendees who missed it. Thus, the dinner was back in 2025, and it was arguably even better this year. We had more people attend, and Mark Burton had arranged for a string quartet to entertain us.

I have already published a blog with the lyrics of the custom song I performed and some more details on the entertainment.
