The Event at the End of Universe
This is a short story from the world of virtual platforms. It is about how hard – or easy – it is to model a simple and well-defined hardware behavior that turns out to mercilessly expose the limitations of simulation kernels.
Intel Blog: Parallelizing a Virtual Platform Model
There are many ways to use threading and parallelization to improve the performance of virtual platforms. It is not always easy to successfully use parallelization – it very much depends on the nature of the workloads and model setup – but when it works it can really help. I recently published a long blog post at Intel, detailing an idealized example of threading for a device model that is shipping in the Simics training package.
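The blog post has the full details; as a rough, hypothetical illustration of the idea (my own sketch, not the actual model shipping in the Simics training package), a compute-heavy device operation can be split across worker threads when the work items are independent:

    #include <cstddef>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Hypothetical compute-heavy device operation: transform a buffer in
    // independent chunks, one worker thread per chunk. A real virtual
    // platform model must also respect the simulation kernel's threading
    // rules; this only shows the idealized parallel computation.
    static void process_chunk(std::uint8_t *data, std::size_t len) {
        for (std::size_t i = 0; i < len; i++) {
            data[i] ^= 0xFF;  // stand-in for the real per-byte work
        }
    }

    static void process_buffer_parallel(std::uint8_t *data, std::size_t len,
                                        unsigned nthreads) {
        std::vector<std::thread> workers;
        const std::size_t chunk = len / nthreads;
        for (unsigned t = 0; t < nthreads; t++) {
            std::size_t begin = t * chunk;
            std::size_t end = (t + 1 == nthreads) ? len : begin + chunk;
            workers.emplace_back(process_chunk, data + begin, end - begin);
        }
        for (auto &w : workers) {
            w.join();  // the device "completes" only once all chunks are done
        }
    }

Whether this kind of threading pays off depends on the chunk size relative to the thread-management overhead – which is exactly the workload-dependence mentioned above.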
DVCon Europe 2023 – 10th Anniversary Edition
The 2023 DVCon (Design and Verification) Europe conference took place on November 14 and 15, in the traditional location of the Holiday Inn Munich City Center. This was the 10th time the conference took place, serving as an excuse for a great anniversary dinner. Also new was the addition of a research track to provide academics publishing at the conference with the academic credit their work deserves. This year had a large number of papers related to virtual platforms, so writing this report has taken me longer than usual. There was just so much to cover.
The ESA Schiaparelli Crash & Simulation
Back in 2016, the European Space Agency (ESA) lost the Schiaparelli Mars lander during its descent to the surface of Mars. From a software engineering and testing perspective, the story of why the landing failed (see for example the ESA final analysis, Space News, or the BBC) is instructive. It comes down to how software is written and tested to deal with unexpected inputs in unexpected circumstances. I published a blog post about this right after the event, before the final analysis was available. Thankfully, that post has since been retired from its original location – it was a bit too full of speculation that turned out to be incorrect… So here is a mostly rewritten version of the post, quoting the final analysis and adding new insights.
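As a purely illustrative sketch (my own, not ESA flight software or anything from the final analysis), the lesson is about the kind of defensive check that treats a saturated or out-of-range sensor reading as suspect instead of feeding it straight into downstream computations:

    #include <cstdio>

    // Illustrative only -- not flight code. The idea: treat a saturated
    // or out-of-range measurement as suspect, and fall back to the last
    // known-good value instead of propagating garbage downstream.
    struct Reading {
        double value;
        bool saturated;  // sensor reports it hit its measurement limit
    };

    double accept_reading(Reading r, double last_good,
                          double min_valid, double max_valid) {
        if (r.saturated || r.value < min_valid || r.value > max_valid) {
            std::printf("warning: rejecting suspect reading %f\n", r.value);
            return last_good;
        }
        return r.value;
    }

The hard part, of course, is testing such code against circumstances nobody anticipated – which is where simulation comes in.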
Simulating Computer Architecture with “Mechanistic” Models – No more 100k Slowdown?
I have been working with computer simulation and computer architecture for more than 20 years, and one thing that has been remarkably stable over time is the slowdown inherent in “cycle-accurate” computer simulation. Regardless of who I talked to or what they were modeling, the simulators ran around 100 thousand times slower than the machine being modeled. That even holds true going back to the 1960s! However, there is a variant of simulation that aims to make useful performance predictions while running around 10x faster (or more): mechanistic models, in particular the Sniper simulator.
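To make the numbers concrete, here is the back-of-the-envelope arithmetic, assuming the roughly 10x speedup applies directly:

    1 s of target time × 100,000 slowdown = 100,000 s ≈ 28 hours of host time
    1 s of target time ×  10,000 slowdown =  10,000 s ≈ 2.8 hours of host time

That is the difference between getting a performance estimate overnight and getting it more than a day later.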
Simulators in Racks at the Embedded World 2018
I work with virtual platforms and software simulation technology, where most simulation is done on standard servers, PCs, or laptops. Sometimes we connect an FPGA prototype or emulator box to run some RTL, or maybe a real-world PCIe device, but most of the time a simulator is just another general-purpose computer with no special distinguishing properties. When we do connect to the real world, it is via simple standard interfaces like Ethernet, serial ports, or USB.
There are other types of simulators in the world, however – still based on computers running software, but running it somehow closer to the real world, and with actual physical connections to real hardware beyond basic Ethernet and USB. I saw a couple of nice examples of this at the Embedded World back in February, where full-height racks were basically “simulators”.
Talking at the Embedded World 2018
I will be presenting an Exhibitor Forum talk at the Embedded World in Nürnberg next week, about how to get to Agile and small batches for embedded development, using simulation to get around the annoyingly hard aspects of hardware.
More details at https://software.intel.com/en-us/blogs/2018/02/19/embedded-world-getting-agile-with-simulation
Simulation in the IBM ACS Project – Current Practices in 1966
I once wrote a blog post about the use of computer architecture pipeline simulation in the IBM “Stretch” project, which seems to be the first use of computer architecture simulation to design a processor. After the “Stretch” machine, IBM released the S/360 family in 1964. Then the Control Data Corporation showed up with their CDC 6600 supercomputer, and IBM started a number of projects to design a competitive high-end computer for the high-performance computing market. One of them, Project Y, became the IBM Advanced Computing Systems (ACS) project. In the ACS project, simulation was used to document, evaluate, and validate the very aggressive design. There are some nuggets about the simulator strewn across historical articles about the ACS, as well as an actual technical report from 1966 that I found online describing the simulation technology! Thus, it is possible to take a deeper look at computer architecture simulation from the mid-1960s.
Intel Blog Post: Getting to Small Batches in System Development using Simulation
I have posted a two-part blog post to the public Intel Developer Zone blog, about the “Small Batches Principle” and how simulation helps us achieve it for complicated hardware-software systems. I found the idea of the “small batch” a very good way to frame my thinking about what it is that simulation really brings to system development. The key idea I want to get at is this:
[…] the small batches principle: it is better to do work in small batches than big leaps. Small batches permit us to deliver results faster, with higher quality and less stress.
Intel Blog: Continuous Delivery for Embedded Systems and how Simulation can Help
Doing continuous integration and continuous delivery for embedded systems is not necessarily all that easy. You need to get tools in place to support automatic testing, and free yourself from unneeded hardware dependencies. Based on an inspiring talk by Mike Long from Norway, I have a piece on how simulation helps with embedded CI and CD on my Software Evangelist blog on the Intel Developer Zone.
Intel Blog: Using CoFluent to Model and Simulate Big Data Systems
Intel CoFluent Technology is a simulation and modeling tool that can be used for a wide variety of systems at different levels of scale – from the micro-architecture of a hardware accelerator all the way up to clustered, networked big data systems. On the Intel Evangelist blog on the Intel Developer Zone, I have a write-up on how CoFluent is being used to model just that: big data systems. I found the topic rather fascinating – how you can actually make good predictions for systems at that scale without delving into details. At some point, I guess, systems become big enough that you can start to make accurate predictions, thanks to how behavior kind of smooths out at that scale.
Wind River Blog: The Trinity of Simulation
There is a new post at my Wind River blog, about the Trinity of Simulation – the computer, the system, and the world. It discusses how you build a really complete system model using not just a virtual platform like Simics, but also integrating it with a model of the system the computer sits in, as well as the world around it.
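As a minimal sketch of the structure – hypothetical interfaces of my own invention, not the Simics API – the three models can be thought of as stepping together in a lockstep co-simulation loop:

    // Hypothetical interfaces, not the Simics API: the computer, the
    // system it sits in, and the world around it, stepped in lockstep.
    struct Model {
        virtual void step(double dt) = 0;
        virtual ~Model() = default;
    };

    struct ComputerModel : Model {
        void step(double) override { /* run target software for dt */ }
    };
    struct SystemModel : Model {
        void step(double) override { /* mechanics, sensors, actuators */ }
    };
    struct WorldModel : Model {
        void step(double) override { /* environment physics */ }
    };

    int main() {
        ComputerModel computer;
        SystemModel system;
        WorldModel world;
        Model *models[] = { &computer, &system, &world };
        const double dt = 0.001;          // 1 ms global time step
        for (int i = 0; i < 1000; i++) {  // simulate one second
            for (Model *m : models) {
                m->step(dt);
            }
        }
        return 0;
    }

A real integration has to deal with different time scales and synchronization schemes between the simulators, but the overall shape is the same.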
Read more about it in the blog post, and all the older blog posts it links to!
GPGPU for Instruction-Set Simulation – Maybe, Maybe not
I just read a quite interesting article by Christian Pinto et al., “GPGPU-Accelerated Parallel and Fast Simulation of Thousand-core Platforms”, published at the CCGRID 2011 conference. It discusses work on using a GPGPU to run simulations of massively parallel computers, using the parallelism of the GPU to speed up the simulation. Intriguing concept, but the execution is not without its flaws, and it is unclear, at least from the paper, just how well this generalizes, scales, or compares to parallel simulation on a general-purpose multicore machine.
Gary Stringham on Hardware Interface Design vs Virtual Platforms
I just read an interesting paper from the 2004 Embedded Systems Conference (ESC) written by Gary Stringham. It is called “ASIC Design Practices from a Firmware Perspective” and straddles the boundary between hardware design and driver software development. It was good to see someone take the viewpoint that “how you actually program a hardware device is as important as what it does”. Gary seems to understand both the hardware design and implementation view of things, as well as that of the embedded software engineer. To me, that seems to be a fairly rare combination of skills – to the detriment of the entire ecosystem of computer system development.
Cadence-Ran vs Synopsys-Frank over Low-Power and Virtual Things
Over the past few weeks there was an interesting exchange of blog posts, opinions, and ideas between Frank Schirrmeister of Synopsys and Ran Avinun of Cadence. It is about virtual platforms vs. hardware emulation, and how to do low-power design “properly”. Quite a lively debate, and I think Frank is a bit more right in his thinking about virtual platforms and how to use them. Read on for some comments on the exchange.