Last week was spent at the Design Automation Conference (DAC) in Las Vegas. I had a presentation and poster in the Designer/IP track about Clouds, Containers, and Virtual Platforms, and worked in the Intel Simulation Solutions booth on the show floor. DAC was good as always: I met many old friends in the industry and checked out the latest trends in EDA (hint: the same trends as everywhere else). One particularly nice surprise was a book (the printed type, not the Vegas “book” that means something else entirely).
I won an award for Designer Track Best Presentation in the Embedded Systems and Software category! This was entirely unexpected, and by pure luck I could actually attend the short awards ceremony. My flight left Vegas at 11.24, just late enough that I could attend the awards portion of the Thursday program and then quite literally run for a taxi to the airport.
Cloud was clearly a hot topic this year (see below), which might explain the win. Doing the presentation itself was really fun, and I managed to set a personal record of 17 slides in 13 minutes while staying entirely coherent and having time for two questions before leaving the stage. My session had six such 15-minute slots back-to-back, and it all worked out extremely well, with all speakers keeping to their time! Thanks to our excellent session chair, Natraj Ekambaram from NXP in Austin, for keeping it all on track! Fun anecdote: the NXP office in Austin used to be a Freescale office, and Natraj had worked with Simics back in the days of the QorIQ P4080.
The real highlight of the DAC Designer Track was the poster session. It was essentially a one-hour Q&A session where people would come by, ask questions, and discuss aspects of running virtual platforms in the cloud – and containers, and other things. It really made me totally rethink my approach to posters and poster sessions for the future!
I talked about cloud and containers in my presentation, and it is clear that Cloud was happening big time in EDA this year. There was an entire “Design Infrastructure Alley” section on the show floor, featuring companies such as Google (Cloud), Amazon (Web Services), and Microsoft (Azure). In addition to these mainstream big cloud companies, there were several companies selling cloud-related tools like scheduling and file management tools customized for the cloud.
Cadence also had a booth where they presented their cloud offerings, separate from and in addition to their even larger main booth. The big thing seemed to be the availability of Palladium emulators as a cloud service, available for rent for periods as short as one month. Selling a high-capital-investment product like Palladium as a cloud service makes a ton of sense, especially for smaller companies that would rather avoid the high up-front investment.
A core theme in all the EDA-in-the-cloud presentations I heard was using the cloud to quickly scale up execution resources for EDA workloads on demand and finish jobs sooner – going from nothing to tens of thousands of active cores and back to nothing again. Doing this requires more than a classic cloud offering, though: it is also necessary to make sure that data can get to and from the compute nodes quickly, and to support the particular needs of EDA software. All the cloud companies had some kind of “high-performance” instance types available to best run compute-intensive workloads, including doing things like turning off hyperthreading on the hosts to optimize performance.
Security and confidentiality of customer IP when running workloads in the cloud seemed to be a solved problem. Contrary to my curmudgeonly expectations, it seems this can work well enough – often by renting entire dedicated hardware units rather than virtual machines (VMs) that run mixed in with other companies’ VMs. Microsoft also offered the ability to rent actual physical hardware with no VM layer, using specialized hardware from Cray. Another significant constraint is the availability of licenses for the tools: several companies discussed ways to manage licenses to make optimal use of a limited resource, allocating them to different locations or cloud setups depending on current demand and policies.
AI and ML
In addition to Cloud, most everyone was talking about Artificial Intelligence (AI) and Machine Learning (ML). There were quite a few companies on the floor pitching various ML accelerator IP blocks. Companies starting new designs of such accelerators were judged by stock analysts to be contributing significantly to the growth in EDA revenue over the next few years. There were also attempts at applying ML techniques inside of EDA flows, to produce better results or better guidance for how to actually run the tools. Verifyter, whom I wrote about in my DVCon Europe 2018 blog, was showing their ML-based code quality tools.
Funky Performance Hybrid
I had time to attend a few research talks and talks on the show floor. A really interesting one was a performance hybrid setup created by Toshiba in Japan, mixing code running on a PC with kernels running on a physical ARM SoC.
The idea is to allow easy code development in a PC environment (compile for an x86 Linux or Windows host, debug natively) without too much error in the performance estimates. Running applications compiled on a PC host is nice for speed of development, but risks being way off in terms of performance. This is particularly annoying for things like trained neural networks, where retraining a network to fix a performance problem can take a long time – thus, you want performance feedback for the core parts of the application early on.
Interestingly, they have hardware rather early on, since automotive hardware tends to have a long life. The problem is rather that they do not want to do a full port of the code to the actual target machine with its real-time operating system, or even full development of the target application.
Their solution is to develop software on a standard PC using ROS and OpenCV, and delay porting to the target for as long as possible. To get accurate performance estimates for the important kernels, they port just these kernels to the target hardware and run them there – but with the application still on the PC. This is achieved with a very simple hypervisor layer running on the target SoC, specialized for performance measurements. Overall, an interesting way to combine development on a PC with performance estimates from real hardware.
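To make the split concrete, here is a minimal sketch of the general pattern: the application stays on the PC, while a stub ships each kernel invocation to the target and gets back the result along with the execution time measured on the target side. Everything here is hypothetical illustration – the kernel, the wire protocol, and the `target_stub` agent are my inventions; Toshiba's actual setup uses a thin hypervisor on the SoC, not a network agent like this. The demo runs both ends on localhost so it is self-contained.

```python
import json
import socket
import struct
import threading
import time


def example_kernel(data):
    """Hypothetical stand-in for a compute kernel that would be
    ported to and timed on the target hardware."""
    return [abs(data[i + 1] - data[i]) for i in range(len(data) - 1)]


def recv_exact(conn, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed early")
        buf += chunk
    return buf


def target_stub(server_sock):
    """Simulated target-side agent: receive kernel inputs, run the
    (ported) kernel, measure its time locally, send both back."""
    conn, _ = server_sock.accept()
    with conn:
        size = struct.unpack(">I", recv_exact(conn, 4))[0]
        args = json.loads(recv_exact(conn, size))
        t0 = time.perf_counter()
        result = example_kernel(args["data"])
        elapsed = time.perf_counter() - t0
        reply = json.dumps({"result": result,
                            "target_time_s": elapsed}).encode()
        conn.sendall(struct.pack(">I", len(reply)) + reply)


def run_on_target(host, port, data):
    """PC-side stub: ship the kernel inputs to the target and get
    back the result plus the time measured on the target itself."""
    with socket.create_connection((host, port)) as s:
        msg = json.dumps({"data": data}).encode()
        s.sendall(struct.pack(">I", len(msg)) + msg)
        size = struct.unpack(">I", recv_exact(s, 4))[0]
        return json.loads(recv_exact(s, size))


# Demo: run the "target" in a thread on localhost.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=target_stub, args=(server,))
t.start()
reply = run_on_target("127.0.0.1", port, [1, 4, 2, 9])
t.join()
server.close()
print(reply["result"])  # kernel output computed on the "target"
```

The point of the pattern is that the performance number comes from the target side of the boundary, so the PC's (much faster or slower) execution speed never pollutes the estimate.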
Breakfast Bytes Book
Last time I went to the DAC, in 2016, I got a free book from S2C that offered a really good introduction to FPGA prototyping technology. This year, I found a book in the Cadence booth, where Paul McLellan was giving away signed copies of his book “A Year of Breakfasts”, collecting the best posts from his Breakfast Bytes blog from 2018. Apart from Paul mentioning the book on his blog, there seems to be no way to link to it.
It was a really good read for the flight home, offering nice summaries on topics of great interest like how to compress neural networks, trends in automotive, and semiconductors in China. A long time ago, Paul and I worked together in marketing Simics at Virtutech, and it was great to meet up and talk a bit about the old times too! He is doing a great job at Cadence with the blog, highly recommended reading!
No trade show is complete without give-aways, and there are two companies worth mentioning here. Mentor was giving out genuinely useful things like a cooler lunch bag and an insulated bottle made out of plastic, which means you can actually freeze it to take some cold with you. It insulates worse than your typical metal bottle, but is very useful in practice.
The most topical give-away came from Excellicon, a company whose products I honestly cannot tell you much about. But their dice cup was a really nice touch, in keeping with the Las Vegas gambling theme – though most people seemed to miss the point until the thinking behind it was explained.