DAC 2025 – All About AI

The 62nd Design Automation Conference (DAC 62) took place in San Francisco, California, USA, from June 22 to 25, 2025. It was the first time in three years that I attended the DAC. For those who do not know, the DAC is the biggest show in EDA, combining a major research conference with an industry exhibition and engineering track. This year the theme was AI (Artificial Intelligence), and not much else.

This blog post comes out almost a month after the conference. I was a bit busy after the DAC and had some other blogs that I wanted to get out first, so this summary is admittedly a tad late.

The conference has a new tagline, “the chips to systems conference”. While we have been talking about software in the context of hardware design for more than a decade (I attended the DAC in 2009 when software was brand new on the agenda), the era of chiplets and AI puts much more emphasis on the system. And that system is more than just the “system on a chip” – it includes boards, racks, interconnects, complex software stacks, power distribution architecture, and data-center-level optimization.

Conference Activities Everywhere

The DAC is a busy event, with lots going on at once. The research part has a rich program on its own, with the exhibition and engineering tracks adding even more activity. It is simply not possible to see everything interesting.

One thing that I found notable is that the DAC is big on panel discussions. I am always a bit skeptical about panels as they can get rather boring… but here the organizers manage to find the right panel participants and good moderators who ask interesting questions and drive real discovery and discussion. Having many leaders from both the academic and industrial sides of EDA present does help.

Main Themes

The theme of this year’s DAC can be summarized as “using AI to generate and verify chiplet designs that will be used to run AI workloads”.

AI was the dominant theme. It was literally everywhere, with nothing else coming close in prominence. Some illustrative examples to show just how prevalent it was:

  • All three keynote talks were about AI (or at least Machine Learning).
  • Several new (at least to me) companies based on AI exhibited at the DAC. Most of them have “AI” at the end of their name. For example: Bronco AI, ChipAgents AI, and Oboe AI.
  • Established companies are emphasizing their new AI offerings. Cadence, Synopsys, and Siemens EDA all have extensive families of AI-based tools on offer. IBM Watson AI and Keysight AI were exhibiting – established brands with “AI” appended to their names.
  • The big cloud vendors are selling access to AI compute resources to be used with AI-powered EDA tools. They also pitched their general-market AI platforms as a basis for design and verification flows.

Agentic AI is definitely the term of the year. The meaning might not be entirely clear yet, but it tends to mean AI (i.e., LLM) solutions tailored to a specific task in a specific workflow. By narrowing down the use case, it is possible to equip an LLM with specific tools and specific knowledge, allowing it to do a good job on that task – better than if you just tried to apply a general-purpose LLM in a general way. It is also easier to design products using the agentic approach – take an EDA workflow, and build agents that tackle specific point problems.
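
To make the agentic idea concrete, here is a minimal sketch of what a point-problem agent could look like. This is purely my own illustration – the lint-fixing scenario, the function names, and the stub implementations are all hypothetical, not any vendor’s actual product:

```python
# Minimal sketch of the agentic pattern: an LLM scoped to one point
# problem in a workflow, equipped with only the tools and context it
# needs. Everything here is a hypothetical stand-in.

def lint_rtl(rtl: str) -> list[str]:
    """Stand-in for an existing EDA lint tool."""
    return ["WIDTH: implicit truncation of RHS"] if "assign" in rtl else []

def run_llm(system: str, user: str) -> str:
    """Stand-in for an LLM API call; a real agent would call a model."""
    return user  # echo; a real model would return the fixed RTL

def lint_fix_agent(rtl: str) -> str:
    """One narrow agent: it sees one file and its lint report, nothing else."""
    report = lint_rtl(rtl)
    if not report:
        return rtl
    prompt = f"RTL:\n{rtl}\n\nLint report:\n" + "\n".join(report)
    return run_llm(system="Fix each lint warning with a minimal edit.",
                   user=prompt)

# The overall workflow stays ordinary code; agents handle point problems.
print(lint_fix_agent("assign out = in_a + in_b;"))
```

The point is that the agent’s scope is narrow enough that the prompt, the tools, and the validation can all be specialized to the task.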

It was quite fitting that Waymo self-driving taxis circled around downtown San Francisco and the Moscone Center. That showed “AI” in action! Quite impressive, even if the cars had a very large number of extra sensors sticking out around the body, as well as a big turret on the roof. I never had a reason to try a ride, but those who did said the cars drove very carefully and smoothly.

The second theme was chiplets. It was a topic in talks and technical presentations, and also visible on the show floor. It just couldn’t compete with the massive volume of AI marketing.

There was a chiplet pavilion on the second floor of the exhibition (in full, the “EETimes: The Future of Chiplets 2025” pavilion). Vendors like Cadence and Synopsys had secondary booths here in addition to their main booths.

Power consumption kept coming up as a topic in various talks. There is always the classic optimization of power/performance for an IP or a chip. But at the system level, AI really brings in power as a major constraint and design factor. We are looking at gigawatt-level data centers, where single racks can draw upwards of 800 kW! Even if all parts are very power efficient, the sheer scale of deployment drives the overall energy consumption up to ridiculous levels.

Acquisitions. This year seems to be an unusually intense time for acquisitions. It was almost comical walking around the show floor and realizing that several exhibitors were now part of larger companies. For example, Altair and Excellicon are going to Siemens. Ansys was in the process of being acquired by Synopsys, even if the deal had not closed just yet (it did close in mid-July, just before this blog was published). Alphawave (exhibiting in the chiplet pavilion) was just sold to Qualcomm.

I also think we are seeing a resurgence of small companies, I would guess thanks to AI driving investment into the EDA sector. Eventually, that will likely result in more acquisitions 😊

Automotive at the DAC

Automotive was not a huge topic at the DAC, but it was there. Hardware and chip design for automotive is a growing market as the need for compute in cars explodes. Some automotive OEMs are looking into building their own chips, not just buying from the traditional chip vendors.

Ford and Cariad are both members of Accellera – an organization where you previously tended to see only the companies supplying the automotive industry (like NXP, ST, Renesas, Qualcomm, Infineon) as members.

In the research track, I stumbled across a poster from GM Research about a budding standard for Automotive Remote Direct Memory Access (ARDMA). That such work is shown at an EDA event like the DAC is interesting in its own right.

One of the analyst talks was from AutomotiveVentures (more details later).

The Siemens booth had a demo of their simulation solutions featuring a physical Ford Mustang electric car. They ran an autonomous driving system through a simulated world, projecting the simulated road onto the windshield of the car! The car wheels were also turning to reflect the steering inputs – but obviously they had disabled the actual drive.

Keynote: Michaela Blott – “Enabling the AI Revolution”

Michaela Blott is a fellow at AMD Labs in Ireland. She comes from the Xilinx/FPGA side of AMD. We had her as a keynote speaker at DVCon Europe in 2013. This talk was a bit less technical, but still interesting.

She started with a reflection on the current state of AI. It is overwhelming how much information is coming out all the time; nobody can follow it all themselves. Michaela uses a strategy of following selected experts who keep their eyes on specific subfields. There is a division between “bulls” and “bears” for AI, just like in the stock market (bulls are positive, bears negative). Their viewpoints obviously deviate quite a bit.

AI is both new and established in EDA. The first DAC paper featuring neural networks was in 2017. Synopsys launched a tool “featuring AI” in 2020. Cadence followed in 2021. Today it is everywhere, as already noted above.

The next wave of AI impact will be in embedded applications. It started in the cloud, and then moved on to personal devices (“endpoints”) like phones, Copilot PCs, and games. Now, it is being embedded in the physical world. This step also provides some hope that AI applications will actually see real revenue, as there should be more easily monetizable value. Chatbots are a lousy business: the switching cost is low, the competition is very aggressive, and it is virtually impossible to charge what it actually costs to run queries.

The main problem for AI today is making inference affordable to run. Amazon Web Services already claims that inference is 90% of its AI workloads today, and Google puts it at 60%. New concepts like “test-time compute”, with their iterative application of inference, radically increase the time and thus the cost to produce a result. Michaela said that linear improvements in output quality need exponentially more time.

To make AI more efficient, you need innovations on both the software and hardware sides. Many companies are pursuing the hardware angle, in particular by building ever more customized hardware. Customization (or specialization, as I would put it) is how we get efficiency in computing in general. However, hardware that is too specialized has the problem that if something changes in the software, the hardware becomes less than useful.

On the software side, Michaela cited a large number of techniques currently being used to improve the performance and efficiency of AI. For example, mixture-of-experts (running more but smaller models in response to a query). Using smaller models in general is also attractive to reduce the cost of execution. Optimizations like KV caching and other changes to how attention blocks are implemented also reduce the cost.

She made an interesting observation related to quantization: as we have moved from FP32 to FP4, matrix multiplication is no longer the main bottleneck in AI!
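
A back-of-the-envelope illustration of why that is (the numbers here are my own, not from the talk): weight storage shrinks by 8x going from FP32 to FP4, so the memory traffic per multiply-accumulate drops dramatically, and moving data starts to dominate before the multipliers ever saturate.

```python
# Back-of-the-envelope: why low-precision weights shift the bottleneck
# away from the matrix multipliers. Illustrative numbers, not from the talk.

params = 70e9  # a hypothetical 70B-parameter model
bytes_per_param = {"FP32": 4.0, "FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

for fmt, b in bytes_per_param.items():
    print(f"{fmt}: weights occupy {params * b / 1e9:.0f} GB")
# FP32: 280 GB ... FP4: 35 GB. The same multiply-accumulates now need
# an eighth of the memory traffic, so data movement (and everything
# around the matmul) becomes the limiting factor.
```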

In the end, she talked about the tools (FINN, LogicNets, Brevitas, and more) that AMD has in place to implement AI efficiently on FPGAs. These key tools shrink models and allow very small, tight, optimized implementations to run on FPGAs with low latencies.

Keynote: Jason Cong – “Democratizing Chip Design”

Jason Cong is a professor at UCLA who has been active in EDA for a very long time. His main background is also in FPGAs, and in particular in High-Level Synthesis. His research led to the Xilinx Vitis and Xilinx Vivado HLS tools.

The problem he is trying to solve is that chip designers are rare. As he told it, the US has 2M software developers but at most 100k hardware engineers. At the universities, computer science is growing, while electrical engineering and computer engineering are not. Young people are interested in doing software, not hardware – which is unfortunate, as we need more chip designers.

Chips are getting bigger all the time. He brought up the example of the Nvidia B200, which has 208 billion transistors. In fact, chips are the most complex objects that humanity has ever created!

His goal is to empower interested software developers to create hardware – that is, hardware designs for domain-specific computing. The target audience is software programmers who care about performance, and who need a way to overcome the end of processors getting better and performance coming for free.

Domain-specific computing is his term for what Michaela called customized computing. They are both pursuing the idea of building hardware that is tightly tailored to a certain problem.

He ended his talk with some slides on agentic AI. His team is working on ways to apply LLMs to chip design flows, and he did have some interesting observations on what works and what does not.

He does not believe that generic LLMs can be trusted to generate RTL – you need specialization and human intelligence to understand the problem. Design agents are instead used to do specific things, such as optimizing area, optimizing performance, and critiquing the generated designs. In the end, he wants to combine humans and supporting agents.

Accellera Panel on AI

Accellera organized a luncheon at the DAC. It combined a networking opportunity with a panel on AI in chip design, plus a short update on Accellera itself.

Panel participants, all of them experienced in the application of AI to chip design:

  • Daniel Nenni of SemiWiki (https://semiwiki.com/), moderator.
  • Chuck Alpert, Cadence
  • Erik Berg, Microsoft
  • Monika Farkash, AMD
  • Harry Foster, Siemens
  • Badri Gopalan, Synopsys
  • Syed Suhaib, Nvidia

“AI” is really nothing new in EDA, but it has changed. 40 years ago it was implemented as rules in synthesis tools. 20 years ago machine learning was in active use. The big thing now is the explosion of chatbots and LLMs.

Workflow creation and automation came up several times as a common use case for LLMs in EDA. AI-powered assistants are currently the best use model. Thinking in terms of workflows is a good match for agentic AI, as agents can be defined to assist with or perform particular functions in the flow. Many panelists mentioned that AI is perfect for writing scripts or replacing scripting (scripting is a huge part of EDA workflows).

When setting up new AI projects, it is important to start from the “what” and define the workflow. It makes no sense to just say “we will use an LLM”. You need to think through the big picture and decide where LLMs/AI/ML can be usefully applied.

When asked about the productivity gains that AI might bring, the panel participants hovered around 50%. Companies on the show floor claimed up to 10x – but I assume this comes from looking at individual tasks rather than a complete workflow. Accelerating a single task significantly might not have a big global impact if the task is not a large proportion of the overall workflow. Amdahl’s law, applied to AI agents (see the sketch below).
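
The math here is simple enough to be worth writing down. A quick sketch (my own illustration):

```python
# Amdahl's law applied to AI agents: accelerating one task helps the
# whole workflow only in proportion to that task's share of it.

def overall_speedup(task_fraction: float, task_speedup: float) -> float:
    """Classic Amdahl: 1 / ((1 - p) + p / s)."""
    return 1.0 / ((1.0 - task_fraction) + task_fraction / task_speedup)

# A 10x-accelerated task that is 10% of the workflow: ~1.1x overall.
print(f"{overall_speedup(0.10, 10.0):.2f}x")  # -> 1.10x
# The same 10x applied to half of the workflow: ~1.8x overall.
print(f"{overall_speedup(0.50, 10.0):.2f}x")  # -> 1.82x
```

So a vendor’s honest “10x on this task” and a panelist’s “50% overall” can both be true at the same time.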

A challenge in teaching LLMs about EDA tools and languages is the lack of generally available training data (this has been noted several times before). Public Verilog code is 1% of the size of public Python code. Training models on company-internal data and then sharing the models is not very realistic. Still, the industry needs to find a way to collaborate on the creation of basic models for EDA (such collaboration on non-differentiating infrastructure is common in other industries – think of the AUTOSAR consortium or the Linux kernel).

Companies need ways to use their internal data and code without risking exposure. A “basic” RAG setup is often all you need to make use of your data. Fine-tuned LLMs are much more expensive and often not necessary. Still, fine-tuning models on internal data can be really powerful, and is reported to yield better results than the general models from various tool providers. That said, Nvidia apparently does not apply fine-tuning. It seems the jury is still out on fine-tuning vs RAG and similar document processing.
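
For those who have not seen one, a “basic” RAG setup really is basic: retrieve the most relevant internal documents for a query and paste them into the prompt. Here is a toy sketch (my own – the bag-of-words scoring stands in for a real embedding model, and run_llm() is a hypothetical stand-in for a model API):

```python
# Toy sketch of a "basic" RAG setup over internal documents.

def score(query: str, doc: str) -> int:
    """Toy relevance: shared words (a real setup embeds and does ANN search)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def run_llm(prompt: str) -> str:
    return prompt  # stand-in: a real call would go to a (private) model

def answer(query: str, docs: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, docs))
    # The internal data only lives in the prompt; the base model is
    # untouched, which is what makes RAG so much cheaper than fine-tuning.
    return run_llm(f"Context:\n{context}\n\nQuestion: {query}")

internal_docs = ["Our UVM testbenches use a common register model.",
                 "Coding rule: all FIFOs must assert on overflow."]
print(answer("What is the FIFO coding rule?", internal_docs))
```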

Productivity does not necessarily mean shorter time to market. Instead, teams might use the time that gets freed up to spend more effort on optimization, especially for PPA (power, performance, area).

One of the best observations of the day: AI is good at generation, but that is not in itself very useful. The key is to validate and verify the correctness of the LLM output. If the output has to be validated by a human, there is not much gain – the cost just moves to a different location (either a human writes correct code or text from the start, or a human checks the code or text from the LLM, likely about the same effort). Thus, for AI to provide real gains in productivity, you need automatic validation.

Analyst Talk: Steve Greenfield, AutomotiveVentures

The DAC has a tradition of analyst talks on stage in the exhibition hall. This one was about automotive, by Steve Greenfield from AutomotiveVentures, an investment company looking to “invest in mobility to solve many of humanity’s problems”. Steve covered a lot of interesting points on where automotive and mobility in general are going, and on the state of the technology.

Overall, the big trend is towards fully battery-electric vehicles (BEVs). Right now there are headwinds, especially in the US, but those are considered temporary. The key to success is better battery tech: batteries need to charge faster and hold more charge, in order to make charging a non-issue and as convenient as fueling a current gas car.

China:

  • The BIG problem for western automotive today.
  • China has locked up the battery supply chain for EV.
  • In the past 20 years, China has gone from 0 to 39% of all new cars produced! They are over-producing vehicles compared to their domestic market, and as a result they are trying to export. The competition is fierce, and only two Chinese vendors are profitable! Exported cars make more money than cars sold in the cut-throat local market, even with tariffs.
  • China has no legacy and has managed to build a cheaper and faster way to design and build cars. 
  • They can get new cars from concept to market in 9 months, compared to 4 years for legacy markets. This gives them more learning cycles to refine the value proposition and drive out costs.

Repairs and service. This was really interesting, and somewhat depressing for the longevity of vehicles:

  • After the end of the warranty period, US consumers tend to move to independent repair shops. This will change in the future, as connected vehicles will try to book service with their own dealers.
  • The cost of vehicle repairs has gone up by 50% since the start of Covid, compared to 20% for prices in general.
  • Tesla “megacasting” has a huge impact on repairability. It reduces the cost of manufacturing, but it would seem to make repairs more expensive. Single large integrated components are hard to replace and expensive to buy new. It is not an unalloyed good, if you consider the lifecycle and long-term support of the vehicle.
  • Car insurance costs are going up – since cars are becoming disposable due to increases in repair costs and repair difficulties. This is good neither for consumers nor for the environment.
  • Steve also mentioned that John Deere is making it very hard to repair tractors, and that those moves are not popular.

Autonomous driving:

  • Industrial use is a very promising early application area: things like autonomous mining machines and agricultural tractors, in largely forgiving environments. An autonomous system can work 24/7 and takes out the cost of labor.
  • Morgan Stanley claimed in 2012 that everything would be autonomous by 2022. That did not happen. Goldman Sachs, now in 2025, claims autonomy will cover the majority of cars by 2040, going as high as 80% in China.
  • He believes that autonomous vehicles will result in the elimination of accidents and deaths in US traffic. Count me skeptical. But autonomous vehicles will definitely have societal impact as traffic changes fundamentally.
  • He recommended that the audience try a Waymo to see just how far they have come. Waymo is planning to start testing in places with interesting weather, like Boston. Once again, I am still quite skeptical about how well this will work in real winters, and in particular in Europe, where we build for pedestrians and bikers first and cars second.

Analyst Talk: Dylan Patel, SemiAnalysis

Dylan Patel from SemiAnalysis gave a very high-tempo talk covering all kinds of trends in EDA and chip design, from lithography up to packaging, racks, and data centers.

On lithography, he said that the high cost of high-NA (numerical aperture) EUV lithography is making TSMC and Intel back off and use a solution involving larger masks instead. Overall, the economics of EUV are challenging.

Leading-edge fabbing is now driven by HPC and AI, no longer by mobile chips.

TSMC is doing a lot in packaging.

  • System-on-Wafer technology, as used by Cerebras and Tesla, is being extended to allow stacking other dies on top of the wafer (if I got it right).
  • They are offering bigger base dies for packages, moving to 100x100mm or bigger (which is absolutely huge, and an interesting manufacturing challenge in itself).
  • Stacking and 3D is marching on.
  • COUPE is a technology to stack optical components on top of regular chips, allowing for shorter connections from compute dies to optical transceivers.

Interconnects matter! The compute needed for AI has put new emphasis on the importance of chip-to-chip interconnects.

  • There is a real limit in the “beachfront” property on a chiplet for chiplet-to-chiplet interconnects.
  • “Pins” are limiting – should they be spent on memory interfaces or other IO? It might make sense to put memory interfaces on chiplets that “stick out” from the main die to get more edges to put critical pins on.
  • Memory connections are starting to drift away from standardization – each major chip design might end up with a unique way to attach DRAM and HBM memory. Memory chip producers will have to make several times more variants.

Optical interconnects:

  • Offer huge bandwidth over quite long distances, but are far less reliable than copper connections and also age over time.
  • Integrated optical transceivers might make chips age out of usage at a much higher rate, as optics age (unlike copper).
  • The lower reliability of the components introduces significant system reliability challenges: “MTTF of 5 years on a single transceiver, on a 500k unit cluster, means a failure every 5 minutes” (the arithmetic checks out – see the sanity check after this list). There will be even more transceivers in future AI clusters. Broken links mean a broken computation, so we need fault-tolerance in the clusters. And we need optical transceivers with much better reliability.
  • Current optical transceivers can use up to 30W each and are getting liquid-cooled to maintain stability (the power consumption comes from their integrated DSPs).
  • If you can use copper, use copper! Nvidia has a 5000-cable copper backend in their latest racks. The NVLink KYBER backplane features 42 layers of PCB, far beyond what anyone else is doing.
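
The transceiver failure arithmetic is easy to verify, by the way:

```python
# Sanity check of the quoted numbers: a 5-year MTTF per transceiver
# across 500k transceivers, assuming independent random failures.
mttf_minutes = 5 * 365 * 24 * 60   # about 2.63 million minutes per unit
units = 500_000
print(f"cluster-wide: one failure every {mttf_minutes / units:.1f} minutes")
# -> one failure every ~5.3 minutes, matching the quote
```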

Power distribution in data centers is changing.

  • From 12V to 48V feeds to boards.
  • Nvidia is pushing for 800V DC feeds, along with other industry players. This makes for some interesting workplace safety challenges.
  • AI workloads are a real challenge for power distribution, as they fluctuate quickly. A compute-intense phase draws 10x more power than a weights-exchange phase, and that taxes power supplies – it has even been seen to cause actual transformers to explode. Solutions involve doing dummy calculations to keep the power flowing (Meta is using this technique) or using batteries as buffers.

The EDA talent pool has remained essentially flat for the past decade, at least in the West. As a result, we need more productivity, and AI is the obvious solution being pushed right now. Classic EDA vendors have had AI in their tools for years, and many new startups are doing AI for chip design (as already noted). He sees AI as a tool to better explore very large design spaces. I am not sure exactly how that is supposed to work.

Engineering Track

While I did not qualify for a presentation in the engineering track, I did manage to get a poster accepted, and I was selected for a “Poster Gladiator” session. The Poster Gladiator is a fun format where you present your poster contents in five minutes on a stage out in the exhibition, followed by two minutes of questions from a panel of senior judges (previous DAC general chairs).

Doing a five-minute presentation is a fun challenge, and I carefully adjusted the timing as I went along so that I ended the very second the clock ticked to zero. Mic drop. The chair of the session said “that was astonishing”. It was not enough to win the Poster Gladiator contest, but I still see it as a fun exercise in presentation technique.

Oh, and the topic of the poster was our new VLAB 3 fast virtual platform kernel – written in C++ for maximum speed and minimal overhead, and with parallelism built in from the ground up. We have measured it running several times faster than our old VLAB 2 simulator, and the parallel scaling is good (assuming workloads that have parallelism in them – Amdahl’s law applies just as much to virtualized workloads as to real workloads).

Finally, Sweets!

The booths at the DAC exhibition are full of giveaways, including quite a few variants of sweets. Here are some of the wittiest and best from this year:

Bronco AI is doing agentic AI for EDA and had some exquisitely funny wafers.

Chip 1 ordered some custom gummy bears from Haribo!

Breker had home-made caramels in their booth, made by the wife of one of the sales people attending!

Badges

The DAC this year coincided with the announcement that we were acquired by Cadence. This had the funny effect that I ended up with two registrations: an engineering track conference attendee badge under “ASTC”, and a “Cadence” registration for the exhibition. I had hoped to be able to run around with two badges for laughs, but that was not allowed. Instead, the registration desk managed to merge my two registrations into a single Cadence-branded registration. So, the DAC was my first conference under the Cadence brand.

San Francisco

Downtown San Francisco is still suffering from the post-Covid downturn. There were more closed shops than I remember from 2022, but the streets felt cleaner and better kept. The city is obviously trying to get back on its feet. There are still tourists around, but I guess the big problem is that fewer people work in offices downtown, reducing the spending power in the area.

It was notable that the Ukraine war was nowhere to be seen in the cityscape. That is very unlike Europe, where you find Ukrainian flags flying on official buildings in almost all cities. There was an abundance of rainbow flags, however, which does fit the image of San Francisco.
