Samsung is the latest company to toss its hat into the growing arena of AI-assisted chip design. The company has reportedly harnessed new AI processing tools from Electronic Design Automation (EDA) vendor Synopsys to build an upcoming Exynos mobile SoC. Synopsys is a credible partner for this kind of work; decades of experience building chip design tools have undoubtedly given the company a rich data set for training models.
Artificial intelligence has been over-hyped in many areas relative to what AI and ML networks have actually accomplished to date. In some areas, like self-driving cars and medicine, progress has been slower than anticipated. There’s reason to be optimistic about the long-term potential for AI to improve chip designs, but little information on when real benefits will materialize.
One difference between applying AI/ML techniques to semiconductor research versus, say, self-driving cars, is that silicon design firms like Intel, AMD, and Nvidia have been working towards a version of this goal to one degree or another for quite some time. In the early days of the semiconductor revolution, chip designs were laid out entirely by hand by teams of engineers.
This chart of semiconductor densities through 2020 illustrates why that approach had to change. With great density comes great responsibility: an increased reliance on automated tools. If it takes — and I’m purely spitballing here — a piece of paper 5 feet on a side to show the dimensions of a 10,000-transistor design at a scale useful to humans, imagine trying to scale up to a chip design with 50 million transistors. Intel would need to lay the chip out on microfiche in order to fit the CPU floorplan into a warehouse.
The field of Electronic Design Automation began in the early 1980s as an effort to simplify circuit design. Today, the field has a great many subdisciplines, including layouts, logic synthesis, and high-level synthesis. From simulating transistor behavior to logic to various types of analysis and error checking, EDA tools are woven into every aspect of modern chip design.
This does not mean Intel and AMD have stopped hand-tuning critical paths in their microprocessors. This thread by Kursad Albayraktaroglu, one of AMD’s former microprocessor design engineers, speaks to the balance between the usefulness and limitations of modern EDA tools:
In almost any large SoC design, there are portions that are traditionally hand-drawn for optimum performance, or in some cases merely to accommodate the quirks of the manufacturing process. The reason is not that the synthesis tools are not good enough – they certainly can do a decent job, but the design teams would like to squeeze every possible ounce of performance from the architecture by designing these paths manually.
Companies looking to build yearly SoC refreshes may make more extensive use of automated layouts than companies working on architectures they expect to be in-market for several years. Kursad also notes that chips like Bobcat and Jaguar made heavier use of automated tools than CPUs like Bulldozer did.
AI-Infused EDA Could Unlock New Methods of Improving Microprocessor Performance
We’ve seen some hints that AI tools can boost performance compared with human silicon designs. Earlier this year, Google released a paper detailing how AI was used to improve the physical layout of an Ariane RISC-V CPU. According to Google’s work, it took an AI just six hours to create a floorplan that was comparable or superior to human-built floorplans and significantly different from a typical human design.
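Floorplanning work of this kind is typically scored with proxy metrics an optimizer can evaluate quickly, and half-perimeter wirelength (HPWL) is a standard one: for each net, take the bounding box of the connected blocks and sum its width and height. Here is a minimal sketch of that metric — the block names, coordinates, and netlist are invented for illustration, not taken from Google’s paper:

```python
# Half-perimeter wirelength (HPWL): a standard proxy for floorplan
# quality. Real evaluators also model congestion, timing, and power;
# this sketch shows only the basic metric.

def hpwl(net, placement):
    """Bounding-box half-perimeter for one net.

    net: iterable of block names connected by this net
    placement: dict mapping block name -> (x, y) position
    """
    xs = [placement[b][0] for b in net]
    ys = [placement[b][1] for b in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_hpwl(nets, placement):
    """Sum HPWL over every net in the design."""
    return sum(hpwl(net, placement) for net in nets)

# Hypothetical four-block floorplan on an arbitrary coordinate grid.
placement = {"cpu": (0, 0), "l2": (4, 0), "ddr": (4, 3), "io": (0, 3)}
nets = [("cpu", "l2"), ("l2", "ddr"), ("cpu", "io")]
print(total_hpwl(nets, placement))  # 4 + 3 + 3 = 10
```

A floorplanner — human, heuristic, or learned — is effectively searching for placements that drive a score like this down without violating design rules.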
According to a recent report from Wired, Samsung is the latest company to adopt these techniques. Kursad’s comments illustrate why a company like Samsung might be interested in adopting AI for chip design right now, and why it may be a little while before Intel and AMD announce anything similar.
Companies like Samsung and other mobile SoC vendors are believed to be more reliant on automated layout and placement tools today, which means any improvement in tool performance will translate directly into gains for these parts. Apple is an exception; the Cupertino manufacturer bought itself a CPU design team many years ago when it acquired PA Semi. This is not to imply that Samsung’s Exynos processors are not complex, but Samsung has no plans to compete against x86 CPUs from AMD and Intel the way Apple does.
Companies working on the highest-performing CPUs built today will take their time evaluating the potential for AI tools to improve power consumption, performance, or reliability. It will also take time for researchers to develop better, more efficient models, and for scientists to determine what kind of training sets are most effective. Any company using AI for a chip design will almost certainly wait to announce it until it is absolutely certain the end result will be an improvement over what it has achieved with more traditional tools and workflows.
Before AI tools can build better silicon, researchers and engineers will need to verify that the various machine learning models understand the complexities of the systems they’ll be designing. This is not a trivial undertaking. The more deeply a company wants to involve AI in the design process, the more capable and multifaceted its machine learning networks will need to be.
There are two broad uses for machine learning and AI in this process: It can replace or augment existing heuristics for chip design and make suggestions to optimize the design, or it can be deployed in reinforcement learning scenarios, where the system “learns” how various inputs change a tool’s results, with the long-term goal of automating the process.
“The goal of using ML within an EDA flow is not about having the ability to produce a better result than your most experienced engineering guru with unlimited time,” Dave Pursley, business development director in the Digital & Signoff Group, told SemiEngineering. “Instead, it is to help your engineering team meet and exceed aggressive power, performance and area (PPA) goals under the constraint of an aggressive schedule. The goal is to make engineers more productive by raising the level of abstraction.”
The potential here is real. AI systems have had some genuine successes; an AI discovered a new antibiotic last year by searching a pool of over 100 million molecules. The adoption of AI tools may be one of the methods semiconductor companies use to continue boosting transistor performance over the next decade. As the benefits of new lithography nodes shrink, we’re seeing an array of techniques deployed to fill the void, from larger L3 caches to new packaging methods and now, at least in part, AI. Don’t expect near-term miracles, but don’t be surprised if future improvements in various aspects of CPU and GPU design are credibly attributed to AI, either.