Intel, DOE Announce First-Ever Exascale Supercomputer ‘Aurora’

Intel and the Department of Energy have announced plans to deploy the first supercomputer with a sustained performance of one exaflop by 2021. That’s a slip compared with earlier roadmaps, which had targeted exascale by 2020; in fact, the 2021 delivery date means Horst Simon should win the bet he made in 2013 that supercomputers wouldn’t hit exascale performance until after 2020.

“Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO. “The convergence of AI and high-performance computing is an enormous opportunity to address some of the world’s biggest challenges and an important catalyst for economic opportunity.”

Aurora will deploy a future Intel Xeon CPU, Intel’s Optane DC Persistent Memory (we’ve covered how DC PM could change the high-performance computing market before), Intel’s Xe compute architecture, and Intel’s oneAPI software. In short, this is an Intel win from top to bottom. That’s actually fairly surprising: in the last few years, we’ve seen far more companies opting for hybrid Intel-Nvidia deployments rather than going all-in with Intel. Betting on Intel Xe before consumer or HPC hardware is even on the market implies Intel showed off some impressive expected performance figures.
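
The oneAPI piece is worth a concrete illustration. Its core language, Data Parallel C++, builds on the SYCL standard, in which a single C++ source expresses kernels the runtime can dispatch to a CPU or GPU. Here’s a minimal sketch of that programming model (generic SYCL 2020 code, nothing Aurora-specific), doing a simple vector add:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q; // selects a default device (CPU or GPU) at runtime
    {
        // Buffers manage host<->device data movement automatically.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // One work-item per element; the runtime maps this onto the device.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffer destructors wait for the kernel and copy results back to c

    std::cout << "c[0] = " << c[0] << '\n'; // prints 3
    return 0;
}
```

Compiled with a SYCL-aware compiler (for example, Intel’s icpx -fsycl), the same source runs on whichever device the queue selects, which is the portability pitch behind shipping a single software stack across Xeon CPUs and Xe GPUs.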

Aurora is intended to push the envelope in a number of areas, including simulation-based computational science, machine learning, cosmological simulation, and other emerging fields. The system will be built in partnership with Cray, using that company’s next-generation supercomputing hardware platform, codenamed Shasta, and its high-performance scalable interconnect, codenamed Slingshot.

Exascale has more than symbolic importance. The compute capability of the human brain at the neural level has been estimated to be in the ballpark of exascale, though I can’t say strongly enough that such estimates are a drastic simplification of the differences between how the human brain performs computations and how computers do. And simply hitting exascale (that’s 10¹⁸ floating-point operations per second) doesn’t actually let us use those transistors to build a working model of a human brain. Having the theoretical computational capability of a brain doesn’t equal a working whole-brain computer model, any more than having a huge heap of concrete, steel, and enriched uranium is equivalent to a functional nuclear reactor. It’s how you put the thing together that dictates its function, and we’re a long way from that.
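
For a sense of what 10¹⁸ operations per second means, here’s a back-of-the-envelope comparison (my numbers, purely illustrative, not benchmarks of any real system):

```cpp
#include <cstdio>

int main() {
    // Illustrative scale comparison only, not a benchmark.
    const double exa_flops  = 1e18; // exascale: 10^18 floating-point ops/sec
    const double giga_flops = 1e9;  // a ~1 GFLOPS machine, for comparison

    double ratio = exa_flops / giga_flops;           // 1e9: seconds of 1-GFLOPS
                                                     // work per exascale second
    double years = ratio / (365.25 * 24.0 * 3600.0); // seconds -> years

    std::printf("1 second at exascale = %.0e seconds at 1 GFLOPS (~%.1f years)\n",
                ratio, years);
    return 0;
}
```

In other words, one second of exascale compute is about a billion seconds, or roughly 31.7 years, of work for a machine sustaining one gigaflop.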

But hitting exascale computing levels is one critical component of getting to the point where we can run those simulations, and run them at scale. This is not to downplay the difficulty of simulating a human brain: open-source projects like OpenWorm are still working on a worm, C. elegans, with only about a thousand cells in its entire body. Booting up a digital human consciousness is quite some ways away. But with exascale computers, we’re moving into new frontiers of complexity, and new discoveries undoubtedly await.

Continue reading

AMD-Powered Supercomputer is The First to Break The Exascale Barrier

Insert obligatory "Can it run Crysis?" joke here.

World’s Largest ARM-Based Supercomputer Launched, as Exascale Heats Up

HPE is launching the largest supercomputer ever built using ARM CPUs as a testbed for nuclear physics processing at Sandia National Laboratories.

Japan Tests Silicon for Exascale Computing in 2021

Fujitsu and Riken are prepping a new leap ahead for exascale computing in Japan — and the system should be ready by 2021.