Intel, DOE Announce First-Ever Exascale Supercomputer ‘Aurora’
Intel and the Department of Energy have announced plans to deploy the first supercomputer with a sustained performance of one exaflop by 2021. That timeline is a slip compared with earlier exascale projections; in fact, a 2021 delivery date means Horst Simon should win the bet he made in 2013 that supercomputers wouldn't hit exascale performance until after 2020.
“Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO. “The convergence of AI and high-performance computing is an enormous opportunity to address some of the world’s biggest challenges and an important catalyst for economic opportunity.”
Aurora will deploy a future Intel Xeon CPU, Intel's Optane DC Persistent Memory (we've covered how DC PM could change the high-performance computing market before), Intel's Xe compute architecture, and Intel's One API software. In short, this is an Intel win from top to bottom. That's actually fairly surprising; in the last few years, we've seen a lot more companies opting for hybrid Intel-Nvidia deployments rather than going all-in with Intel alone. Taking a bet on Intel Xe before consumer or HPC hardware is even in-market implies Intel showed off some impressive expected performance figures.
Aurora is intended to push the envelope in a number of fields, including simulation-based computational science, machine learning, cosmological simulation, and other emerging fields. The system will be built in partnership with Cray, using that company’s next-generation supercomputing hardware platform, codenamed Shasta, and its high-performance scalable interconnect, codenamed Slingshot.
Exascale has more than symbolic importance. The compute capability of the human brain at the neural level has been estimated to be in the ballpark of exascale computing, though I can't stress strongly enough that such estimates drastically oversimplify the differences between how the human brain performs computations and how computers do. And simply hitting exascale (that's 10^18 FLOPS) doesn't automatically let us use all that silicon to build a working model of a human brain. Having the theoretical computational capability of a brain doesn't equal a working whole-brain computer model, any more than having a huge heap of concrete, steel, and enriched uranium is equivalent to a functional nuclear reactor. It's how you put the thing together that dictates its function, and we're a long way from that.
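To put that figure in perspective, here's a quick back-of-envelope sketch in Python (the 10^21-operation workload is an arbitrary illustration, not an Aurora benchmark) comparing how long a given job would take on an exascale machine versus today's petascale systems:

# Back-of-envelope comparison: exascale vs. petascale throughput.
# The 1e21-operation workload below is an arbitrary illustration, not a benchmark.
EXAFLOPS = 1e18   # floating-point operations per second at exascale
PETAFLOPS = 1e15  # floating-point operations per second at petascale

workload_ops = 1e21  # hypothetical job size: 10^21 floating-point operations

exa_seconds = workload_ops / EXAFLOPS    # about 1,000 seconds (~17 minutes)
peta_seconds = workload_ops / PETAFLOPS  # about 1,000,000 seconds (~11.6 days)

print(f"Exascale machine:  {exa_seconds / 60:.0f} minutes")
print(f"Petascale machine: {peta_seconds / 86400:.1f} days")

A thousand-fold jump in throughput turns a multi-day petascale run into a coffee-break job, which is the practical point of chasing the exaflop milestone in the first place.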
But hitting exascale computing levels is one critical component of how we get to the point of running those simulations, and running them at scale. This is not to downplay the difficulty of simulating a human brain; open source projects like OpenWorm are still working on a worm, C. elegans, with roughly a thousand cells in its entire body. Booting up a digital human consciousness is still a long way off. But with exascale computers, we're moving into new frontiers of complexity, and new discoveries undoubtedly await.