Nvidia Unveils Hopper H100 Data Center GPU

Nvidia has pulled the wraps off its new Hopper GPU architecture at its AI-focused GTC conference. As expected, the chip is a beast, packing 80 billion transistors into a gigantic 814mm² monolithic die. It features PCIe Gen 5 connectivity and uses up to six stacks of High Bandwidth Memory (HBM). The architecture is the official replacement for the Ampere-based GA100, which launched two years ago. Nvidia will offer the H100 in a variety of products designed to accelerate AI-based enterprise workloads.

Hopper is a significant step forward for Nvidia. Despite the die being roughly the same size as the Ampere-based GA100 that preceded it, it packs almost 40 percent more transistors, thanks to the company's transition from TSMC's 7nm node to TSMC's 4nm process. Nvidia has also moved from the GA100's 40GB or 80GB of HBM2 to 80GB of HBM3 memory on a 5,120-bit-wide memory bus.

This allows for up to 3TB/s of memory bandwidth. Nvidia claims 20 H100s linked together “can sustain the equivalent of the entire world’s internet traffic.” It’s an odd comparison for a graphics card, even a data center product.
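As a rough sanity check, the 3TB/s figure falls out of the bus width and the per-pin data rate. Nvidia hasn't stated the exact HBM3 speed the H100 runs at, so the 4.8Gbps per-pin rate below is an assumption chosen to match the quoted bandwidth:

```python
# Back-of-envelope check of the quoted HBM3 bandwidth.
# The per-pin data rate is an assumption; Nvidia has not disclosed
# the exact HBM3 speed on the H100.
bus_width_bits = 5120   # 5,120-bit memory bus, per the announcement
data_rate_gbps = 4.8    # assumed per-pin rate in Gbit/s

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(f"{bandwidth_gb_s / 1000:.2f} TB/s")  # ~3 TB/s
```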

One of the more interesting advancements with Hopper is the inclusion of DPX instructions. Nvidia claims DPX can accelerate the "dynamic programming" technique used by many algorithms across scientific fields. This includes the Floyd-Warshall algorithm, used to find optimal routes for autonomous fleets, and the Smith-Waterman algorithm, used in sequence alignment for DNA and protein classification and folding. The company states Hopper can speed up these workloads by 40X compared to CPUs, and by 7X compared to previous-generation GPUs.
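To make the "dynamic programming" claim concrete, here's a minimal plain-Python sketch of Floyd-Warshall, one of the algorithms Nvidia names. A GPU implementation would parallelize the inner loops; this just shows the repeated min-plus relaxation pattern DPX instructions target:

```python
# Minimal Floyd-Warshall all-pairs shortest paths -- the kind of
# dynamic-programming kernel Nvidia says DPX instructions accelerate.
INF = float("inf")

def floyd_warshall(dist):
    """dist is an n x n matrix; dist[i][j] is the edge weight from i to j
    (INF if no edge, 0 on the diagonal). Returns shortest distances."""
    n = len(dist)
    d = [row[:] for row in dist]  # don't mutate the caller's matrix
    for k in range(n):            # allow node k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Tiny example graph (weights are illustrative):
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))
```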

The arrival of the H100 means the company also has new DGX systems. Each DGX H100 will offer eight H100 GPUs connected by Nvidia's 4th-generation NVLink technology. The new interconnect provides 1.5X the bandwidth of the previous generation, for 900GB/s per GPU. It can scale up via an external NVLink Switch to connect 32 nodes into a DGX SuperPOD supercomputer. A single DGX H100 delivers up to 32 petaflops of FP8 performance.
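The DGX numbers hang together arithmetically. Assuming the previous generation refers to the A100's 600GB/s 3rd-gen NVLink, the 1.5X claim yields exactly 900GB/s, and the system-level FP8 figure implies 4 petaflops per GPU:

```python
# Back-of-envelope check on the DGX H100 figures quoted above.
prev_nvlink_gb_s = 600                    # A100's 3rd-gen NVLink per GPU
new_nvlink_gb_s = prev_nvlink_gb_s * 1.5  # 1.5X claim -> 900 GB/s

gpus_per_dgx = 8
fp8_per_dgx_pflops = 32
fp8_per_gpu_pflops = fp8_per_dgx_pflops / gpus_per_dgx  # 4 PFLOPS per H100

print(new_nvlink_gb_s, fp8_per_gpu_pflops)  # 900.0 4.0
```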

Nvidia’s new architecture is named after Grace Hopper, who was an early pioneer in computer programming. The release of Hopper was anticipated, as it’s been on Nvidia’s roadmaps for quite some time. Also, Nvidia typically releases a data center version of its new technology before the gaming version arrives, so this is par for the course. The replacement for Ampere for gamers is named after another female IT trailblazer named Ada Lovelace, as we’ve previously reported. Nvidia’s H100 GPU will be available in the third quarter of 2022, in both SXM and PCIe form factors. You can watch Nvidia’s GTC keynote address here.
