Nvidia Announces New Volta AI Performance Milestones

Nvidia is promoting its own high-end performance in major AI and machine learning benchmarks, as apparently some kind of floodgate has popped open on companies touting performance metrics for their own hardware solutions. (This isn’t literally true, but we’ve been seeing a lot of artificial intelligence, machine learning, and similar data cross our desks of late.) According to Nvidia, it’s hit some major milestones.

Nvidia is also talking up the use of Volta as a potential replacement for ASICs that would otherwise provide superior functionality in a limited set of use-cases or scenarios. It’s not clear — and I genuinely mean that — how such claims should be interpreted. Nvidia notes: “For instance, each Tesla V100 Tensor Core GPU delivers 125 teraflops of performance for deep learning compared to 45 teraflops by a Google TPU chip. Four TPU chips in a ‘Cloud TPU’ deliver 180 teraflops of performance; by comparison, four V100 chips deliver 500 teraflops of performance.” It also refers to a project by fast.ai to optimize image classification on the CIFAR-10 dataset using Volta that turned in best-in-class overall performance, beating all other competitors.

There are problems, however, with relying on FLOPS to measure performance. FLOPS is calculated by a simple mathematical equation:

FLOPS = number of cores × clock speed × floating-point operations per clock cycle

In GPUs, this works out to GPU cores * clock * two operations per clock (one multiply, one accumulate) = X rating in TFLOPS. This intrinsically assumes the GPU is executing a multiply and an accumulate on every GPU core, on every cycle, simultaneously. That assumption lets us generate comparative metrics quickly using a constant formula, but there’s a huge loophole: real workloads never sustain peak utilization. If GPU #1 typically achieves only 50 percent of its theoretical peak FLOPS, it can be outperformed by GPU #2, which may have a much lower theoretical maximum but sustains a larger fraction of it. The same caveat applies to any comparison between two different solutions.
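To make that loophole concrete, here is a minimal Python sketch of the arithmetic; it’s our own illustration, not anything Nvidia or Google publishes. The V100 core count and boost clock are published specs, while the second GPU’s peak rating and both utilization figures are hypothetical.

```python
# Minimal sketch: theoretical peak FLOPS vs. what a chip actually sustains.
# V100 figures (5,120 CUDA cores, ~1.455 GHz boost) are published specs;
# "gpu_2" and the utilization percentages are hypothetical illustrations.

def peak_tflops(cores: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Theoretical peak: cores * clock * 2 (one multiply + one accumulate per cycle)."""
    return cores * clock_ghz * ops_per_clock / 1_000

def effective_tflops(peak: float, utilization: float) -> float:
    """Throughput actually delivered at a given fraction of theoretical peak."""
    return peak * utilization

gpu_1_peak = peak_tflops(cores=5_120, clock_ghz=1.455)  # ~14.9 TFLOPS FP32 (V100-class)
gpu_2_peak = 10.0                                        # hypothetical chip, lower on paper

# If GPU #1 sustains only 50% of peak while GPU #2 sustains 80%,
# the "slower" chip delivers more real throughput.
print(effective_tflops(gpu_1_peak, 0.50))  # ~7.4 TFLOPS delivered
print(effective_tflops(gpu_2_peak, 0.80))  # 8.0 TFLOPS delivered
```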

The recent reports of Google’s Cloud TPU being more efficient than Volta, for example, were derived from ResNet-50 tests. The results Nvidia is referring to use the CIFAR-10 dataset. The DAWNBench team records no results for TPUs in this test, and fast.ai’s blog post on the topic may explain why:

Google’s TPU instances (now in beta) may also be a good approach, as the results of this competition show, but be aware that the only way to use TPUs is if you accept lock-in to all of:

- Google’s hardware (TPU)
- Google’s software (Tensorflow)
- Google’s cloud platform (GCP)

More problematically, there is no ability to code directly for the TPU, which severely limits algorithmic creativity (which as we have seen, is the most important part of performance). Given the limited neural network and algorithm support on TPU (e.g. no support for recurrent neural nets, which are vital for many applications, including Google’s own language translation systems), this limits both what problems you can solve, and how you can solve them.

As hardware and software continue to evolve, we’ll see how these restrictions and capabilities evolve along with them. It’s absolutely clear that Volta is a heavy-hitter in the AI/ML market as a whole, with excellent performance and the flexibility to handle many different kinds of tasks. How this will change as more custom hardware comes online and next-generation solutions debut is still unclear.
