Epyc Achievement: AMD Now Available for Oracle Cloud Compute Instances
AMD and Oracle jointly announced today that AMD’s Epyc processors are now available in Oracle Cloud compute instances. It’s a significant win for AMD, which has been slowly growing its server market share since it launched its Zen-based Epyc CPU family back in 2017. Oracle is now the third major cloud provider working with the company, alongside Microsoft and Baidu.
Oracle is announcing a number of instance options, with both bare metal and 1-, 2-, 4-, and 8-core VM shapes. The price is $0.03 per core hour, a rate Oracle claims is 66 percent lower than general-purpose instances offered by “the competition,” 53 percent lower than Oracle’s other compute services, and the lowest price available from “any non-burstable compute instance in the public cloud.”
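For anyone who wants to sanity-check those percentages, here is a minimal back-of-the-envelope sketch in Python. It assumes “X percent lower” means the Epyc rate is (1 − X/100) times the reference rate; the resulting reference prices are implied values derived from Oracle’s claims, not published figures.

```python
# Back-of-the-envelope check of Oracle's pricing claims (a sketch, not an
# official price calculator). Assumes "X percent lower" means the Epyc rate
# is (1 - X/100) times the reference rate; the reference prices computed
# below are therefore implied values, not published numbers.

EPYC_RATE = 0.03  # USD per core hour, as stated by Oracle

def implied_reference_rate(epyc_rate: float, percent_lower: float) -> float:
    """Solve reference * (1 - percent_lower / 100) = epyc_rate for reference."""
    return epyc_rate / (1 - percent_lower / 100)

if __name__ == "__main__":
    competitor_gp = implied_reference_rate(EPYC_RATE, 66)  # "the competition"
    oracle_other = implied_reference_rate(EPYC_RATE, 53)   # Oracle's other compute
    print(f"Implied competitor general-purpose rate: ${competitor_gp:.3f}/core-hour")
    print(f"Implied Oracle non-Epyc compute rate:    ${oracle_other:.3f}/core-hour")
```

Run as written, this works out to roughly $0.088 and $0.064 per core hour, respectively, under that reading of the percentages.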
Oracle then spends a fair bit of time breaking down where AMD’s Epyc shines and where it falls behind. It matches Intel in low-level tests like SPECint and SPECfp, but at a lower cost. Its instances are said to be ideal for “Big Data analytics workloads that rely on higher core counts and are hungry for memory bandwidth. AMD has a partnership with, and is certified to run software from, leading ISVs who are a part of the Hadoop ecosystem, including Cloudera, Hortonworks, MapR, and Transwarp. On a 10-TB full TeraSort benchmark, including TeraGen, TeraSort and TeraValidate, the AMD system demonstrated a 40 percent reduction in cost per OCPU compared to the other x86 alternatives with only a very slight increase in run times.” AMD CPUs are also well suited to HPC workloads that require high memory bandwidth.
Oracle compared a dual-socket AMD Epyc 7551 with 32 cores per socket and a 2GHz clock against the “competitor” CPU with 26 cores per socket and a 2GHz clock. The AMD system had 512GB of RAM; the competitor system had 768GB. It’s not clear why the two systems weren’t matched in total RAM capacity. That could reflect a minimum-spec configuration Oracle offers for each system, or it could be an attempt to improve the AMD system’s performance-per-dollar ratio. I don’t know which it is, but I wanted to note it.
The argument being made here is that if you’re doing something besides running SPECjbb, AMD has a strong case to make for its own parts. There’s a third graph showing normalized performance per dollar per core, but since we don’t have an explanation for the difference in RAM loadouts, I’m not going to cite it. Manufacturer benchmarks should always be treated cautiously in any case.
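For clarity on what that third graph is measuring, here is one plausible reading of a “normalized performance per dollar per core” figure, sketched in Python. Every number below is a made-up placeholder, not Oracle’s data, and Oracle’s exact methodology isn’t disclosed.

```python
# Hypothetical illustration of a performance-per-dollar-per-core metric,
# normalized against a reference system. All scores and prices here are
# placeholders for illustration only; they are not Oracle's benchmark data.

def perf_per_dollar_per_core(score: float, price_per_core_hour: float) -> float:
    """Benchmark score divided by the per-core-hour price."""
    return score / price_per_core_hour

# Placeholder inputs (hypothetical scores; prices echo the earlier sketch)
epyc = perf_per_dollar_per_core(score=95.0, price_per_core_hour=0.03)
competitor = perf_per_dollar_per_core(score=100.0, price_per_core_hour=0.088)

# Normalize so the competitor system equals 1.0
print(f"Epyc, normalized to competitor: {epyc / competitor:.2f}x")
```

The point of the sketch is the shape of the calculation, not the output: with a cheaper per-core-hour rate, even a slightly lower raw score can yield a higher normalized value.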
But one thing we can say: AMD is executing the roadmap it said it would execute with Zen. Lisa Su promised a slow product ramp and new partners coming on board, and that’s happening. She’s made modest forecasts targeting mid-single-digit server market share this year, and as far as we know, she’s meeting them. Little by little, the pieces are falling into place.