Ever since Intel announced its 3D XPoint memory (branded as Optane), the company has claimed that the new class of memory would represent a fundamental leap forward for the entire industry. Proof of these claims has been relatively slow to appear — it’s difficult to replace existing memory technologies, and the existing memory stack (spinning hard drives, NAND, and DRAM) covers a wide range of price points, power envelopes, reliability levels, and capacities.
Intel is launching a new class of Optane it calls Optane DC Persistent Memory, with capabilities that it believes will bridge the gap between DRAM and non-volatile storage while expanding the amount of memory available per CPU socket to as much as 3TB.
Optane DC Persistent Memory is pin-compatible with DDR4, according to AnandTech, and will be offered in packages of up to 512GB per stick (six RAM slots = 3TB of addressable RAM per socket). Systems will be capable of fielding large Optane pools alongside smaller DRAM pools; one of Intel’s demos showcased a Cassandra database running on 256GB of DDR4 RAM + 1TB of Optane DC PM, as opposed to 1TB of DRAM alone. Intel’s major focus with Optane DC PM is on storage performance consistency. Using DRAM + NVMe-connected storage can be limiting in certain scenarios, with performance bottlenecked by storage write-backs. An Optane cache avoids this problem.
Intel is also working on a capability it calls Persistent Memory over Fabric (PMoF), a low-latency data replication method with direct load/store access that can sustain 287,000 ops/second, compared with initial performance of 3,164 ops/second for a conventional DRAM + storage system. (Intel notes that the DRAM + storage system’s eventual, steady-state performance is comparable to the Optane rig’s.) But Optane’s low latencies are also useful in other contexts, such as restarting a database: Intel reports a restart time of 2,100 seconds for a conventional configuration compared with 17 seconds for Optane.
Much of the work to accomplish this has been done on the software side. Optimizations and file system abstractions are necessary for putting the “Persistent Memory” in the Optane DC modules that Intel is launching. To help enterprises tune their databases and software for Optane DC PM, Intel has built a new Persistent Memory Development Kit (PMDK) with a collection of libraries, APIs, and other software tools. PM will be supported on both Windows and Linux, and Intel has added support for the capability in its performance analysis software kit, VTune.
Details on the SDK and how it interacts with software are a little scarce at the moment, but the implication is that the heavy lifting to make Persistent Memory work is handled outside the applications themselves. In other words, if you’re running Application X, X doesn’t necessarily have to be optimized by its original developer to take advantage of Persistent Memory; the PMDK handles that. Developer-level optimizations are useful and productive, but they aren’t a literal requirement, and that’s an important feature when talking about a capability with such potential to reshape memory hierarchies.
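The programming model behind all of this is straightforward: an application maps a persistent region directly into its address space, accesses it with ordinary loads and stores, and explicitly flushes to guarantee durability (in PMDK’s C libraries, via calls like pmem_map_file and pmem_persist). As a rough, stdlib-only Python analogy — a memory-mapped file stands in for an Optane DC region, and the path and sizes here are purely illustrative — the idea looks like this:

```python
import mmap
import os
import tempfile

# Hypothetical file standing in for a DAX-mapped persistent-memory region.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
size = 4096

# Create and size the backing "region."
with open(path, "wb") as f:
    f.truncate(size)

# Map it into the address space: reads and writes are plain loads/stores,
# with no block-storage I/O path in between.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), size) as region:
        region[0:5] = b"hello"  # an ordinary store into mapped memory
        region.flush()          # rough analogue of PMDK's pmem_persist()

# After a "restart," the data is still there -- no write-back through a
# storage stack was required to make it durable.
with open(path, "rb") as f:
    print(f.read(5))  # prints b'hello'
```

This is only an analogy: a regular file system still sits underneath Python’s mmap, whereas persistent memory makes the mapped bytes themselves durable, which is exactly the property PMDK’s libraries expose.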
Other tidbits from the Q&A: These new capabilities will be tied to new, upcoming CPUs from Intel, with revenue shipments to select customers in 2018 and broad availability in 2019. Intel also expects to ship QLC NAND drives, like Micron, in the back half of the year and into 2019. Intel isn’t disclosing figures on DIMM Optane endurance, power consumption, or pricing at the moment, but DDR4-standard clock speeds are expected for Optane DIMMs.
Overall, this shift toward Optane, together with the emphasis on fabric performance, is a critical component of Intel’s transformation from a PC-centric CPU company into a firm with stronger ties to data center and cloud processing. Intel clearly sees new storage technology as key to its long-term addressable markets, and it’s targeting its initiatives accordingly, possibly in a bid to tie companies more closely to its CPU products. After all, if Optane provides significant performance advantages and Works Best With Intel (or only with Intel), then Intel has a neat method for keeping itself even more relevant in the data center market.
Then again, this may also reflect the need to optimize other facets of computing beyond pure CPU performance, possibly as part of pursuing exascale computing, where DRAM power consumption is a major limiting factor. With Qualcomm looking to exit the server business, Intel isn’t facing major heat from ARM in data centers, and AMD’s Epyc ramp, while a competitive threat long-term, isn’t expected to shatter Intel’s server dominance (AMD hopes to take 4-6 percent of the server market this year).