IBM Unveils 19TB SSD With Everspin MRAM Data Cache

Niche technologies like MRAM (magnetoresistive random access memory) have lived on the fringes of the mainstream memory business for decades. They offer benefits that make them attractive for specific applications, but they typically carry enough baggage to keep them out of the broader market. A new announcement from IBM suggests MRAM could be moving into larger markets with more customers, courtesy of a new agreement with Everspin.
Everspin is the major (though not the only) manufacturer of MRAM; its 256Mb chips, built on a 40nm process node, are already available, and 1Gb chips are expected to sample by the end of the year. That's a fraction of the capacity NAND flash offers, but it's a huge increase for the nascent memory technology. According to AnandTech, a shift to GlobalFoundries and that firm's 22FDX (FD-SOI) process node is responsible for the 4x capacity improvement from 256Mb to 1Gb.
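For reference, the 4x figure is just the ratio of the two quoted densities, with 1Gb taken as 1,024Mb. A trivial sketch of that arithmetic (values from the paragraph above, variable names are my own):

```python
# Density arithmetic behind the "4x" figure (1Gb = 1,024Mb).
current_die_mb = 256     # Everspin's shipping 40nm part
upcoming_die_mb = 1024   # 1Gb part built on GlobalFoundries' 22FDX
print(upcoming_die_mb / current_die_mb)  # -> 4.0
```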
IBM’s FlashSystem has previously used custom drive enclosures and supercapacitors to back up a DRAM write cache. In the event of a power outage, the supercapacitors keep the system powered long enough to flush the data held in DRAM to the NAND flash. IBM’s new system moves to 2.5-inch U.2 drives, but that meant finding a new way to provide system-level power protection. That’s where MRAM comes in: because MRAM is non-volatile, it won’t lose data when power is cut, which allowed IBM to drop the supercapacitors and simplify its design.
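To make the difference concrete, here is a minimal sketch of the two power-loss-protection schemes described above. It is an illustration of the general idea, not IBM's actual firmware, and all class and method names are hypothetical:

```python
class Nand:
    """Stand-in for persistent NAND flash storage."""
    def __init__(self):
        self.blocks = []

    def write(self, block):
        self.blocks.append(block)


class DramWriteCache:
    """Volatile write cache: contents vanish on power loss, so supercapacitors
    must keep the drive alive long enough to flush everything to NAND."""
    def __init__(self):
        self.pending = []  # writes acknowledged to the host but not yet on NAND

    def on_power_loss(self, nand: Nand):
        # Supercapacitor hold-up window: flush now or lose the data.
        for block in self.pending:
            nand.write(block)
        self.pending.clear()


class MramWriteCache:
    """Non-volatile write cache: pending writes survive a power loss, so no
    emergency flush (and no supercapacitor bank) is needed."""
    def __init__(self):
        self.pending = []

    def on_power_loss(self, nand: Nand):
        # Nothing to do; pending writes are still in MRAM and can be
        # committed to NAND on the next power-up.
        pass


dram = DramWriteCache()
dram.pending.append(b"dirty block")
dram.on_power_loss(Nand())   # must complete within the supercapacitor window

mram = MramWriteCache()
mram.pending.append(b"dirty block")
mram.on_power_loss(Nand())   # no-op: the data is already persistent
```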
This doesn’t represent a sea change in MRAM adoption (IBM’s FlashSystem is still an exceedingly high-end play), but it’s a meaningful achievement for MRAM nonetheless. After years of playing on the margins, winning space in a major storage vendor’s hardware is an accomplishment. Other developments, like Spin Transfer Technologies’ claims about its own technological breakthrough earlier this year, suggest that MRAM might start to compete more effectively with solutions like Optane or NAND. In STT’s case, however, the focus has been on competing against SRAM rather than DRAM, while IBM’s decision to use MRAM for write caching as a way to avoid supercapacitors suggests that it’s still fairly difficult for MRAM to compete with DRAM in more mainstream consumer use cases.