Several years ago, Intel unveiled Loihi, its first public neuromorphic research processor. The term “neuromorphic” is essentially a catch-all for any type of processor that attempts to mimic the function of the brain. Because the brain is a complex organ, different chips can be neuromorphic in different ways, depending on which aspect of the brain’s design they attempt to copy.
Now, Intel has announced Loihi 2, built on the upcoming Intel 4 process node. Intel 4 is still in pre-production and Intel isn’t announcing any kind of surprise launch cadence. Presumably, Loihi 2 is a very early pipe cleaner for the node.
The original Loihi is a 14nm many-core chip with an asynchronous spiking neural network. Loihi 2: The Spikening follows that model — its cores communicate with each other through irregular bursts of activity. There is no off-die RAM or cache in the traditional sense; instead, each neuron has its own pool of memory. As with a living neuron, signals must exceed a certain threshold of relevance before a core will pass them on. There’s a series of x86 CPU cores riding herd over the entire affair, and they periodically force the neurons to synchronize, or recalculate the strength of their connections to other neurons.
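The threshold-gated behavior described above is the core idea behind spiking neuron models like leaky integrate-and-fire. As a rough illustration (not Loihi's actual implementation — the class, constants, and leak model below are purely illustrative), a neuron accumulates input in its own private state and only passes a signal on once that state crosses a threshold:

```python
# Illustrative sketch of threshold-gated spiking: a leaky
# integrate-and-fire neuron only emits a spike once its accumulated
# input crosses a threshold. Names and constants are hypothetical,
# not Loihi's real parameters.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # the neuron's private state ("its own pool of memory")
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each timestep

    def step(self, input_current):
        """Integrate input; fire (return True) only if the threshold is crossed."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

neuron = LIFNeuron()
# Sub-threshold inputs are silently absorbed; only the cumulative
# build-up eventually produces a spike.
spikes = [neuron.step(x) for x in [0.3, 0.3, 0.3, 0.3, 0.0]]
# spikes -> [False, False, False, True, False]
```

The point of the sketch: most inputs produce no output at all, which is where the architecture's power savings come from — cores that have nothing relevant to say stay quiet.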
Much of Loihi 2’s design is based directly on the OG version, but there are some upgrades. Moving from 14nm to Intel 4 allowed Intel to dramatically increase the maximum number of neurons per chip, from 130K to 1 million. There are six synchronization cores, up from three, and Loihi 2 uses a three-dimensional mesh network instead of a 2D mesh for communication within the chip.
Loihi 2 also changed its approach to “firing.” Originally, it encoded a single bit of data when it fired: just a one or a zero. Now, spikes are integers, so they can carry more data — not unlike biological neurons, which are state-aware. What’s more, that increase in information bandwidth means that a spike can actually exert some influence on the recipient neuron.
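The difference between the two firing schemes can be sketched in a few lines. This is a hedged illustration, not Loihi's actual message format — the function names and weights below are hypothetical:

```python
# Binary vs. graded spikes, illustrated. In the original Loihi, a spike
# is a bare event: the receiving neuron only knows that it happened.
# In Loihi 2, a spike carries an integer magnitude, so the same event
# can exert more or less influence on the recipient.

def receive_binary(potential, weight, spiked):
    # Loihi 1 style: the event itself is the only information carried.
    return potential + (weight if spiked else 0.0)

def receive_graded(potential, weight, spike_value):
    # Loihi 2 style: the integer payload scales the spike's effect.
    return potential + weight * spike_value

p1 = receive_binary(0.0, 0.5, True)   # -> 0.5, no matter how "strong" the source spike was
p2 = receive_graded(0.0, 0.5, 3)      # -> 1.5, a stronger spike has proportionally more influence
```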
The big goal of Loihi 2 is to capitalize on Loihi’s phenomenal energy efficiency. Typically, we’ve handled unwieldy problems in computing by throwing more muscle at them. Neural networks, in particular, are profoundly redundant, which makes them profoundly power-hungry. But there’s a logistical ceiling on how much data you can crunch for how much energy invested, and it’s closer than we’d like. Intel claims that in some cases, it’s able to reduce power consumption by a hundred-fold or more compared to conventional computing. At the same time, Loihi 2 is said to be twice as fast as the original Loihi when updating a neuron’s state. Overall performance is said to be up to ten times faster, with a substantial increase in programming flexibility throughout the chip.
So, what can you actually get done with a chip like this? The short answer is that researchers can wrangle big datasets, looking for trends that only show up from high overhead or under very close examination. The brain excels at pattern recognition, and it’s a good bet that Loihi will similarly be used in the analysis and control of decentralized/edge computing and monitoring systems. It excels at finding optimal solutions within multiple constraints. Gradient descent is not the all-singing, all-dancing winner of artificial intelligence, but it is a weapon Loihi can wield to great effect.
Research efforts like Loihi aren’t going to show up in personal PCs any time soon. They may offer phenomenal power efficiency, but they don’t currently offer it in a lot of workloads that are very practical or relevant to modern computing. That’s alright. In keeping with the Hawaiian theme, Intel also announced Lava, an open source framework that devs and researchers can use to build “neuro-inspired” applications. They’re also offering two Loihi 2-based neuromorphic systems, which they’re calling Kapoho Point and Oheo Gulch, to members of the Intel Neuromorphic Research Community (INRC) — and they’ve made Lava available through GitHub, gratis.
Research projects like Loihi are aimed at finding new methods of computing as transformational relative to the present day as the invention of the transistor was to the vacuum tube. It shares this goal with efforts like Meso, or quantum computing. Projects like these may never come to desktops or laptops, but they represent Intel’s hunt for new frontiers in computing with underlying properties we can optimize that aren’t quite so played out as Moore’s law.