Intel’s Neuromorphic Loihi Processor Scales to 8M Neurons, 64 Cores

Neuromorphic computing is a subset of computing that attempts to mimic the brain’s architecture using modern technological analogues. Instead of implementing a typical clocked CPU pipeline, for example, Loihi is built around a spiking neural network architecture. The basic Loihi processor contains 128 neuromorphic cores, three Lakemont (Intel Quark) CPU cores, and an off-chip communication network. In theory, Loihi can scale all the way up to 4,096 on-chip cores and 16,384 chips, though Intel has said it has no plans to commercialize a design this large.

A close-up shot of an Intel Nahuku board, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware,” said Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo. “Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.”

One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips, shown here interfaced to an Intel Arria 10 FPGA development kit. Intel’s latest neuromorphic system, Pohoiki Beach, announced in July 2019, is made up of multiple Nahuku boards and contains 64 Loihi chips. (Credit: Tim Herman/Intel Corporation)

The Pohoiki Beach implementation is not the largest planned deployment for the neuromorphic chip. Intel states that it intends to roll out an even larger design, codenamed Pohoiki Springs, which will deliver “an unprecedented level of performance and efficiency for scaled-up neuromorphic workloads.”

We’ve covered the advances and research in neuromorphic computing for several years at ET. The work being done on these CPUs is closely related to the work that’s being conducted in AI and machine intelligence overall, but neuromorphic computing isn’t just concerned with how to run AI / ML workloads efficiently on existing chips. The ultimate goal is to build processors that more closely resemble the human brain.

One of the oddities of computing is how prevalent analogies between human brain function and computer operation are, given that human brains and classic computers have very little overlap in how they actually function. Transistors are not equivalent to neurons, and the spiking neural network model Loihi uses for transmitting information across its processor cores is intended to be closer to the biological processes humans and other animals rely on than traditional silicon is.
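The core idea behind the spiking model can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, a common abstraction in spiking neural network research. This is an illustrative sketch only; the parameter values and function name here are hypothetical and are not Loihi's actual neuron model.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking neuron.
# Unlike a clocked logic gate, the neuron integrates input over time
# and only emits a discrete spike when a threshold is crossed.
# All parameters are illustrative, not taken from Loihi hardware.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input current each timestep; emit a spike (1) and
    reset the membrane potential when the threshold is reached."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires periodically.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because information is carried in the timing of sparse spikes rather than in continuously clocked activity, cores can sit idle between events, which is where much of the claimed energy efficiency comes from.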

Projects like this have a number of long-term research goals, of course, but one of the most fundamental is to better understand how brains work in order to copy some of their energy efficiency. The human brain runs on roughly 20W. Exascale supercomputing, which is considered the minimum for advanced neural simulation of anything more complex than an earthworm, is expected to consume megawatts of power per machine. The gap between those figures explains why we’re so interested in the long-term energy efficiency and computation potential of the brain in the first place. Architectures like Loihi aren’t just an effort to write programs that mimic what humans can do; the goal is to copy aspects of our neurology as well. That makes their progress all the more interesting.

Feature Image Credit: Tim Herman/Intel Corporation
