Larger, Hybrid Optical CPUs Could Eclipse Silicon Designs

Researchers have discovered a new method of potentially integrating optical interconnects at the chip level. If successful, such an approach could theoretically allow for a significant increase in overall performance, not to mention power savings.

Light-based computing has several intrinsic properties to recommend it. First and foremost, it’s fast. Switching an optical transistor with another optical transistor has a theoretical speed measured in femtoseconds (10⁻¹⁵ s), as compared to the pokey nanoseconds (10⁻⁹ s) we measure performance in today.
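To put those timescales in perspective, here is a quick back-of-the-envelope sketch. The figures are just the round orders of magnitude cited above, not measured device numbers:

```python
# Rough comparison of the switching timescales mentioned above.
# Both values are illustrative orders of magnitude, not device specs.

optical_switch_time_s = 1e-15    # femtosecond-scale optical switching (theoretical)
electronic_switch_time_s = 1e-9  # nanosecond-scale timing typical of today's chips

speedup = electronic_switch_time_s / optical_switch_time_s
print(f"Theoretical switching-speed advantage: {speedup:,.0f}x")  # ~1,000,000x
```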

The problem with using light to switch light is that it’s also extremely power inefficient and typically functions best over longer distances. Hybrid devices that combine optics and electronics, using the electronics for signaling and light for actually carrying information, have been difficult to build due to significant differences in scale, as well as the energy losses incurred when switching from light to electricity and back again.

Image by IBM showing the difficulty of scaling photonic solutions up to exascale computing. (The image predates this discovery.)

The researchers used a new, more efficient type of photonic crystal, allowing them to create both electrical-to-optical and optical-to-electrical devices. The team built an electro-optical modulator that transmitted data at 40Gb/s and a photoreceiver that operated at 10Gb/s. Power consumption was dramatically lower, at just 42 attojoules per bit.
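For a sense of scale, multiplying the reported energy per bit by the data rates gives the implied average power draw. This is a rough illustrative calculation that assumes the 42 aJ/bit figure applies to both devices:

```python
# Energy per bit multiplied by data rate gives average power.
# Figures are taken from the reported results; the math is illustrative.

energy_per_bit_j = 42e-18    # 42 attojoules per bit
modulator_rate_bps = 40e9    # 40 Gb/s electro-optical modulator
receiver_rate_bps = 10e9     # 10 Gb/s photoreceiver

modulator_power_w = energy_per_bit_j * modulator_rate_bps
receiver_power_w = energy_per_bit_j * receiver_rate_bps

print(f"Modulator: ~{modulator_power_w * 1e6:.2f} microwatts")  # ~1.68 uW
print(f"Receiver:  ~{receiver_power_w * 1e6:.2f} microwatts")   # ~0.42 uW
```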

At these speeds and power-consumption levels, hybrid optical/electrical systems could potentially be used in future devices to provide interconnects between chips (for example, for maintaining cache coherence between multi-core CPUs). But taking advantage of this capability would also require chips to get bigger. The optical hardware simply can’t be shrunk to the same scale as conventional logic transistors.

There’s no chance of this technology being used to build a full-scale chip; a Core i7 implemented using current optical technology would measure 48m², a size the standard ATX form factor can’t exactly accommodate. But the idea that making components larger might ultimately allow us to improve performance isn’t crazy.

With Moore’s law transistor density scaling ending and Dennard scaling long since dead, the power-efficiency and performance improvements from switching to optical interconnects would presumably be larger than anything still to be eked out of smaller process nodes. That’s particularly likely to be true given that this technology is still years from adoption; we’ll be well past 5nm by the time any plausible solution could come to market.
