Larger, Hybrid Optical CPUs May Be Competitive With Silicon Designs

Researchers have discovered a new method of potentially integrating optical interconnects at the chip level. If successful, such an approach could theoretically allow for a significant increase in overall performance, not to mention power savings.

Light-based computing has several intrinsic properties to recommend it. First and foremost, it’s fast. Switching an optical transistor with another optical transistor has a theoretical speed measured in femtoseconds (10⁻¹⁵ seconds), as compared with the pokey nanoseconds (10⁻⁹ seconds) we measure performance in today.
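
To put that gap in perspective, here's a quick back-of-the-envelope calculation (a sketch using only the timescales quoted above, not measured device figures):

```python
# Order-of-magnitude comparison of the switching timescales above.
femtosecond = 1e-15  # theoretical optical switching timescale, in seconds
nanosecond = 1e-9    # typical electronic switching timescale, in seconds

speedup = nanosecond / femtosecond
print(f"Theoretical timescale advantage: {speedup:.0e}x")  # 1e+06x
```

Six orders of magnitude, in other words — which is why the idea keeps attracting research interest despite its practical hurdles.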

The problem with using light to switch light is that it’s also extremely power inefficient and typically functions best over longer distances. Hybrid devices that combine optics and electronics, using the electronics for signaling and light for actually carrying information, have been difficult to build due to significant differences in scale, as well as the energy losses incurred when switching from light to electricity and back again.

Image by IBM showing the difficulty of scaling photonic solutions up to exascale computing. (The image predates this discovery.)

The researchers used a new, more efficient type of photonic crystal, allowing them to create both electrical-to-optical and optical-to-electrical devices. The team built both an electro-optical modulator that transmitted data at 40Gb/s and a photoreceiver that operated at 10Gb/s. Energy consumption was dramatically lower than in previous designs, at just 42 attojoules per bit.
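
An energy-per-bit figure can be converted into steady-state power draw by multiplying by the bit rate. Here's that arithmetic for the modulator's reported numbers (a sketch using only the figures quoted above):

```python
# Back-of-the-envelope link power from the reported figures:
# 42 attojoules per bit, transmitting at 40 Gb/s.
energy_per_bit_j = 42e-18  # 42 aJ/bit, in joules
bit_rate_bps = 40e9        # 40 Gb/s, in bits per second

power_w = energy_per_bit_j * bit_rate_bps
print(f"Link power: {power_w * 1e6:.2f} microwatts")  # 1.68 microwatts
```

Under two microwatts for a 40Gb/s link is what makes these figures interesting for dense on-package interconnect, where thousands of links could coexist within a modest power budget.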

At these speeds and energy levels, hybrid optical/electrical systems could potentially be used in future devices to provide interconnects between chips — for example, to maintain cache coherence across multi-core CPUs. But taking advantage of this capability would also require chips to get bigger. The optical hardware simply can’t be shrunk to the same scale as conventional logic transistors.

There’s no chance of this technology being used to build a full-scale chip; a Core i7 implemented entirely in current optical technology would measure 48 m², which is somewhat more than the standard ATX form factor supports. But the idea that making components larger might ultimately allow us to improve performance isn’t crazy.

With Moore’s law transistor density scaling ending and Dennard scaling long since dead, the power efficiency and performance improvements from switching to optical interconnects would presumably be larger than anything still to be eked out of smaller process nodes. That’s particularly likely to be true given that this technology is still years from adoption — we’ll be well past 5nm by the time any plausible solution could come to market.
