How to Make Sense of Google’s Quantum Supremacy Claim
If you’ve been reading some of the sensationalist headlines about the paper Google published in Nature claiming “quantum supremacy,” you’d be forgiven for thinking the days of omniscient supercomputers and shattered security systems are nearly upon us. You may have been curious enough to wade through the paper itself to see what has actually been achieved and how far there is to go; if so, awesome. If not, here’s a simplified explanation of the situation.
Quantum Supremacy: Say What?
To put my cards on the table, I hate the term quantum supremacy as it has been defined. To me, and to any number of mass-market media outlets, it brings up visions of quantum computers dominating the landscape. Instead, what it actually means is that a quantum computer has done something, even something not very useful, that a classical digital computer can’t simulate in a reasonable time. It’s actually pretty easy to do something that can’t be fully simulated on a traditional computer — chemical reactions, for example. What makes the quantum version interesting is that it is an early milestone for a technology that is destined to become a powerful computing paradigm.
So What Did Google Actually Do?
In short, Google ran randomly generated quantum circuits on its 53-qubit Sycamore processor and sampled their outputs, a task it estimated would take a state-of-the-art classical supercomputer on the order of 10,000 years to simulate. On the surface, that sounds like the sort of thing any geek with a 53-qubit quantum computer lying around (like at IBM, for example) could knock off in a weekend. But Google accomplished two other things that make its achievement unique. First, it was able to control errors in its system — a notoriously hard issue with quantum computers — sufficiently well that its outputs came quite close to the theoretical results. Second, it did the math and simulation at smaller qubit counts to be fairly sure that its error estimates were realistic. That’s key because there isn’t currently any way to verify the full 53-qubit results on a traditional computer.
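To get a feel for why the full result can’t be verified directly, consider the memory a brute-force state-vector simulation needs: 2^n complex amplitudes for n qubits. A back-of-the-envelope sketch (assuming one double-precision complex number, 16 bytes, per amplitude):

```python
# Memory needed to hold the full state vector of an n-qubit system,
# assuming one double-precision complex amplitude (16 bytes) per basis state.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 53):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

At 53 qubits that works out to roughly 128 pebibytes, far beyond any machine’s RAM, which is why IBM’s proposed simulation (discussed below) leaned on Summit’s disk storage rather than memory alone.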
How Important Is This?
I’m reminded a bit of the coverage of the DARPA autonomous vehicle challenges 15 years ago. It was easy to believe that self-driving cars were just around the corner. Similarly, the fact that a quantum computer complex enough to be hard to simulate can be built is only a small — but very expensive and impressive — step in getting to a quantum computer that can be used to solve practical problems like molecular simulations, or dangerous ones like key cracking. What isn’t clear is whether we are on a truly long slog, similar to the one to create self-driving cars, or whether there are going to be some shortcuts. For example, startup PsiQ believes it can harness photonics to build a commercially viable quantum computer much sooner than competitors using more common approaches.
What About IBM’s Rebuttal?
IBM’s rebuttal argued that Google’s 10,000-year estimate was far too high: with enough disk storage, IBM said, its Summit supercomputer could simulate the same circuit in about 2.5 days. Google immediately pointed out that IBM’s response was entirely theoretical, and challenged IBM to prove it. Now, you might be right to wonder whether there are better ways to spend the massive amount of time and energy required. But since Google published its data, running the simulation on Summit would have the additional benefit of validating (or not) Google’s results and its assumptions about the effects of errors.
What’s Next For Quantum Computing?
For anyone used to thinking of bit depth in conventional computing terms, 53 bits sounds pretty impressive. After all, it is more than the 32 bits we lived with until recently. Except in quantum computing, those bits represent the total capacity of all the registers in a system. Those registers typically include not just the qubits needed to represent the input and output, but sets of registers to store intermediate results and make it possible to run iterative algorithms. And even though qubits can hold a large amount of state compared with conventional bits — thanks to superposition and entanglement — they collapse to plain classical bits the moment you measure them to read out their data.
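That measurement step can be sketched in a few lines: a single qubit’s state is a pair of complex amplitudes, but reading it out yields one classical bit, with probabilities given by the squared magnitudes of the amplitudes (the Born rule). A minimal sketch of the math, not of how real hardware works:

```python
import random

# A single qubit state is a pair of complex amplitudes for |0> and |1>,
# normalized so that |amp0|^2 + |amp1|^2 == 1.
def measure(amp0: complex, amp1: complex) -> int:
    """Collapse the superposition: return 0 or 1 with Born-rule probabilities."""
    p0 = abs(amp0) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition: each measurement is a fair coin flip,
# no matter how much "state" the amplitudes carried beforehand.
s = 2 ** -0.5
samples = [measure(s, s) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 0.5
```

However rich the superposition, each readout delivers exactly one bit per qubit, which is why qubit counts can’t be compared one-for-one with classical word sizes.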
Making matters worse, error rates on existing quantum computers are still high enough that reliable outcomes require combining multiple physical qubits into error-corrected logical qubits. To crack 2048-bit RSA, for example, it is estimated that around 4,000 reliable logical qubits would be needed. In addition, the qubits would need to cohere — retain their quantum state — for longer than is currently possible. There are other architectural issues as well. In a theoretical quantum computer, any qubit could be entangled with any other in a programming step, but the physical reality of current machines precludes that. Google’s Sycamore, for example, only allows adjacent qubits to be entangled (entanglement is the key property that makes multi-qubit logic gates possible). That limitation can be partly overcome by swapping qubits around, but the swaps take time and therefore make the coherence problem worse. There’s no shortage of investment aimed at solving these problems, but there is no agreed-upon time frame for how long that will take.
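The swap overhead is easy to quantify with a toy cost model. On a hypothetical 1-D chain of qubits where two-qubit gates only act on neighbors, entangling qubits i and j first requires shuttling one of them next to the other (Sycamore’s real 2-D grid routes more efficiently, but the principle is the same):

```python
# Hypothetical cost model: on a 1-D chain where two-qubit gates only act on
# adjacent qubits, a gate between qubits i and j needs |i - j| - 1 swaps to
# bring them next to each other (and as many again to restore the layout).
def swaps_needed(i: int, j: int) -> int:
    return max(abs(i - j) - 1, 0)

print(swaps_needed(0, 1))   # adjacent qubits: no swaps
print(swaps_needed(0, 10))  # 9 swaps, each burning time against coherence
```

Every swap is itself a gate that takes time and introduces error, so sparse connectivity directly eats into the limited coherence budget described above.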