How to Make Sense of Google’s Quantum Supremacy Claim
If you’ve been reading some of the sensationalist headlines about the paper Google published in Nature claiming “quantum supremacy,” you’d be forgiven for thinking the days of omniscient supercomputers and shattered security systems are nearly upon us. You may have been curious enough to wade through the paper itself to see what’s actually been achieved and how far there is to go; if so, awesome. If not, here’s a simplified explanation of the situation.
Quantum Supremacy: Say What?
To put my cards on the table, I hate the term quantum supremacy as it has been defined. To me, and to any number of mass-market media outlets, it brings up visions of quantum computers dominating the landscape. Instead, what it actually means is that a quantum computer has done something, even something not very useful, that a classical digital computer can’t simulate in a reasonable time. It’s actually pretty easy to do something that can’t be fully simulated on a traditional computer — chemical reactions, for example. What makes the quantum version interesting is that it is an early milestone for a technology that is destined to become a powerful computing paradigm.
So What Did Google Actually Do?
In its Nature paper, Google says it programmed its 53-qubit Sycamore processor to run randomly generated quantum circuits and sample their outputs — a task it estimates would take a state-of-the-art supercomputer roughly 10,000 years to simulate. On the surface, that sounds like the sort of thing any geek with a 53-qubit quantum computer lying around (like at IBM, for example) could knock off in a weekend. But Google accomplished two other things that make its achievement unique. First, it was able to control errors in its system — a notoriously hard issue with quantum computers — well enough that its outputs came quite close to the theoretical results. Second, it did the math and simulation at smaller qubit counts to be fairly sure that its error estimates were realistic. That’s key because there isn’t currently any way to verify the full 53-qubit results on a traditional computer.
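The reason the full 53-qubit run can’t be verified classically comes down to memory: a brute-force “statevector” simulation has to hold one complex amplitude per basis state, so its footprint doubles with every added qubit. Here is a minimal sketch of that arithmetic — the 16-bytes-per-amplitude figure assumes double-precision complex numbers, and this is an illustration of the scaling, not Google’s actual verification method:

```python
# Rough illustration: memory needed to hold a full statevector of n qubits,
# at 16 bytes per complex amplitude (double-precision complex numbers).
def statevector_bytes(n_qubits: int) -> int:
    """Bytes needed for 2**n_qubits complex amplitudes, 16 bytes each."""
    return (2 ** n_qubits) * 16

def human(nbytes: float) -> str:
    """Format a byte count in binary units for readability."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} EiB"

for n in (20, 30, 40, 53):
    print(f"{n} qubits: {human(statevector_bytes(n))}")
# 20 qubits fit in 16 MiB; 53 qubits need 128 PiB — far beyond any
# single machine, which is why small-n simulations were used instead.
```

Each additional qubit doubles the requirement, which is why simulating 30 or 40 qubits is routine while 53 is out of reach for a straightforward statevector approach.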
How Important Is This?
I’m reminded a bit of the coverage of the DARPA autonomous vehicle challenges 15 years ago. It was easy to believe that self-driving cars were just around the corner. Similarly, the fact that a quantum computer complex enough to be hard to simulate can be built is only a small — but very expensive and impressive — step in getting to a quantum computer that can be used to solve practical problems like molecular simulations, or dangerous ones like key cracking. What isn’t clear is whether we are on a truly long slog, similar to the one to create self-driving cars, or whether there are going to be some shortcuts. For example, startup PsiQ believes it can harness photonics to build a commercially viable quantum computer much sooner than competitors using more common approaches.
What About IBM’s Rebuttal?
IBM’s rebuttal was that its Summit supercomputer could, by spilling the statevector to disk, simulate Google’s circuit in about 2.5 days rather than 10,000 years — though IBM never actually ran the computation. Google immediately pointed out that IBM’s response was entirely theoretical, and challenged the company to prove it. Now, you might be right to wonder whether there are better ways to spend the massive amount of time and energy required. But since Google published its data, running the simulation on Summit would have the additional benefit of validating (or not) Google’s results and its assumptions about the effects of errors.
What’s Next For Quantum Computing?
For anyone used to thinking of bit depth in conventional computing terms, 53 bits sounds pretty impressive. After all, it’s more than the 32 bits we lived with until recently. Except in quantum computing, those bits represent the total capacity of all the registers in the system. Those registers typically include not just the qubits needed to represent the input and output, but sets of registers to store intermediate results and make it possible to run iterative algorithms. And even though qubits can hold a large amount of state compared to conventional bits — thanks to superposition and entanglement — measuring them to read out results yields just one classical bit per qubit.
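The readout point above can be seen in a toy example. This is an illustrative sketch, not a real quantum simulator: n qubits in an equal superposition carry 2**n amplitudes of internal state, but a single measurement returns only n classical bits, sampled according to the Born rule:

```python
import random

# Toy sketch: 3 qubits in an equal superposition have 8 internal amplitudes,
# but one measurement collapses them to a single 3-bit classical outcome.
n = 3
amplitudes = [1 / (2 ** n) ** 0.5] * (2 ** n)  # uniform superposition
probs = [abs(a) ** 2 for a in amplitudes]      # Born rule: P = |amplitude|^2

# Measurement: sample one basis state by probability.
outcome = random.choices(range(2 ** n), weights=probs)[0]
bits = format(outcome, f"0{n}b")               # readout is just n classical bits
print(f"{2 ** n} amplitudes of internal state, but readout is only: {bits}")
```

The gap between internal state (exponential in qubit count) and readout (linear in qubit count) is exactly why quantum algorithms must be cleverly structured rather than simply “storing more data.”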
Making matters worse, error rates on existing quantum computers are still high enough that reliable outcomes require combining multiple physical qubits into error-corrected logical qubits. To crack 2048-bit RSA, for example, an estimated 4,000 reliable logical qubits would be needed. In addition, the qubits would need to cohere — retain their quantum state — for longer than is currently possible. There are other architectural issues as well. In a theoretical quantum computer, any qubit could be entangled with any other in a programming step, but the physical reality of current hardware precludes that: Google’s Sycamore, for example, only allows adjacent qubits to be entangled (entanglement is the key property that makes multi-qubit logic gates programmable). That limitation can be partly overcome by swapping qubits around, but swaps take time, which makes the coherence problem worse. There’s no shortage of investment going into solving these problems, but there isn’t any agreed-upon time frame for how long it will take.
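A back-of-the-envelope calculation shows why error correction dominates the hardware requirements. The 1,000-physical-qubits-per-logical-qubit overhead below is an illustrative assumption — real overheads depend on the physical error rate and the error-correcting code used — and not a figure from Google’s paper:

```python
# Back-of-the-envelope sketch of error-correction overhead.
# PHYSICAL_PER_LOGICAL is an assumed illustrative ratio, not a measured value.
LOGICAL_QUBITS_FOR_RSA_2048 = 4_000  # estimate cited in the article
PHYSICAL_PER_LOGICAL = 1_000         # assumed error-correction overhead

physical_needed = LOGICAL_QUBITS_FOR_RSA_2048 * PHYSICAL_PER_LOGICAL
print(f"~{physical_needed:,} physical qubits needed, vs. Sycamore's 53")
# → ~4,000,000 physical qubits needed, vs. Sycamore's 53
```

Even under generous assumptions, the gap between today’s 53 physical qubits and the millions needed for cryptographically relevant machines is several orders of magnitude — which is the real takeaway from the “how important is this?” question above.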