From Hearing to Knowing

Episode 23: Crypto Is Unsafe: Today's Quantum Computing

Charlotte Season 2 Episode 23

 A quantum computer small enough to fit in a lab and fast enough to cause trouble is no longer science fiction, which means the cryptography holding the digital world together is starting to look a bit… porous. This episode walks through how we got here, what breaks first, and why your future cybersecurity strategy may involve equal parts math, engineering, and mild existential dread. 

Today we’re going to spend some time in a space where physics, cryptography, and real‑world digital security collide. It’s a space that has been theoretical for a long time, but two papers published within twenty‑four hours of each other — one on March 31, 2026 from Oratomic and Caltech, and one on April 1, 2026 from Google Quantum AI — have shifted the conversation in a way that feels different. Not speculative. Not hypothetical. But operational.

The first paper is titled “Shor’s algorithm is possible with as few as 10,000 reconfigurable atomic qubits,” authored by Madelyn Cain, Qian Xu, Robbie King, Lewis Picard, Harry Levine, Manuel Endres, John Preskill, Hsin‑Yuan Huang, and Dolev Bluvstein. The second is “Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities,” authored by Ryan Babbush, Adam Zalcman, Craig Gidney, Michael Broughton, Tanuj Khattar, Hartmut Neven, Thiago Bergamaschi, Justin Drake, and Dan Boneh.

Together, these papers form a kind of stereo image of the near‑future quantum landscape. One shows that the number of qubits needed to run Shor’s algorithm — the algorithm that breaks RSA and elliptic‑curve cryptography — is far smaller than we thought. The other shows that on certain hardware, the algorithm can run fast enough to attack live cryptocurrency transactions. When you put these results together, you get a picture that is no longer theoretical. It’s a picture of quantum computers that are not just possible, but strategically relevant.

To understand why this matters, we need to talk briefly about Shor’s algorithm. Shor’s algorithm is a quantum algorithm discovered in 1994 that can factor large integers and solve discrete logarithms exponentially faster than classical algorithms. These two problems — factoring and discrete logarithms — are the mathematical foundations of RSA and elliptic‑curve cryptography. If you can run Shor’s algorithm at scale, you can break the cryptography that secures the internet, financial systems, firmware updates, identity documents, and, of course, cryptocurrencies.
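The quantum speedup in Shor's algorithm lives entirely in one step: period finding. The rest of the reduction from factoring to order finding is classical number theory, and it can be sketched in a few lines of Python. In this illustrative sketch (the function names are mine, not from either paper), the brute-force `order` function stands in for the quantum subroutine:

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n, found by brute force.
    This exponential-time search is exactly the step Shor's
    algorithm replaces with fast quantum period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_skeleton(n, a):
    """Classical reduction from factoring n to finding the order of a."""
    g = gcd(a, n)
    if g > 1:
        return g, n // g          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd order: retry with a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root of 1: retry
    p = gcd(y - 1, n)
    return p, n // p

print(shor_classical_skeleton(15, 7))  # (3, 5)
```

For n = 15 and a = 7, the order is 4, and gcd(7² − 1, 15) = 3 yields the factor. Classically, the `order` step takes exponential time for large n; quantumly, it doesn't. That is the whole story.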

For decades, the saving grace has been that running Shor’s algorithm at scale seemed impossible. The estimates said you’d need millions of physical qubits, and the largest quantum computers in the world had a few hundred. It was like saying, “Yes, in theory you could break the world’s cryptography, but only if you had a computer the size of a football stadium made of diamonds.” Technically true, but not something you worry about on a Tuesday.

The Oratomic and Caltech paper changes that. It shows that Shor's algorithm can run with as few as ten thousand physical qubits — not millions. And it does this using a family of error‑correcting codes called quantum LDPC codes. LDPC stands for "low‑density parity check," which is a fancy way of saying that each parity check involves only a few qubits, and each qubit participates in only a few checks. This makes the code sparse and efficient to decode.

If you’re not familiar with quantum error correction, here’s the simplest way to think about it: a logical qubit is the qubit you wish you had — the ideal, noise‑free one. A physical qubit is the actual, imperfect atom or superconducting circuit in the lab. Error‑correcting codes combine many physical qubits to create one logical qubit that behaves reliably. The fewer physical qubits you need per logical qubit, the more efficient your architecture becomes.
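A classical repetition code makes this intuition concrete. It's only a loose analogy — quantum codes can't simply copy a state the way this does — but it shows how redundancy plus majority voting turns many unreliable bits into one reliable one:

```python
import random

def encode(bit, n=5):
    """Store one 'logical' bit redundantly across n 'physical' bits."""
    return [bit] * n

def corrupt(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: correct as long as fewer than half the bits flipped."""
    return int(sum(codeword) > len(codeword) / 2)

random.seed(0)
p, trials = 0.05, 100_000
physical_rate = sum(random.random() < p for _ in range(trials)) / trials
logical_rate = sum(decode(corrupt(encode(0), p)) for _ in range(trials)) / trials
print(f"physical error rate ~{physical_rate:.4f}, logical ~{logical_rate:.5f}")
```

With a 5% physical error rate and five-fold redundancy, the logical error rate drops to roughly a tenth of a percent — the same trade quantum codes make, just with subtler machinery.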

Surface codes — the most widely studied quantum error‑correcting codes — typically require hundreds or thousands of physical qubits per logical qubit. They’re robust, but they’re expensive. LDPC codes, by contrast, can encode many logical qubits in a relatively small number of physical qubits. The specific LDPC codes analyzed in the Oratomic paper include a [[2610, 744, ≤16]] code, a [[4350, 1224, ≤20]] code, and a [[5278, 1480, ≤24]] code. The double‑bracket notation [[n, k, d]] is the standard one for quantum codes: n physical qubits, k logical qubits, and distance d. The distance is a measure of how many errors the code can tolerate before it fails.

These codes achieve encoding rates around thirty percent — meaning that roughly one out of every three physical qubits becomes a logical qubit. That’s an enormous improvement over surface codes, which often have rates below five percent. And the performance is made possible by two things: nonlocal connectivity and improved decoders.
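Those rates are easy to check directly from the code parameters quoted above. The surface-code line below uses an assumed 1000-to-1 overhead for comparison; that figure is illustrative, not a number from either paper:

```python
# Encoding rate k/n for the three LDPC codes quoted above.
ldpc_codes = [(2610, 744, 16), (4350, 1224, 20), (5278, 1480, 24)]

for n, k, d in ldpc_codes:
    print(f"n={n}, k={k}, d<={d}: encoding rate {k / n:.1%}")

# Assumed surface-code overhead for comparison (illustrative only).
print(f"surface code at 1000 physical per logical: {1 / 1000:.2%}")
```

All three codes land just under thirty percent — roughly one logical qubit for every three and a half physical qubits, versus one per thousand in the surface-code comparison.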

Nonlocal connectivity is where neutral‑atom systems shine. In a neutral‑atom quantum computer, each qubit is literally a single atom held in place by a tiny optical tweezer — a focused laser beam. These tweezers can be moved around like little tractor beams, allowing the atoms to be rearranged during computation. This means you can bring any two qubits together to perform a gate, or create long‑range stabilizers that span the entire code block. It’s like having a circuit board where the wires can move themselves to wherever they’re needed.

The decoders — belief propagation combined with localized statistics — achieve block error rates around 10⁻¹¹ per cycle at physical error rates of 0.1 percent. If that number feels abstract, here’s a way to think about it: a block error rate of 10⁻¹¹ means that if you ran the error‑correction cycle ten billion times, you’d expect one failure. That’s the level of reliability you need to run deep quantum circuits like Shor’s algorithm.

The architecture proposed in the Oratomic paper is divided into four zones. The memory zone holds the large LDPC block that stores most of the logical qubits. The processor zone uses a smaller high‑rate code for active computation — this is where the Toffoli‑heavy parts of Shor’s algorithm run. The operation zone performs Pauli‑product measurements, which are the backbone of code‑surgery‑based logical operations. And the resource zone distills magic states — special quantum states needed to implement non‑Clifford gates like the Toffoli.

If you’re not familiar with magic states, here’s the short version: quantum computers can perform a certain set of gates easily and fault‑tolerantly — these are called Clifford gates. But Clifford gates alone are not enough for universal quantum computation. To get universality, you need at least one non‑Clifford gate, like the T gate or the Toffoli gate. Magic states are special states that, when consumed, allow you to implement these non‑Clifford gates. Distilling magic states is like refining crude oil into jet fuel — it’s a resource‑intensive process, but once you have the fuel, you can fly.

The Oratomic paper shows that with around ten thousand to twenty‑six thousand physical qubits, you can run Shor’s algorithm on elliptic‑curve discrete logarithms — specifically the P‑256 curve — in days. RSA‑2048 takes longer, but still within the realm of “practical for a well‑funded adversary.” The key limitation is speed. Neutral‑atom systems have stabilizer measurement cycles around one millisecond. That’s slow compared to superconducting qubits, which operate in microseconds. But the qubit count is dramatically lower than previous estimates, and the architecture is physically plausible.

Now let’s turn to the Google Quantum AI paper, because it addresses a different question: not how many qubits are needed, but how fast the computation can run. The Google team provides new resource estimates for breaking the elliptic curve secp256k1, which is used in Bitcoin, Ethereum, and many other cryptocurrencies. They show that Shor’s algorithm can solve the discrete logarithm with around 1200 logical qubits and 70 to 90 million Toffoli gates. They do not publish the circuits themselves, for responsible‑disclosure reasons, but they publish a zero‑knowledge proof that the circuits exist and meet these resource bounds.

If you’re not familiar with zero‑knowledge proofs, here’s the idea: a zero‑knowledge proof lets someone prove that a statement is true without revealing why it’s true. It’s like proving you know the password to a vault without ever saying the password out loud. In this case, the Google team proves that they have constructed quantum circuits with the claimed resource counts, without revealing the circuits themselves. This prevents misuse while still allowing independent verification.

On a superconducting architecture with microsecond‑scale error‑correction cycles, the computation could run in minutes. That is fast enough to attack live transactions. Bitcoin has a ten‑minute block time. Ethereum has a twelve‑second block time. Solana has a block time under a second. A quantum computer that can run Shor’s algorithm in minutes can intercept a transaction in the mempool, derive the private key, and broadcast a fraudulent transaction before the honest one is confirmed.
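The timing argument can be made concrete with a back-of-the-envelope check. The five-minute attack runtime and the 0.4-second Solana slot are assumptions for illustration; real attack windows also depend on mempool propagation and how many confirmations a recipient waits for:

```python
# Can an assumed five-minute Shor run finish inside one block interval?
attack_runtime_s = 5 * 60
block_times_s = {"Bitcoin": 600, "Ethereum": 12, "Solana": 0.4}

for chain, block_s in block_times_s.items():
    fits = attack_runtime_s <= block_s
    status = "open" if fits else "closed within a single block"
    print(f"{chain}: block time {block_s}s, on-spend window {status}")
```

Under these assumptions, only Bitcoin's ten-minute block leaves a single-block window open — but recipients on faster chains who wait minutes' worth of confirmations before treating a payment as final re-open the window there too.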

This leads to a distinction the Google paper formalizes: fast‑clock versus slow‑clock quantum computers. Fast‑clock systems include superconducting qubits, silicon spin qubits, and photonic qubits. They have error‑correction cycles in the microsecond range. Slow‑clock systems include neutral atoms and ion traps. They have error‑correction cycles in the millisecond range. The difference is three orders of magnitude.

This difference determines what kinds of attacks are possible. Fast‑clock systems can perform on‑spend attacks — attacks on transactions in transit. Slow‑clock systems cannot. But slow‑clock systems can perform at‑rest attacks — attacks on public keys that have been exposed for long periods. And both types of systems can perform on‑setup attacks — attacks on protocol parameters that create reusable classical exploits.

The implications for cryptocurrencies are significant. Bitcoin is partially protected because most addresses hide the public key behind a hash. But older outputs, especially Pay‑to‑Public‑Key outputs from early mining, expose the key directly. Reused addresses also expose the key. The Google paper estimates that over two million BTC are vulnerable to at‑rest attacks. These coins cannot migrate to post‑quantum schemes because the owners have lost the keys. They are fixed targets.

Ethereum is more exposed. Its account model reveals the public key earlier. Its smart contracts rely on ECDLP in more places. And its scaling solutions, especially data availability sampling, introduce on‑setup vulnerabilities. A single quantum computation could recover the toxic waste from a trusted setup and create a reusable exploit.

The two papers together show that the threat is approaching from both directions. The Oratomic paper shows that the qubit count required for Shor’s algorithm is far lower than previously believed. The Google paper shows that the runtime on fast‑clock hardware is far shorter than previously believed. The result is a narrowing window for proactive migration to post‑quantum cryptography.

The technical heart of the threat lies in the structure of Shor’s algorithm. The modular exponentiation step dominates the cost. It is implemented using Toffoli gates. The total runtime is the Toffoli count multiplied by the error‑correction cycle time. High‑rate LDPC codes reduce the qubit count but do not reduce the cycle time. Fast‑clock hardware reduces the cycle time but does not reduce the qubit count. When you combine high‑rate codes with fast‑clock hardware, you get a machine that is both small and fast. That is the machine that can perform on‑spend attacks.
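That multiplication can be sketched directly. The model below is a deliberate simplification — it ignores Toffoli parallelism, routing, and magic-state distillation, and assumes one error-correction cycle per Toffoli — but it shows why clock speed, not qubit count, separates minutes from hours:

```python
# Runtime ~ Toffoli count x error-correction cycle time (simplified model).
toffoli_count = 80e6   # midpoint of the 70-90 million estimate quoted above

for name, cycle_s in [("fast clock (superconducting, ~1 microsecond)", 1e-6),
                      ("slow clock (neutral atom, ~1 millisecond)", 1e-3)]:
    runtime_s = toffoli_count * cycle_s
    if runtime_s < 3600:
        print(f"{name}: about {runtime_s / 60:.1f} minutes")
    else:
        print(f"{name}: about {runtime_s / 3600:.1f} hours")
```

The same circuit comes out at around a minute and a half on the fast clock and around a day on the slow one — consistent with the split between on-spend and at-rest attacks described above.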

The ethical dimension is also shifting. Historically, quantum cryptanalysis has been published openly. But as the Google paper notes, publishing detailed attack circuits now carries real risk. Their solution is to publish resource estimates and a zero‑knowledge proof, but not the circuits themselves. This is a new model for responsible disclosure in quantum computing.

The broader implications extend beyond cryptocurrencies. Many systems rely on ECDLP: TLS, SSH, secure boot, firmware signing, identity documents, IoT devices, and more. The migration to post‑quantum cryptography is underway, but it is slow. And dormant assets — especially in blockchains — pose a unique challenge. They cannot be upgraded. They cannot be protected. They will eventually be vulnerable.

Quantum computers are becoming cryptographically relevant. The engineering challenges are real, but the trajectory is clear. The responsible path forward is to accelerate migration to post‑quantum cryptography, reduce public key exposure, and prepare for the policy questions that dormant assets will raise.

That is the landscape painted by these two papers. One shows that quantum computers capable of running Shor’s algorithm at scale are smaller than we thought. The other shows that they are faster than we thought. Together, they show that the era of cryptographically relevant quantum computing is no longer theoretical. It is emerging.

In short, what all of this means for global cybersecurity is that the long‑promised “quantum threat” has officially moved from the speculative fiction aisle to the “please update your systems before something terrible happens” aisle. Cryptography that once felt comfortably unbreakable now looks more like a polite suggestion. Neutral‑atom machines will eventually rummage through long‑exposed keys like raccoons in an unsecured trash bin, while fast‑clock superconducting systems may one day sprint fast enough to intercept live transactions before anyone notices. It’s not the end of the world, but it is the end of pretending we have decades to prepare. The migration to post‑quantum cryptography is no longer a research project — it’s a global hygiene issue, so let’s get scrubbing. 

Thank you for your time. Content may be edited for style, length, and dark or dry sense of humor.