The quantum computer revolution could break encryption — but more-secure algorithms can safeguard privacy
18 February 2022 (Heraklion, Crete) – In cybersecurity circles, they call it “Q-Day”: the day when quantum computers will break the Internet. Almost everything we do online is made possible by the quiet, relentless hum of cryptographic algorithms. These are the systems that scramble data to protect our privacy, establish our identity and secure our payments. And they work well: even with the best supercomputers available today, breaking the codes that the online world currently runs on would be an almost hopeless task.
But machines that exploit the quirks of quantum physics threaten that entire deal. If they reach their full scale, quantum computers would crack current encryption algorithms exponentially faster than even the best non-quantum machines can. And I am reminded of something Steve King once told me:
A real quantum computer would be extremely dangerous. As we learn to handle the currently known attacks, cybercriminals find new ways to get around, under and through our best defenses. Many of us have come to realize that there are no silver-bullet solutions to achieve cybersecurity and that no one technology, no one vendor, no one “project” will ultimately suffice. We know that what is needed is a strong cyber immune system, capable of quickly detecting unexpected threats and reacting immediately to deal with them.
But we all certainly fear that some breakthrough in science will take the vicious circle of warfare to a new level. I do see an opportunity in quantum computing where researchers have had success applying a new speed paradigm to computing by using quantum effects to solve certain problems with astronomically fewer operations. But obviously I also see the dark side.
And when it comes to cybersecurity, Steve is someone you must listen to. Most of what I learned about cybersecurity came from my reading, an intensive executive course at MIT Sloan, and intense workshops at cybersecurity vendor FireEye. But I did not begin to understand cybersecurity until Steve King took me under his wing. More about Steve here and his company here.
Yes, it can feel like we are in one of those cheesy time-travel tropes in which machines that don’t yet exist endanger not only our future communications, but also our current and past ones. But as I have noted in numerous earlier posts, thieves who eavesdrop on Internet traffic are already accumulating encrypted data, which they could unlock once quantum computers become available, potentially viewing everything from our medical histories to our old banking records. Let’s say that a quantum computer is deployed in 2024. Everything you’ve done on the Internet before 2024 will be open for discussion.
But even the most bullish proponents of quantum computing say we’ll have to wait a while until the machines are powerful enough to crack encryption keys, and many doubt it will happen this decade – if at all.
But as I learned recently at the ETH Cyber Workshop in Zurich, the risk is real enough that the Internet is being readied for a makeover, to limit the damage if Q-Day happens. That means switching to stronger cryptographic systems, or cryptosystems. Fortunately, decades of research in theoretical computer science have turned up plenty of candidates. These post-quantum algorithms seem impervious to attack: even using mathematical approaches that take quantum computing into account, programmers have not yet found ways to defeat them in a reasonable time. Which of these algorithms will become standard may depend in large part on a decision soon to be announced by the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland.
A few years ago NIST initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms. Those of you who subscribe to my cybersecurity posts know there has been (what seems like) a zettabyte of material to plow through. But if you have, it is a master class in quantum computing and cryptography. I will share some of that information throughout this post.
The U.S. National Security Agency had also announced a similar program.
NIST has winnowed down the list of submissions to 15. In the next couple of months, it will select a few winners, and then publish official versions of those algorithms. And note: similar organizations in other countries, from China to France, are doing the same.
But that will be only the beginning of a long process of updating the world’s cryptosystems – a change that will affect every aspect of our lives online, although the hope is that it will be invisible to the average Internet user. Experience shows that it could be a bumpy road: early tests by firms such as Google haven’t all run smoothly. As one of my professors at MIT noted: “I think it’s something we know how to do; it’s just not clear that we’ll do it in time.”
Even if Q-Day never happens, the possibility of code-breaking quantum machines has already changed computer science – and, in particular, the ancient art of cryptography.
The birth of public-key cryptography
Armies and spies have always been able to send messages securely even when a channel – be it a messenger pigeon or a radio link – is susceptible to eavesdropping, as long as their messages were encrypted. However, until the 1970s, this required the two parties to agree on a shared secret cipher in advance.
Then, in 1976, three U.S. computer scientists, Whitfield Diffie, Martin Hellman and Ralph Merkle, came up with the revolutionary concept of public-key cryptography, which allows two people to exchange information securely even if they have made no prior agreement. The idea rests on a mathematical trick that uses two numbers: one, the public key, is used to encrypt a message, and it is different from the second, the private key, used to decrypt it. Someone who wants to receive confidential messages can announce their public key to the world, say, by printing it in a newspaper. Anyone can use the public key to scramble their message and share it openly. Only the receiver knows the private key, enabling them to unscramble the information and read it.
In practice, public keys are not typically used to encrypt the data, but to securely share a conventional, symmetric key – one that both parties can use to send confidential data in either direction. (Symmetric-key systems can also be weakened by existing quantum algorithms, but not in a catastrophic way.)
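If you want to see what that workflow looks like in code, here is a minimal Python sketch of the wrap-a-symmetric-key pattern, using the third-party `cryptography` package; the message and variable names are purely illustrative.

```python
# Minimal sketch: the public key wraps a one-time symmetric key, and that
# symmetric key encrypts the actual data.  Uses the 'cryptography' package.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver generates a key pair and publishes the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: pick a fresh symmetric key, encrypt the message with it,
# then wrap the symmetric key with the receiver's public key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"meet me at noon", None)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the symmetric key with the private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))  # b"meet me at noon"
```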
For the first two decades of the Internet age, starting in the mid-1990s, the most commonly used public-key algorithm was RSA, named after its inventors, Ron Rivest, Adi Shamir and Leonard Adleman.
RSA is based on prime numbers – whole numbers such as 17 or 53 that are not evenly divisible by any numbers except themselves and 1. The public key is the product of two (or more) large primes. Only the key’s owner knows the factors, which constitute the private key. Privacy is protected by the fact that, although multiplying two large numbers is straightforward, finding the unknown prime factors of a very large number is extremely hard.
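Here is a toy illustration of that asymmetry in Python, using textbook-sized primes; real RSA keys use primes hundreds of digits long, and real implementations add padding and other safeguards.

```python
# Toy RSA with tiny textbook numbers -- fine for showing the math,
# hopelessly insecure for anything real.
from math import gcd

p, q = 61, 53                      # the secret prime factors
n = p * q                          # public modulus: easy to compute...
phi = (p - 1) * (q - 1)            # ...but phi is only easy if you know p and q
e = 17                             # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                # private exponent: e * d == 1 (mod phi)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(n, ciphertext, recovered)    # 3233 2790 65
```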
More recently, the Internet has been transitioning away from RSA, which is vulnerable even to classical – as opposed to quantum – attacks. In 2018, the Internet Engineering Task Force (IETF), a consensus-based virtual organization that steers the adoption of security standards on a global scale, endorsed another public-key system to replace it. That system is called elliptic-curve cryptography, because its mathematics grew out of a branch of nineteenth-century geometry that studies objects called elliptic curves.
Elliptic-curve cryptography is based on taking a point on the curve and, in effect, raising it to the nth power by adding the point to itself n times. Only one party knows the number n, which is the private key. Computing that nth multiple of a point is easy, but given the result, it is extremely hard to recover n. This technique is faster and, for a given key size, more secure than RSA.
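The key exchange built on this kind of one-way math looks roughly like the Python sketch below. To keep it short, it uses ordinary modular exponentiation (classic Diffie-Hellman) as a stand-in; elliptic-curve systems replace “raise g to the nth power” with “add a curve point to itself n times”, but the shape of the exchange is the same, and the numbers here are purely illustrative.

```python
# Toy Diffie-Hellman key exchange built on the discrete-logarithm problem:
# given g and pow(g, n, p), recovering n is hard.  These numbers are far
# too small for real use.
import secrets

p = 2**127 - 1          # a Mersenne prime modulus (toy-sized)
g = 5                   # a public base

alice_n = secrets.randbelow(p - 2) + 1   # Alice's private number n
bob_n = secrets.randbelow(p - 2) + 1     # Bob's private number n

alice_public = pow(g, alice_n, p)        # easy to compute, hard to invert
bob_public = pow(g, bob_n, p)

# Each side combines its own secret with the other's public value and
# arrives at the same shared key, without a secret ever crossing the wire.
assert pow(bob_public, alice_n, p) == pow(alice_public, bob_n, p)
```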
All sorts of devices, from mobile phones to cars, use public-key encryption to connect to the Internet. The technology has also spread beyond cyberspace: for example, the radio-frequency chips in everything from credit cards to security passes typically use elliptic-curve algorithms.
Breaking RSA
Just as the number of Internet users worldwide – and the use of public-key cryptosystems such as RSA – was beginning to grow exponentially, mathematician Peter Shor was laying the groundwork for those algorithms’ demise. In 1994 he showed how a quantum computer should be able to factor large numbers into primes exponentially faster than a classical computer can. One of the steps in Shor’s algorithm can efficiently break an elliptic-curve key, too. As Andy Greenberg notes in his recent monumental book Sandworm about our current cyber-war environment:
These were not the first quantum algorithms, but they were the first to show that quantum computers could tackle practical problems. At the time, it was largely a theoretical exercise, because quantum computers were still dreams for physicists. But later that decade, researchers at IBM performed the first proofs of principle of quantum calculations, by manipulating molecules in a nuclear magnetic resonance machine. By 2001, they had demonstrated that they could run the algorithm — but only to calculate that the prime factors of 15 are 3 and 5. Quantum-computing technology has made enormous progress since then, but running Shor’s algorithm on a large integer is still a long way off.
Still, after this breakthrough, the crypto-research world began to pay attention to the possibility of a Q-day. Researchers had already been studying alternative public-key algorithms, and the news attracted lots of talent to the field.
Lattice-based systems
The majority of the algorithms that made it to NIST’s final roster rely, directly or indirectly, on a branch of cryptography that was developed in the 1990s from the mathematics of lattices. It uses sets of points located at the crossings of a lattice of straight lines that extend throughout space. These points can be added to each other using the algebra of vectors; some can be broken down into sums of smaller vectors. If the lattice has many dimensions — say, 500 — it is very time-consuming to calculate the smallest such vectors. This is similar to the situation with prime numbers: the person who knows the short vectors can use them as a private key, but solving the problem is extremely hard for everyone else.
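A two-dimensional toy, with made-up numbers, shows the asymmetry: two different bases describe exactly the same lattice, and recovering the short vectors from the long, skewed “public” basis takes a brute-force search that becomes hopeless in hundreds of dimensions.

```python
# A 2-D toy of the lattice problem.  The "good" basis already consists of
# short vectors; the "bad" basis spans exactly the same grid of points
# (its change-of-basis matrix has determinant 1) but only with long,
# skewed vectors.  Brute-forcing small integer combinations of the bad
# basis finds a short vector in microseconds here; in ~500 dimensions the
# same search becomes astronomically expensive.
from itertools import product

good_basis = [(1, 0), (0, 1)]        # the short vectors: the private view
bad_basis = [(103, 47), (46, 21)]    # long vectors spanning the same lattice

def combinations(basis, bound):
    """All nonzero integer combinations a*b1 + b*b2 with |a|, |b| <= bound."""
    (x1, y1), (x2, y2) = basis
    for a, b in product(range(-bound, bound + 1), repeat=2):
        if (a, b) != (0, 0):
            yield (a * x1 + b * x2, a * y1 + b * y2)

shortest = min(combinations(bad_basis, 100), key=lambda v: v[0] ** 2 + v[1] ** 2)
print(shortest)   # a unit vector such as (-1, 0), far shorter than either basis vector
```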
Lattices are very, very tough stuff to wrap your brain around, but a member of my cybersecurity listserv provided this article, which is quite simple and will get you through the concept and the math.
Since the 1990s, researchers have developed a plethora of public-key encryption algorithms that either use lattices directly, or are somehow related to them. One of the earliest types, developed in 1996, is called NTRU. Its keys consist of polynomials with integer coefficients, but it is considered secure because of its theoretical similarity to lattice problems. To show that a cryptosystem is trustworthy, researchers often prove that it is at least as hard to crack as a lattice problem.
A popular approach to lattice-based cryptography is called learning with errors (LWE), which forms the basis for several of the NIST finalists. It was introduced in 2005 by computer scientist Oded Regev at New York University. In its simplest form, it relies on nothing more than arithmetic. To create a public key, the person who wants to receive a message picks a large, secret number — the private key. They then calculate several multiples of that number and add random ‘errors’ to each: the resulting list of numbers is the public key. The sender adds together some of these numbers, plus another number that represents the message, and sends the result. To get the message back, all the receiver has to do is divide the result by the secret key and look at the remainder. It really is high-school-level mathematics.
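To make that concrete, here is a deliberately crude, integers-only Python caricature of the idea, with made-up parameters; real LWE works with vectors modulo a prime and comes with a security proof, but the sketch shows why dividing by the secret and looking at the remainder recovers the message when the errors are small.

```python
# Toy, integers-only caricature of the LWE idea described above.
import random

secret = 1_000_003                       # the receiver's private key

# Public key: noisy multiples of the secret.  The small random errors are
# what hide the secret from everyone else.
public_key = [secret * random.randrange(1, 1000) + random.randrange(-10, 11)
              for _ in range(40)]

def encrypt(bit):
    """Encode one bit as 0 or secret//2, then blind it with noisy multiples."""
    encoded = bit * (secret // 2)
    return sum(random.sample(public_key, 10)) + encoded

def decrypt(ciphertext):
    """Reduce modulo the secret; the result is the encoded bit plus small noise."""
    remainder = ciphertext % secret
    # The noise is tiny compared with secret//2, so round to the nearer codeword.
    return 1 if secret // 4 < remainder < 3 * secret // 4 else 0

for bit in (0, 1, 1, 0):
    assert decrypt(encrypt(bit)) == bit
```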
The profound step was Regev’s proof in 2009 that anyone who breaks this algorithm would also be able to break the seemingly more complex lattice problem. This means that LWE has the same security as lattices, but without having to deal with multi-dimensional vectors. It’s a great formulation, because it makes it easy to work with. Ironically, Regev discovered LWE during an unsuccessful attempt to find a quantum algorithm that would break the lattice problem. Sometimes failure is success.
Researchers have since worked on tackling a drawback of lattice-based systems. From my ETH workshop:
Lattice-based cryptography suffers from huge public keys. Whereas the public key of a current Internet application is the size of a tweet, lattice-based encryption typically requires keys that are as large as one megabyte or more. ‘Structured lattice’ systems use what are essentially algebraic tweaks to drastically reduce the public key’s size, but that can leave them more open to attack.
Today’s best algorithms have to strike a delicate balance between size and efficiency.
Quantum candidates
When the NSA announced its cryptographic algorithms program (as I noted above), it made an unusually candid admission that “quantum computers were a serious risk to privacy”, and it made people in policy circles pay attention to the threat of Q-Day. The NSA doesn’t often talk about crypto publicly, so people noticed.
But NIST had been working on this since 2016, and it managed to bring in some very high-level computer scientists to submit candidate post-quantum algorithms for public-key cryptography, releasing them for scrutiny by the research community. At the same time, NIST called for submissions of digital-signature algorithms — techniques that enable a web server to establish its identity, for example, to prevent scammers from stealing passwords. The same mathematical techniques that enable public-key exchanges usually apply to this problem, too, and current digital-signature systems are similarly vulnerable to quantum attacks.
Teams from academic laboratories and companies, with members from four dozen countries on six continents, submitted 82 algorithms, of which 65 were accepted. True to their creators’ nerd credentials, many of the algorithms’ names had Star Wars, Star Trek or Lord of the Rings themes, such as FrodoKEM, CRYSTALS-Dilithium or NewHope. The algorithms are being judged on both their security and their efficiency, which includes speed of execution and compactness of the public keys. Any algorithm that NIST chooses to standardize will have to be royalty-free.
As soon as the algorithms were submitted, it was open season. Crypto researchers delight in breaking each other’s algorithms, and after the submissions were made public, several of the systems were quickly broken. And although NIST is a U.S. government agency, the broader crypto community has been pitching in; it really is a worldwide effort. This means that, at the end of the process, the surviving algorithms will have gained wide acceptance, and the world is, in effect, going to accept the NIST standards.
Note to my telecom members: there is a separate working group that is monitoring the NIST selection on behalf of the European Telecommunications Standards Institute, an umbrella organization for groups worldwide. There is a huge industry demand for international adoption of a mobile standard.
Still, because cryptography affects sensitive national interests, other countries are keeping a close eye – and some are cautious. The maturity of post-quantum algorithms should not be overestimated: many aspects are still at the research stage. But this should not delay the adoption of post-quantum systems to strengthen current cryptography.
China is said to be planning its own selection process (gee, for some reason, nothing publicly admitted) and a few leaks indicate it has already run its own competition for post-quantum algorithms.
Updating systems
Of NIST’s 15 candidates, 9 are public-key systems and 6 are for digital signatures. Finalists include implementations of NTRU and LWE, as well as another tried-and-tested system that uses the algebra of error-correction techniques. Known as “code-based algorithms”, these systems store data with redundancy that makes it possible to reconstruct an original file after it has been slightly damaged by noise. In cryptography, the data-storage algorithm is the public key, and a secret key is needed to reconstruct an original message.
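As a toy illustration of the error-correction idea (not of an actual code-based cryptosystem such as Classic McEliece, which hides an efficiently decodable code behind a scrambled public matrix), here is a three-fold repetition code in Python that survives one flipped bit per block.

```python
# Toy error correction: a 3x repetition code tolerates one flipped bit per
# block.  In real code-based schemes, efficient decoding -- the step below --
# is only possible with the private key.
import random

def encode(bits):
    """Add redundancy: repeat every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def add_noise(codeword):
    """Flip one bit in each block of three (the 'error')."""
    noisy = codeword[:]
    for i in range(0, len(noisy), 3):
        noisy[i + random.randrange(3)] ^= 1
    return noisy

def decode(noisy):
    """Majority vote inside each block recovers the original bits."""
    return [1 if sum(noisy[i:i + 3]) >= 2 else 0 for i in range(0, len(noisy), 3)]

message = [1, 0, 1, 1, 0]
assert decode(add_noise(encode(message))) == message
```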
In the next few months, the institute will select two algorithms for each application. It will then begin to draft standards for one, while keeping the other as a reserve in case the first choice ends up being broken by an unexpected attack, quantum or otherwise.
Selecting and standardizing algorithms will not be the end of the story. It’s certainly a solid step to bless a candidate, but as a follow-up, the Internet has to agree on how to integrate an algorithm into existing protocols.
Both Cloudflare and Google – often in cooperation – have started running real-life tests of some post-quantum algorithms by including them in some beta versions of the Chrome browser and in server software. Testing is crucial because, for Internet communications to go smoothly, it is not enough to have perfectly compatible servers and browsers. To connect them, data must also run through network devices that might block traffic that they flag as unusual because of its unfamiliar encryption protocols. These systems can be used to prevent hacking or stop users accessing prohibited content. Antivirus software could cause similar problems.
The same issues exist on a broader, Internet-wide scale, for instance in countries that keep track of what users are doing. Network-security workers refer to this rigidity as “protocol ossification”; it has already complicated the transition from RSA, and it might disrupt the roll-out of quantum-secure algorithms, too.
An early test in 2016 implemented NewHope – a structured version of LWE named after the original Star Wars movie – in a Chrome beta version, and it ran without a hitch. The trial showed that the algorithm is usable in practice.
But a larger-scale experiment conducted in 2021 by Google on a different algorithm ran into some snags. Some Internet devices apparently “broke” – network-security parlance for a gadget that blocks a connection when a client’s browser tries to communicate with an unusual protocol. The issue could have been that the browser’s opening message was longer than expected, because it carried a large public key. Algorithms that break the Internet in this way could be shelved until these issues are resolved. Said one NIST researcher:
“Sometimes you run into situations in which some network element misbehaves when you add something new. Persuading vendors to adapt their products — something that can often be done with a simple software update — could take some nudging. This could take a while.”
Still, NIST is optimistic, at least when it comes to Internet browsers. Because only a small number of companies control most browsers and many servers, all that needs to happen is that they change encryption systems. Everybody is pretty confident that once NIST and IETF specify new standards, we’ll be able to roll them out pretty quickly.
Where the transition might be trickier is the multitude of modern connected devices, such as cars, security cameras and all kinds of “smart home” machines, that suffer from protocol ossification — especially those that might have security features hardwired into their chips and that are not replaced often. It takes five to seven years to design a vehicle, and it’s going to be on the road for a decade. Is it still going to be secure ten years down the line?
Either way, initial implementations will be hybrid, using post-quantum technology for added security on top of existing systems. Both post-quantum and current encryption methods should run together for a decade before the new algorithms are used exclusively.
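In practice, a hybrid handshake derives the session key from both a classical shared secret and a post-quantum one, so an attacker has to break both. The Python sketch below uses random placeholder secrets and a keyed hash to show the combining step; the label and names are illustrative, not any standard’s exact key-derivation function.

```python
# Sketch of the "hybrid" idea: the session key depends on BOTH shared
# secrets, so the connection stays safe unless both are broken.
import hashlib
import hmac
import os

classical_secret = os.urandom(32)     # stand-in for an elliptic-curve shared secret
post_quantum_secret = os.urandom(32)  # stand-in for a lattice-KEM shared secret

# Hash the concatenation of both secrets under a keyed hash; an attacker
# would need to recover *both* inputs to learn the session key.
session_key = hmac.new(b"hybrid-kdf-demo",
                       classical_secret + post_quantum_secret,
                       hashlib.sha256).digest()
print(session_key.hex())
```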
Which means if … if … all goes to plan, the Internet will be well into its post-quantum era by the time computing enters its quantum era.
This post-quantum Internet could some day be followed, confusingly, by a quantum Internet — meaning a network that uses the principles of quantum physics to make information exchange hacker-proof. Too much to really discuss in this post.
Researchers estimate that to break cryptosystems, quantum computers will need to have in the order of 1,000 times more computing components (qubits) than they currently do. There’s a very good chance that we’ll have a quantum computer that can do positive things way before they can break crypto.
But that is no reason to be complacent. Fully transitioning all technology to be quantum resistant will take a minimum of five years and whenever Q-day happens, there are likely to be gadgets hidden somewhere that will still be vulnerable. Even if we were to do the best we possibly can, a real quantum computer will be incredibly disruptive.