Optimized algorithms plus cloud-based quantum computers actually work.
First it was the electron behavior in a hydrogen molecule, then beryllium dihydride joined the club. Now, quantum computers have been used to calculate some of the properties of an atomic nucleus: the deuterium nucleus to be precise.
What we are witnessing are two concurrent and useful processes. The first, which we have covered extensively, is the development and availability of quantum computers. But, I’ve not really discussed the second at all: the development of algorithms.
You see, theorists—the potential users of quantum computers—have a dilemma. Quantum computers hold a lot of promise. It is highly likely that a good quantum computer can calculate the properties of things like molecules and atomic nuclei much more efficiently than a classical computer. Unfortunately, the current generation of quantum computers, especially those that the average theorist can get access to, are rather limited. This gives the theorists a challenge: can they make computations less resource-intensive so that they can be performed on the currently available hardware?
Most of you will be thinking, well, duh, of course, this happens all the time. True, but it happens all the time with classical computers. What we are seeing now is that same process being extended to quantum computing algorithms.
Properties of the nucleus?
The nucleus is a scary place for people like me, who prefer the gentler world of whole atoms and molecules. A nucleus consists of protons and neutrons that are bound together by the strong force. The strong force’s range is so short that protons and neutrons basically have to be within a few femtometers (10⁻¹⁵ m) of each other before they stick together. Despite this, the nucleus has structure.
Picture a deuterium nucleus: it only has one proton and one neutron. The two are not stuck to each other like old leftovers at the back of your fridge, though. It is more like they are attached via a rubber band and vibrate around each other. Given a bit of energy (via an X-ray or a gamma ray), the vibrations will get faster. The bond that holds the proton and neutron together can also snap, like an overstretched rubber band, causing the nucleus to fly apart. A sufficiently energetic gamma ray can cause this to happen.
What we are interested in knowing is how much energy it takes to go from one vibrational state to the next. We want to know the energy at which the nucleus will fall apart. We would also like to know the minimum energy of the nucleus.
Compressing the calculation
The issue with calculating these properties is that it takes a lot of resources. The energy landscape—the rubber band that holds the neutron and proton together—is a sum of a potentially infinite number of terms. The more terms we keep in that sum, the more states of the two nucleons we can describe.
Every state kept in the calculation has to be represented on the quantum computer, so even a small calculation requires a lot of qubits.
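To see why the length of that sum matters, here is a classical toy model (my own illustration, not the nuclear Hamiltonian from the paper): a harmonic oscillator with a small quartic perturbation, where the energy operator is kept to only `n_basis` terms. The ground-state energy settles down as more terms are kept—this convergence-versus-size trade-off is the same one the limited qubit count forces.

```python
import numpy as np

def ground_energy(n_basis, lam=0.1):
    """Lowest eigenvalue of a toy oscillator Hamiltonian truncated to n_basis states."""
    # Position operator in the oscillator basis: <n| x |n+1> = sqrt((n+1)/2)
    n = np.arange(n_basis - 1)
    x = np.zeros((n_basis, n_basis))
    x[n, n + 1] = np.sqrt((n + 1) / 2.0)
    x[n + 1, n] = x[n, n + 1]
    # Unperturbed energies (natural units): E_n = n + 1/2, plus a quartic term
    h = np.diag(np.arange(n_basis) + 0.5) + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(h)[0]

# Keeping more terms changes the answer less and less
for n_basis in (4, 8, 16, 32):
    print(n_basis, ground_energy(n_basis))
```

The point of the sketch is only the shape of the trade-off: chop the sum too early and the answer is wrong; keep enough terms and it converges, but each extra term costs resources.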
There is a second consideration when it comes to quantum computations, though: how many and what type of operations do you need to perform to complete the calculation? It’s not just a matter of having a limited number of qubits; over time, they’ll lose their ability to hold quantum information, and the amount of time this takes depends on the hardware that’s being used. All algorithms have to work within this limitation, which is often considered in terms of what’s called computational depth (or volume).
To reduce the number of operations, the researchers noted that many two-qubit operations could be replaced by single-qubit operations. To understand how, it helps to think of a qubit as an arrow that points to a location on a sphere. We cannot directly observe which way the arrow points; a measurement only returns one of two outcomes, with probabilities set by the arrow's direction.
A single-qubit operation rotates the arrow by a fixed amount. We still don’t know where the arrow is pointing, but we do know how much it has changed. A two-qubit operation flips the arrow of the targeted qubit, depending on the direction of the arrow of the control qubit. (Here, we have no idea where any of the arrows are pointing.)
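These arrow rotations and conditional flips are just small matrices. A minimal numpy sketch, using the standard single-qubit rotation about the x axis and the standard two-qubit CNOT gate:

```python
import numpy as np

# Single-qubit rotation about the x axis by angle theta
def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

# Two-qubit CNOT: flip the target qubit only if the control qubit is |1>
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)   # |0>
one = np.array([0, 1], dtype=complex)    # |1>

# Rotating |0> by pi about x lands on |1> (up to an unobservable phase)
print(np.abs(rx(np.pi) @ zero) ** 2)     # measurement probabilities ~ [0, 1]

# CNOT with the control in |1>: |10> becomes |11>
state = np.kron(one, zero)
print(np.abs(cnot @ state) ** 2)         # probabilities of |00>, |01>, |10>, |11>
```

A single-qubit gate only touches one 2-entry arrow; the two-qubit gate acts on the joint 4-entry state, which is why the control qubit's state (and its errors) influence the target.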
Qubits are not perfect, though. This essentially means that the arrow doesn’t have a precise direction; there’s some margin of error in its direction. For a single-qubit operation, this is not so bad. The operations we perform in rotating it are reasonably precise, so the error does not grow too quickly. But, for two-qubit operations, the error grows more rapidly because the error in the control qubit is copied to the target qubit. So, reducing the number of qubits involved in each operation allows a quantum computer to perform longer calculations.
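A back-of-the-envelope model makes the point. Assume (purely for illustration; these numbers are made up, not hardware specs) that each gate runs cleanly with some fixed probability. The chance that a whole circuit survives then shrinks geometrically with the gate count, and far faster when the per-gate error rate is the larger two-qubit one:

```python
# Toy error model: each gate succeeds independently with probability (1 - p),
# so an n-gate circuit runs cleanly with probability (1 - p)**n.
p_single = 0.001   # hypothetical single-qubit gate error rate
p_two = 0.02       # hypothetical two-qubit gate error rate (much worse)

def success(n_gates, p):
    return (1 - p) ** n_gates

for n in (10, 100, 1000):
    print(n, success(n, p_single), success(n, p_two))
```

Under these made-up rates, a thousand single-qubit gates still mostly succeed, while a thousand two-qubit gates essentially never do, which is why trading two-qubit operations for single-qubit ones buys circuit depth.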
And that is what the researchers did. They reduced the calculation of nucleon energy levels to mostly single-qubit operations, with just a few two-qubit ones thrown in. From this, they were able to calculate the ground state energy and estimate the binding energy (the energy required to break up the nucleus) for a deuterium nucleus.
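The variational idea behind this kind of ground-state calculation can be caricatured classically: prepare a family of trial states, evaluate the energy of each, and keep the lowest. Here is a sketch with a made-up 2x2 Hamiltonian standing in for the real nuclear one (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical 2x2 Hamiltonian; any real symmetric matrix works here
H = np.array([[-1.0, 0.5],
              [0.5, 0.3]])

# One-parameter trial state |psi(t)> = (cos t, sin t)
def energy(t):
    psi = np.array([np.cos(t), np.sin(t)])
    return psi @ H @ psi          # expectation value of the energy

# Sweep the parameter and keep the angle with the lowest energy
angles = np.linspace(0, np.pi, 1000)
best = min(angles, key=energy)

print(energy(best))                   # variational estimate
print(np.linalg.eigvalsh(H)[0])       # exact ground-state energy
```

On a quantum computer, the rotation that prepares the trial state is done with gates and the energy is estimated from measurements, but the loop—adjust a parameter, measure the energy, keep the minimum—is the same.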
As with all quantum computations, the results are statistical in nature, so the researchers have to perform the computation many times and take the average result. In this case, the researchers made use of two quantum computers—the IBM QX5 and the Rigetti 19Q—via their publicly available cloud computing APIs. This limited the number of computations that they could perform. Despite this, they obtained results within a few percent of the experimental values.
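The averaging step looks like this in miniature. Assuming (for illustration) a qubit that reads out 0 with probability 0.8, repeated shots converge on the expectation value 2 * 0.8 - 1 = 0.6, and fewer shots means a noisier estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

p0 = 0.8                 # hypothetical probability of measuring |0>
exact = 2 * p0 - 1       # expectation value: (+1)*p0 + (-1)*(1 - p0)

for shots in (100, 10_000, 1_000_000):
    outcomes = rng.random(shots) < p0              # True -> measured |0>
    estimate = np.mean(np.where(outcomes, 1.0, -1.0))
    print(shots, estimate)
```

This is why a cap on how many runs the cloud APIs allow translates directly into a cap on the precision of the final answer.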
The calculation itself is nothing special. This particular nucleus has long been solvable with classical computers. The point was to start developing ways to fit calculations on small quantum computers. And this is important, because while many companies and research labs are developing quantum computers with more qubits, they’re not increasing the number of operations we can perform before a large error builds up on the qubits. That means all those new qubits have to be used to correct errors rather than perform calculations.
This does not mean that quantum computers will be limited to a low number of qubits or a low number of logic operations indefinitely. Instead, think of it as accepting that hardware progress may be slow, so it is worth figuring out how to make the most of the machines we already have.
Physical Review Letters, 2018, DOI: 10.1103/PhysRevLett.120.210501