
## Quantum computing: near- and far-term opportunities

“Quantum computing sounds wonderful, but isn’t it still 10 years away?” A simple question with a subtle answer. And it’s a question that is surely being asked more often these days, given the current excitement surrounding quantum computing (QC). Of course, the transition from a pre-QC world to a post-QC world will not be marked by any single moment or breakthrough. Rather, it will involve a **continued process of scientific and engineering advances, with interim opportunities for practical applications arising along the way to the perceived endpoint of ‘full-scale, universal’ quantum computing.**

In a keynote speech given in late 2017, the **physicist John Preskill coined the term Noisy Intermediate Scale Quantum (NISQ) technology** for the kinds of quantum computers that will be available in the next few years. Here, ‘noisy’ refers to the fact that the devices will be disturbed by what is happening in their environment. For instance, **small changes in temperature, or stray electric or magnetic fields, can cause the quantum information in the computer to be degraded — a process known as** *decoherence*. To overcome this, we need to be able to perform *error correction* — essentially looking at the system to determine which disturbances have occurred, then reversing them.

‘Intermediate Scale’ refers to the fact that the number of qubits will most likely be limited to a few hundred, or perhaps a few thousand. Of course, the word ‘intermediate’ conveys the sense of there being a grander goal. **Ultimately, we are aiming for much larger devices, with several millions of qubits and error correction.** Often referred to as the *fault-tolerant* regime, this is perhaps what the question posed at the beginning of this article would consider to be *quantum computing*. The goal of this article is to explore the differences between the NISQ and fault-tolerant regimes. This will set the stage for future articles, where we will discuss the potential real-world applications of NISQ technology.

### Computing in the NISQ era

The term ‘NISQ’ is often used interchangeably with ‘near term’ when speaking of quantum computing, because **the devices that will be available in the next few years will be small, and will lack error correction.** This is **true regardless of the physical platform being used, be it superconducting qubits, or continuous variable photonic quantum computers.** As you might expect, the fact that these near-term devices will be ‘small’ and ‘noisy’ restricts what we will be able to do with them. Let’s explore why this is the case.

When we say a quantum computer is ‘small’, we are of course not referring to the physical size of the device and all of its supporting apparatus. Rather, we have in mind the number of basic information processing building blocks it contains — typically, the number of qubits. Everyone knows how gargantuan the mainframe computers of the 1950s were compared to today’s laptops. Computationally, however, your laptop is far more powerful, because the number of transistors it contains far exceeds the number in a vintage mainframe — by many orders of magnitude, in fact. You can simply work with much more information, and this allows it to run far more complex operations.

Similarly, the greater the number of qubits a quantum computer contains, the more computational power it can *potentially* deliver. As will become apparent in the remainder of this article, and subsequent articles, **the number of qubits is by no means the only metric that determines the power of a quantum computer.** Nevertheless, it does provide important limits to the problem sizes that can be tackled. For example, to compute molecular electronic configurations in chemistry, each electron orbital might be represented in the quantum computer by one qubit. If the number of qubits available is limited, then the complexity of the molecules that can be simulated is correspondingly restricted.


Turning to the ‘N’ in NISQ, it might seem obvious that noise should be a degrading and limiting factor to the performance of any device, quantum or classical. Indeed, errors can occur in classical computers too, and methods for dealing with them had to be developed. A common scheme involves encoding a single, ‘logical’ bit of information in multiple ‘physical’ bits. Used together with a simple ‘majority voting’ rule for interpreting the logical state from the physical state, classical error correction can be performed.

For example, we might encode a logical 0 in three physical bits as 000, and a logical 1 as 111. Suppose at some point we then observe the state 010 — what is its corresponding logical state? Since there are two 0s and only a single 1, we would democratically interpret this as a logical 0, and the error could be corrected by flipping the 1 in the middle back to a 0. Of course, this scheme only works if the probability of there being *two* errors is entirely negligible. Otherwise, we might end up in the state 011, and using the majority voting rule we’d incorrectly conclude the logical bit value to be 1.
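The majority-vote scheme just described can be sketched in a few lines of Python. The `encode`/`decode` names are purely illustrative, not from any particular library:

```python
def encode(logical_bit, n=3):
    """Encode one logical bit as n identical physical bits."""
    return [logical_bit] * n

def decode(physical_bits):
    """Recover the logical bit by majority vote."""
    return 1 if sum(physical_bits) * 2 > len(physical_bits) else 0

# A single bit-flip on 000 -> 010 is corrected back to logical 0:
assert decode([0, 1, 0]) == 0
# Two flips defeat the vote: 000 -> 011 is (wrongly) read as logical 1:
assert decode([0, 1, 1]) == 1
```

The two assertions mirror the examples in the text: one error is recoverable, two are not.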

**If the probability of a single error occurring is p, then the probability of two (independent) errors is p².** A natural condition for the error correction scheme to work is therefore that the error probability *p* should be much smaller than one, so that *p*² is very close to zero. Alternatively, if you can’t make *p* small enough (perhaps due to technical limitations with the hardware), you might sacrifice more physical bits to encode a single logical bit. For example, if you chose ten physical bits to encode one logical bit, from the initial state 0000000000 there would need to be at least five errors (with probability scaling as *p*⁵) before interpreting the logical state became ambiguous.
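To make the scaling concrete, here is a short Python sketch that computes the exact logical error rate of an *n*-bit repetition code under independent bit-flip errors. Odd *n* is used so the majority vote can never tie (unlike the ten-bit example above, where five flips leave the vote deadlocked):

```python
from math import comb

def logical_error_probability(p, n):
    """Probability that a majority of n physical bits flip,
    given an independent bit-flip probability p per bit (n odd)."""
    k_min = n // 2 + 1  # smallest number of flips that defeats the vote
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

p = 0.01
print(logical_error_probability(p, 3))   # ~3p², far below p itself
print(logical_error_probability(p, 11))  # suppressed even further
```

Increasing the number of physical bits per logical bit drives the logical error rate down rapidly, provided *p* is small to begin with.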

Much of the intuition of the previous paragraphs carries over to quantum error correction schemes, with one particularly important caveat. Observing *qubits* means that we collapse their quantum state — we can’t simply look at them and ask what the logical state of three qubits is without actually changing their state, and thus the information they encode! To address this, more **sophisticated techniques have had to be developed for quantum error correction** — a story we shall tell in a future article.

**The bottom line is that even some of the leading quantum error correction schemes require a significant resource overhead, perhaps around 1000 physical qubits to encode a single logical qubit.** In the near term, where the number of qubits is limited to perhaps at most a few thousand, it follows that error-corrected quantum computation at scale will not be possible.

Crucially, however, this does not imply that NISQ technologies will not be useful for practical purposes. The rate at which errors occur in the system essentially determines how long you can use a quantum computer — or alternatively, how many quantum gates you can perform — before you can be certain that an error actually does occur. **A key focus is now to determine which kinds of computations can be done with relatively few operations (so-called shallow quantum circuits), keeping the probability of errors low, but which can nonetheless deliver an advantage over classical computation.**

**In particular, new classes of algorithms have been developed specifically for near-term quantum computers.**

**Two prominent examples, applicable to gate-based quantum computers, are the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimisation Algorithm (QAOA).** These techniques leverage both classical and quantum hardware, as illustrated in the figure below. A typical problem might consist of finding the lowest energy of a molecule, or minimising a function during training and/or inference in machine learning. The classical computer feeds a set of parameters to the quantum computer, which determine a specific sequence of gate operations to be performed — for example, by microwave pulses on superconducting qubits. This set of operations produces an output quantum state that we measure to determine some property — for instance, the total energy. The outcome of the measurement is communicated back to the classical computer, completing one loop of the computational procedure.

The classical computer then suggests a new set of parameters that it thinks will lead the quantum computer to produce a lower energy output state. With the qubits reset to the state |0>, the new circuit is then executed, and the energy is again measured. The process is repeated until the energy (or cost function value) converges to its apparent minimum.

There are several points to make about this quantum-classical hybrid model of computation. Observe that the classical computer is performing an optimisation loop: it is searching through a sea of parameters, and making update suggestions based on the energy reported from measuring the quantum state. **If the error rates in the quantum computer were sufficiently low, it may not need such external guidance in searching for the lowest energy configuration. In the NISQ era, however, the QPU will generally rely on instructions from the CPU.**

Secondly, **the quantum chip is used to output a number that would be computationally demanding — perhaps even impossible — for a classical computer to determine. Even with shallow circuits and noisy qubits, simulating the behaviour of quantum systems on a classical computer can be intractable; the only option is to use a real quantum computer!**

This idea lies at the heart of **the push to demonstrate quantum advantage (or quantum supremacy), where a quantum computer demonstrably performs a calculation that would be impossible for even the most powerful classical computers, in any reasonable amount of time.** We’ll pick up the story of quantum advantage in our next article.

Lastly, these quantum-classical hybrid algorithms exhibit some degree of robustness to errors. Suppose, for example, that the classical computer keeps suggesting circuit parameters that involve operating on a particularly noisy qubit. If errors that occur in this qubit result in significant energy penalties (or higher cost functions), the algorithm will learn not to use that qubit in subsequent computations.

Now that we understand the basic working procedure and limitations of NISQ computing, the obvious question arises: **for which kinds of problems can we use the technology?** We will explore this question in detail as we continue our focus on applications in the coming months; however, the prime candidates are thought to include quantum chemistry, quantum machine learning, and quantum optimisation.

### Fault-tolerant quantum computing

Let’s move away from the land of imperfect quantum computation — where limitations on system size and noise place significant restrictions on what we can do — and familiarise ourselves with what is over the horizon, in the land of fault tolerance.

Let’s first think about the road to get there. The figure below — produced by John Martinis of Google — maps out the **qubit quality vs. quantity relationship**. The y-axis shows the value of the largest (and thus limiting) error rate in a given quantum computer. The dashed line at 10⁻² denotes what is known as the error correction *threshold*: if the limiting error rate in the system exceeds this number, errors accumulate faster than they can be corrected, and attempting error correction becomes futile. In this light, it is **perfectly clear why simply increasing the number of qubits is not helpful in itself** — as indicated in the figure by “quantity hype”.

As we drop below the error threshold, and reach beyond around 50 or 60 qubits, it quickly becomes impossible to simulate quantum circuits on a classical computer. The blue shaded region corresponds to the regime of NISQ technology, where applications in the next few years will be explored.
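A quick back-of-the-envelope calculation shows why the wall appears at around this qubit count: a full classical simulation must store 2ⁿ complex amplitudes, so memory grows exponentially with the number of qubits.

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to hold a full n-qubit state vector
    (one double-precision complex number per amplitude)."""
    return 2 ** n_qubits * bytes_per_amplitude

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n):.2e} bytes")
# 30 qubits fit in a large laptop's RAM (~17 GB);
# 50 qubits would need roughly 18 petabytes.
```

Each additional qubit doubles the memory requirement, which is why the classically simulable regime ends so abruptly.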

Pushing towards limiting error rates of around one in a thousand, and with enough physical qubits to encode several tens of logical qubits, we reach the realm of useful error-corrected quantum computing. It is here that we start to reach the holy grail — the promised land of ideal quantum computing at scale.

What do we expect to find in this promised land? A large amount of theoretical work over the years allows us to say quite a bit about the rewards in store. In particular, this is where we will be **able to run Shor’s factoring and Grover’s search algorithms\*, perhaps the two canonical textbook examples of the power of quantum computing.** Arguably, a more tantalising prospect is that of digital quantum simulation, which will allow us to use a quantum computer to replicate the behaviour of any other quantum system. This is expected to facilitate the design of new materials, and even of more efficient chemical reactions.

**We also find a plethora of techniques based on the quantum linear systems algorithm\* — also known as the HHL algorithm, after its inventors — which provides an exponentially faster way of solving certain classes of linear equations.** This may bring dramatic boosts to many real-world engineering and mathematics problems, including machine learning on large *classical* data sets; however, thus far there has been little research on practical applications of the algorithm.

A very natural application for large-scale quantum computers is to the analysis of *quantum* data. **At present, there are very few instances where we work with natively quantum data, since we neither produce it nor detect it.** However, **this will change as the use of quantum technologies becomes more widespread**. The output of a quantum computer or simulator is a quantum state, while quantum sensors may eventually allow us to collect quantum data from other sources. Processing and analysing this information will necessarily require large, fault-tolerant quantum computers. One can **envision the emergence of an entirely new ‘quantum data’ economy.**

### Looking to the near term

Having glanced into the far future, let’s bring ourselves back to where we are today. Currently, quantum computers are too small and noisy to be able to do anything a laptop can’t do. However, **in the next few years, NISQ technologies may offer a path to quantum advantage that can be deployed for important applications such as chemistry and machine learning.**

While we focused above on algorithms for gate-based quantum computers, such as VQE and QAOA, there are potentially other means of achieving quantum advantage in the near term. This involves the use of specialised hardware designed to solve only a single type of problem — quantum application-specific chips, if you will. Two prominent examples of such devices are quantum annealers and boson samplers.

Quantum annealers will be the focus of an upcoming article, however our next post will describe the quest to demonstrate quantum advantage — a feat that some believe to be imminent.

QWA is helping to bridge the gap between quantum and business. The go-to for providing suggestions, feedback and questions is our email address info@quantumwa.org.

\* *This is another story for another day, but algorithms such as Shor, Grover, and HHL actually require more than just a fault-tolerant and sufficiently large quantum computer. To maintain the speedups these algorithms offer, we also need an efficient way of loading classical data into a quantum superposition state. There are proposals for a device known as quantum random access memory (qRAM) to achieve this; however, the question of how to actually build a practical qRAM remains very much open.*

from: *https://medium.com/@quantum_wa/quantum-computing-near-and-far-term-opportunities-f8ffa83cc0c9*

Published on Dec 20, 2017

## “Quantum Computing and the Entanglement Frontier” John Preskill, Caltech

Published on Mar 7, 2017

## Boson sampling

**Boson sampling** constitutes a restricted model of non-universal quantum computation introduced by S. Aaronson and A. Arkhipov. It consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to implement a classically hard task using far fewer physical resources than a full linear-optical quantum computing setup. This makes it an outstanding candidate for demonstrating the power of quantum computation in the near term.

[…]

### The Task

Consider a multimode linear-optical circuit of *N* modes that is injected with *M* indistinguishable single photons (*N>M*). Then, the photonic implementation of the boson sampling task consists of generating a sample from the probability distribution of single-photon measurements at the output of the circuit. Specifically, this requires reliable sources of single photons (currently the most widely used ones are parametric down-conversion crystals), as well as a linear interferometer. The latter can be fabricated, e.g., with fused-fiber beam splitters, through silica-on-silicon or laser-written integrated interferometers, or electrically and optically interfaced optical chips. Finally, the scheme also necessitates high-efficiency single-photon-counting detectors, such as those based on current-biased superconducting nanowires, which perform the measurements at the output of the circuit. Therefore, based on these three ingredients, the boson sampling setup does not require any ancillas, adaptive measurements or entangling operations, unlike, for example, the universal optical scheme by Knill, Laflamme and Milburn (the **KLM** scheme). This makes it a non-universal model of quantum computation, and reduces the amount of physical resources needed for its practical realization.
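In the collision-free case, the probability of each output pattern is proportional to |Perm(A)|², where A is the submatrix of the interferometer’s unitary picked out by the occupied input and output modes — and computing matrix permanents is precisely the task believed to be classically hard. As an illustration (not part of the excerpt above), a naive Python permanent reproduces the two-photon Hong–Ou–Mandel effect on a 50:50 beam splitter:

```python
import itertools, math

def permanent(matrix):
    """Naive permanent: sum over all permutations (O(n!), fine for small n)."""
    n = len(matrix)
    return sum(
        math.prod(matrix[i][perm[i]] for i in range(n))
        for perm in itertools.permutations(range(n))
    )

# 50:50 beam splitter unitary:
s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]

# Two photons in, one per input mode; probability of one photon per output mode:
p_coincidence = abs(permanent(U)) ** 2
print(p_coincidence)  # 0.0 — the photons always exit together (they 'bunch')
```

The vanishing coincidence probability is a purely quantum interference effect, and for larger interferometers evaluating these permanents quickly becomes intractable for classical computers.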

more: *https://en.wikipedia.org/wiki/Boson_sampling*
