Quantum computing and machine learning are among the buzzwords of our days. Beyond the obvious hype, there is some real substance. Advances in traditional computing have made it possible to achieve remarkable results with machine learning in image analysis and many other fields. Quantum physics, on the other hand, has long been a difficult, almost mystical field, driving amazing progress in mathematics (along with plenty of thoroughly unscientific pseudo-science). Quantum computing is emerging as a new path around some of the limits of traditional computing, including the physical limits of transistor miniaturization. In the last year, people have used quantum computing as layers within neural networks, or treated quantum circuits as naïve Bayes classifiers. In March 2020, Google announced the release of TensorFlow Quantum: a toolset for combining state-of-the-art machine learning techniques with quantum algorithm design.
In summary, the idea is to use quantum computing as a step within a classification system, but we might also think of it the other way around.
That other way around has been pioneered by Q-CTRL, a company split between Sydney and Los Angeles and focused on quantum computing.
The path they have paved focuses on the quantum computing itself, using machine learning to efficiently suppress the impact of noise and imperfections in quantum hardware.
Most quantum computing hardware can perform calculations for less than one millisecond before noise forces a reset, which at the moment makes it inferior to a low-cost laptop. The situation is actually worse than it sounds, as explained in the following section.
When qubits, the quantum version of classical binary bits, are exposed to hardware noise in a quantum computer, the information they carry degrades very easily. This process is known as decoherence. Decoherence causes the information encoded in a quantum computer to become randomized, and it is one of the reasons we are still in the infancy of quantum computing. In the screencast below, I recorded a single-qubit estimate in the ideal case and the same estimate under the effect of hardware noise. Seen on a single qubit it might not look that tragic, but imagine all the qubits necessary to perform a real task and you get an idea of how noisy the results will be compared to even a Raspberry Pi or a mobile phone.
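To see what this randomization means in numbers, here is a toy model I put together (my own illustration, not hardware data) of pure dephasing: the off-diagonal coherences of a single-qubit state decay exponentially with a characteristic time T2, leaving nothing but a coin flip.

import numpy as np

# Toy model of pure dephasing (T2 decay) on a single qubit.
# The qubit starts in the superposition |+> = (|0> + |1>)/sqrt(2),
# whose density matrix is [[0.5, 0.5], [0.5, 0.5]].
T2 = 50e-6  # illustrative coherence time of 50 microseconds

for t in np.linspace(0, 200e-6, 5):
    decay = np.exp(-t / T2)
    # Noise leaves the populations (diagonal) alone but shrinks the
    # off-diagonal coherences, so the state drifts toward a plain coin flip.
    rho_t = np.array([[0.5, 0.5 * decay],
                      [0.5 * decay, 0.5]])
    print(f"t = {t * 1e6:6.1f} us, coherence = {rho_t[0, 1]:.3f}")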
How do we solve decoherence? Since the late '90s, people like Andrew Steane and Peter Shor have proposed schemes that compensate for it by introducing some kind of redundancy. With our current quantum computers this is unfeasible in practice: if you imagine a high number of logical qubits, each of them has to be repeated several times in physical qubits.
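The simplest flavor of such redundancy is a repetition code. The classical sketch below conveys the costly part of the idea, namely that one logical bit requires several physical ones; real quantum error correction is harder still, since it must also protect phase information without directly measuring the encoded state.

import random

def encode(bit, copies=3):
    """Encode one logical bit into `copies` physical bits."""
    return [bit] * copies

def noisy_channel(bits, flip_prob=0.1):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if fewer than half flipped."""
    return int(sum(bits) > len(bits) / 2)

random.seed(0)
trials = 10_000
errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f}")  # well below the 10% physical rate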
Q-CTRL's solution is to create firmware, based on machine learning, that can fix decoherence without the need for extra, unfeasible hardware.
Quantum computing hardware relies on light-matter interaction (optical hardware) to perform quantum logic operations. The composition of these electromagnetic signals then implements the target quantum algorithm, which can be defined and refined by machine learning tools. This feedback loop should reduce decoherence. Truly understanding this approach requires a decent knowledge of quantum computing, which a typical machine learning expert does not have; I do my best to summarize it in the following section.
The Q-CTRL solution is called BOULDER OPAL. It is a Python package, which can be easily installed by typing in the terminal
pip install qctrl
and then simply imported with
from qctrl import Qctrl
The rest (how to set up the Hamiltonian, the dephasing, the controls, and so on) is a topic of its own, which you can study from the tutorial below or read about in the documentation if you are interested. The pivotal point is that to achieve control, and therefore noise reduction, complex gradient-based optimizations can be performed using TensorFlow or other machine learning tools; reinforcement learning in particular is discussed in the following section.
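To give a concrete (if heavily simplified) picture of what a gradient-based pulse optimization looks like, here is a toy TensorFlow sketch. To be clear, this is not the qctrl API: the cost function below is invented for illustration, whereas BOULDER OPAL computes the real infidelity from the system Hamiltonian and the target operation.

import tensorflow as tf

# Toy stand-in for gradient-based pulse optimization (NOT the qctrl API).
# We treat a piecewise-constant pulse as trainable variables and minimize
# a made-up smooth "infidelity" cost by plain gradient descent.
n_segments = 16
pulse = tf.Variable(tf.random.normal([n_segments], stddev=0.1))
target_area = 3.14159  # e.g. a pi rotation, in arbitrary units

def infidelity(p):
    area_error = (tf.reduce_sum(p) - target_area) ** 2   # miss the target rotation
    roughness = tf.reduce_sum((p[1:] - p[:-1]) ** 2)     # penalize jagged pulses
    return area_error + 0.1 * roughness

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(200):
    with tf.GradientTape() as tape:
        cost = infidelity(pulse)
    grads = tape.gradient(cost, [pulse])
    opt.apply_gradients(zip(grads, [pulse]))

print(f"final cost: {infidelity(pulse).numpy():.6f}")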
Among the optimization techniques that can be used to control the noise, reinforcement learning has already been used successfully. Reinforcement learning is an area of machine learning in which intelligent agents take actions in an environment in order to maximize a cumulative reward.
With reinforcement learning in quantum computing, the learner creates an optimized pulse by running experiments on the quantum device itself. Reinforcement learning can even discover and exploit new physical mechanisms that we are not yet aware of. The disadvantage is that the learner cannot tell you how a solution was found, so we will not learn the physics behind the noise suppression in the device.
For people more used to machine learning than to quantum computing, let me build a bridge between the terminology of quantum physics and that of reinforcement learning: the quantum computer is the environment for the learning agent. The agent is tasked with achieving a goal: performing a high-fidelity gate. The agent can take a variety of actions on the environment (in our case, applying pulses to the quantum computer).
The agent learns to reach its goal using a set of measured observables and a reward based on how close it is to that goal. Our reward is derived from the gate fidelity. The learning algorithm uses this information to improve the agent's performance over a number of experiments.
- To understand the environment and its state, the agent deploys a series of pulses to the quantum computer.
- The agent then takes the resulting state and uses this information to decide what action to take next.
In practice, the agent feeds the state to a neural network to decide what action to take on the next segment of the pulse. We quantize the amplitude of the pulse so that the learner chooses from a finite set of options.
A full gate pulse is called an episode; the reward (in reinforcement learning terms) for the agent at the end of the episode is computed from the final measured state. This allows us to boost the error signal above the measurement noise.
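To tie the terminology together, here is a toy end-to-end sketch of such a loop. Everything in it is a stand-in of my own: FakeQuantumDevice replaces experiments on real hardware, its reward replaces a measured gate fidelity, and the learner is a plain REINFORCE policy gradient rather than one of the more sophisticated learners mentioned below.

import math
import numpy as np
import tensorflow as tf

AMPLITUDES = np.linspace(-1.0, 1.0, 9).astype(np.float32)  # quantized action set
N_SEGMENTS = 10  # pulse segments per episode

class FakeQuantumDevice:
    """Toy environment: the 'fidelity' is best when amplitudes sum to pi."""
    def reset(self):
        self.pulse = []
        return np.zeros(N_SEGMENTS, dtype=np.float32)
    def step(self, amplitude):
        self.pulse.append(float(amplitude))
        state = np.zeros(N_SEGMENTS, dtype=np.float32)
        state[:len(self.pulse)] = self.pulse
        done = len(self.pulse) == N_SEGMENTS
        reward = -abs(sum(self.pulse) - math.pi) if done else 0.0
        return state, reward, done

# Small policy network: maps the observed state to a distribution
# over the quantized amplitude choices.
policy = tf.keras.Sequential([
    tf.keras.Input(shape=(N_SEGMENTS,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(AMPLITUDES)),  # one logit per amplitude option
])
opt = tf.keras.optimizers.Adam(learning_rate=1e-2)
env = FakeQuantumDevice()

for episode in range(300):
    state, log_probs, done = env.reset(), [], False
    with tf.GradientTape() as tape:
        while not done:
            logits = policy(state[None, :])
            action = tf.random.categorical(logits, 1)[0, 0]
            log_probs.append(tf.nn.log_softmax(logits)[0, action])
            state, reward, done = env.step(AMPLITUDES[action.numpy()])
        # REINFORCE: raise the probability of the episode's actions in
        # proportion to the end-of-episode reward (the stand-in "fidelity").
        loss = -reward * tf.add_n(log_probs)
    grads = tape.gradient(loss, policy.trainable_variables)
    opt.apply_gradients(zip(grads, policy.trainable_variables))

print("final pulse:", np.round(env.pulse, 2), "reward:", round(reward, 3))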
The aforementioned reinforcement learning can be performed with a wide variety of learners, including deep policy gradient, twin delayed deep deterministic policy gradient (TD3), and soft actor-critic (SAC). All these learners have hyperparameters that must be tuned before they can be used in a real experiment.
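To give a flavor of that tuning step, here is a minimal random-search sketch; the hyperparameter names, the ranges, and the train_and_evaluate function are illustrative stand-ins of my own, not Q-CTRL's actual procedure.

import random

# Illustrative only: random-search tuning of an RL learner's hyperparameters.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "discount": [0.90, 0.95, 0.99],
    "hidden_units": [32, 64, 128],
}

def train_and_evaluate(config):
    """Hypothetical stand-in: train a learner in simulation and return
    its average gate fidelity (here just a deterministic dummy score)."""
    return random.Random(str(sorted(config.items()))).random()

best_config, best_score = None, float("-inf")
for _ in range(20):  # sample 20 random configurations
    config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best configuration:", best_config, "score:", round(best_score, 3))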
This approach of machine-learning-optimized quantum computing has been shown to reduce hardware errors and improve gate fidelity: