Resource frugal optimizer for quantum machine learning

11/09/2022
by   Charles Moussa, et al.

Quantum-enhanced data science, also known as quantum machine learning (QML), is of growing interest as an application of near-term quantum computers. Variational QML algorithms have the potential to solve practical problems on real hardware, particularly when involving quantum data. However, training these algorithms can be challenging and calls for tailored optimization procedures. Specifically, QML applications can require a large shot-count overhead due to the large datasets involved. In this work, we advocate for simultaneous random sampling over both the dataset as well as the measurement operators that define the loss function. We consider a highly general loss function that encompasses many QML applications, and we show how to construct an unbiased estimator of its gradient. This allows us to propose a shot-frugal gradient descent optimizer called Refoqus (REsource Frugal Optimizer for QUantum Stochastic gradient descent). Our numerics indicate that Refoqus can save several orders of magnitude in shot cost, even relative to optimizers that sample over measurement operators alone.
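As an illustration of the sampling idea described above, the following toy sketch (a classical NumPy stand-in, not the paper's code; all names and values here are hypothetical) shows how drawing random (data point, measurement operator) pairs, with operators sampled in proportion to their weights and then reweighted, gives an unbiased estimate of a full double-sum loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the loss terms: in the QML setting, terms[i, j] would be
# the measured expectation value of operator O_j on the circuit output for
# data point x_i. These are illustrative placeholders only.
N, M = 8, 5                              # dataset size, number of operators
terms = rng.normal(size=(N, M))          # stand-in for E_ij(theta)
coeffs = rng.uniform(-1.5, 1.5, size=M)  # operator weights c_j

# Full (deterministic) loss: L = (1/N) * sum_i sum_j c_j * E_ij
exact = (terms * coeffs).sum(axis=1).mean()

def sampled_estimate(k, rng):
    """Unbiased estimate of L from k random (data point, operator) pairs.

    Data indices are drawn uniformly; operator indices are drawn with
    probability proportional to |c_j| and reweighted, so the estimator
    stays unbiased (importance sampling over the operators)."""
    p = np.abs(coeffs) / np.abs(coeffs).sum()
    i = rng.integers(N, size=k)     # uniform over data points
    j = rng.choice(M, size=k, p=p)  # weighted over operators
    vals = np.sign(coeffs[j]) * np.abs(coeffs).sum() * terms[i, j]
    return vals.mean()

est = sampled_estimate(200_000, rng)
print(exact, est)  # the two values agree up to sampling noise
```

In the optimizer itself, the same construction would be applied to the gradient components (e.g., via parameter-shift terms) rather than the loss value, so each descent step spends measurement shots only on the sampled (data, operator) pairs.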

Related research

- Laziness, Barren Plateau, and Noise in Machine Learning (06/19/2022)
  We define laziness to describe a large suppression of variational parame...

- Toward Physically Realizable Quantum Neural Networks (03/22/2022)
  There has been significant recent interest in quantum neural networks (Q...

- Adaptive shot allocation for fast convergence in variational quantum algorithms (08/23/2021)
  Variational Quantum Algorithms (VQAs) are a promising approach for pract...

- Stochastic gradient descent for hybrid quantum-classical optimization (10/02/2019)
  Within the context of hybrid quantum-classical optimization, gradient de...

- Noise can be helpful for variational quantum algorithms (10/13/2022)
  Saddle points constitute a crucial challenge for first-order gradient de...

- L4: Practical loss-based stepsize adaptation for deep learning (02/14/2018)
  We propose a stepsize adaptation scheme for stochastic gradient descent....

- Towards provably efficient quantum algorithms for large-scale machine-learning models (03/06/2023)
  Large machine learning models are revolutionary technologies of artifici...
