Information-theoretic bounds on quantum advantage in machine learning

01/07/2021
by Hsin-Yuan Huang, et al.

We study the complexity of training classical and quantum machine learning (ML) models for predicting outcomes of physical experiments. The experiments depend on an input parameter x and involve the execution of a (possibly unknown) quantum process ℰ. Our figure of merit is the number of runs of ℰ during training, disregarding other measures of runtime. A classical ML model performs a measurement and records the classical outcome after each run of ℰ, while a quantum ML model can access ℰ coherently to acquire quantum data; the classical or quantum data is then used to predict outcomes of future experiments. We prove that, for any input distribution 𝒟(x), a classical ML model can provide accurate predictions on average by accessing ℰ a number of times comparable to the optimal quantum ML model. In contrast, for achieving accurate prediction on all inputs, we show that exponential quantum advantage is possible for certain tasks. For example, to predict expectation values of all Pauli observables in an n-qubit system ρ, we present a quantum ML model using only 𝒪(n) copies of ρ and prove that classical ML models require 2^Ω(n) copies.
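To make the scale of the Pauli-prediction task concrete, the sketch below (not the paper's quantum ML model or its lower-bound argument) simply enumerates the 4^n Pauli observables of a small n-qubit state and evaluates tr(Pρ) exactly with NumPy. The helpers `pauli_string` and `random_density_matrix` and the choice n = 2 are illustrative assumptions, not constructions from the paper; the point is only that the number of observables a worst-case predictor must get right grows exponentially with n.

```python
# Minimal illustration (assumed setup, not the paper's protocol):
# enumerate all 4^n Pauli observables of an n-qubit state and compute
# their exact expectation values tr(P rho).
import itertools
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_string(labels):
    """Tensor product of single-qubit Paulis, e.g. labels = ('X', 'Z')."""
    op = np.array([[1.0 + 0j]])
    for label in labels:
        op = np.kron(op, PAULIS[label])
    return op

def random_density_matrix(n, seed=0):
    """A random n-qubit mixed state rho = A A† / tr(A A†) (illustrative)."""
    rng = np.random.default_rng(seed)
    d = 2 ** n
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

n = 2  # illustrative qubit count; the abstract's bounds concern large n
rho = random_density_matrix(n)

# All 4^n Pauli observables -- this count grows exponentially in n,
# which is the regime where the worst-case classical/quantum gap appears.
expectations = {
    "".join(labels): np.real(np.trace(pauli_string(labels) @ rho))
    for labels in itertools.product("IXYZ", repeat=n)
}

print(f"{len(expectations)} Pauli observables for n = {n}")
for name, val in list(expectations.items())[:5]:
    print(f"  <{name}> = {val:+.4f}")
```

Running this for n = 2 lists 16 observables; doubling n squares that count, which is why a strategy that estimates each ⟨P⟩ from separate classical measurement records becomes infeasible, in line with the 2^Ω(n) classical lower bound stated above.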
