Learning from many trajectories

03/31/2022
by Stephen Tu, et al.

We initiate a study of supervised learning from many independent sequences ("trajectories") of non-independent covariates, reflecting tasks in sequence modeling, control, and reinforcement learning. Conceptually, our multi-trajectory setup sits between two traditional settings in statistical learning theory: learning from independent examples and learning from a single auto-correlated sequence. Our conditions for efficient learning generalize the former setting: trajectories must be non-degenerate in ways that extend standard requirements for independent examples. They do not require that trajectories be ergodic, long, or strictly stable. For linear least-squares regression, given n-dimensional examples produced by m trajectories, each of length T, we observe a notable change in statistical efficiency as the number of trajectories increases from a few (namely m ≲ n) to many (namely m ≳ n). Specifically, we establish that the worst-case error rate of this problem is Θ(n / m T) whenever m ≳ n. Meanwhile, when m ≲ n, we establish a (sharp) lower bound of Ω(n^2 / m^2 T) on the worst-case error rate, realized by a simple, marginally unstable linear dynamical system. A key upshot is that, in domains where trajectories regularly reset, the error rate eventually behaves as if all of the examples were drawn independently from their marginals. As a corollary of our analysis, we also improve guarantees for the linear system identification problem.
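To make the setup concrete, here is a minimal simulation sketch (not the authors' code; the dimensions, noise scale, and identity transition matrix are illustrative assumptions): m independent trajectories are generated by a marginally unstable linear dynamical system, and the transition matrix is recovered by pooled least squares over all m T examples. In the many-trajectory regime m ≳ n, the squared estimation error should scale on the order of n / (m T), up to constants and the noise variance.

```python
# Illustrative sketch of the multi-trajectory least-squares setup.
# Trajectories follow x_{t+1} = A x_t + w_t with A = I (marginally
# unstable: all eigenvalues on the unit circle). Dimension n, counts
# m and T, and the noise scale 0.1 are arbitrary choices for the demo.
import numpy as np

rng = np.random.default_rng(0)

n, m, T = 5, 200, 50   # state dimension, number of trajectories, length
A = np.eye(n)          # marginally unstable linear dynamics

X, Y = [], []
for _ in range(m):                 # each trajectory resets independently
    x = rng.normal(size=n)
    for _ in range(T):
        x_next = A @ x + 0.1 * rng.normal(size=n)
        X.append(x)
        Y.append(x_next)
        x = x_next

X, Y = np.array(X), np.array(Y)    # pooled examples, shape (m*T, n)

# Pooled least squares: Y ≈ X @ A.T, so solve for B = A.T and transpose.
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

err = np.linalg.norm(A_hat - A) ** 2
print(f"squared estimation error: {err:.2e}")
print(f"n / (m T) reference scale: {n / (m * T):.2e}  (m >> n regime)")
```

Repeating the experiment while varying m above and below n would, per the abstract's dichotomy, show the error transitioning between the n^2 / m^2 T and n / m T regimes.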
