Recursive Experts: An Efficient Optimal Mixture of Learning Systems in Dynamic Environments

09/19/2020
by Kaan Gokcesu, et al.

Sequential learning systems are used in a wide variety of problems, from decision making to optimization, where they provide a 'belief' (opinion) to nature and then update this belief based on the feedback (result) to minimize (or maximize) some cost or loss (conversely, utility or gain). The goal is to reach an objective by exploiting the temporal relation inherent in nature's feedback (state). By exploiting this relation, specific learning systems can be designed that perform asymptotically optimally for various applications. However, if the framework of the problem is not stationary, i.e., nature's state sometimes changes arbitrarily, the system's past cumulative belief revision may become useless, and the system may fail if it lacks adaptivity. While this adaptivity can be implemented directly in specific cases (e.g., convex optimization), it is mostly not straightforward for general learning tasks. To this end, we propose an efficient optimal mixture framework for general sequential learning systems, which we call the recursive experts for dynamic environments. For this purpose, we design hyper-experts that incorporate the learning systems at our disposal and are recursively merged in a specific way to achieve minimax optimal regret bounds up to constant factors. The multiplicative increase in computational complexity from the initial system to our adaptive system is only a logarithmic-in-time factor.
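The abstract does not spell out the merging rule, so as a rough illustration of the general idea of mixing several sequential learning systems and reweighting them by observed loss, here is a minimal sketch of a standard exponentially weighted (Hedge-style) mixture. It is not the paper's recursive-expert construction; the learner interface (predict/update) and the learning rate eta are assumptions made for this sketch.

```python
import math

class ExpertMixture:
    """A generic exponentially weighted mixture over sequential learners.
    Illustrative only; not the recursive-experts algorithm of the paper."""

    def __init__(self, experts, eta=0.1):
        self.experts = experts                  # objects exposing predict() and update(feedback)
        self.eta = eta                          # learning rate for the exponential weights (assumed)
        self.log_weights = [0.0] * len(experts) # log-domain weights for numerical stability

    def predict(self):
        """Return the weighted average of the experts' current beliefs."""
        preds = [e.predict() for e in self.experts]
        m = max(self.log_weights)
        w = [math.exp(lw - m) for lw in self.log_weights]  # stable normalization
        total = sum(w)
        return sum(wi * p for wi, p in zip(w, preds)) / total

    def update(self, losses, feedback):
        """Downweight each expert in proportion to its incurred loss,
        then pass nature's feedback to every expert."""
        for i, loss in enumerate(losses):
            self.log_weights[i] -= self.eta * loss
        for e in self.experts:
            e.update(feedback)
```

In this kind of scheme the mixture's regret against the best expert grows only logarithmically in the number of experts; the paper's contribution, per the abstract, is a recursive arrangement of such hyper-experts that retains minimax optimal regret in non-stationary environments while increasing the base system's computational complexity by only logarithmic-in-time factors.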
