Automatic Trade-off Adaptation in Offline RL
Recently, offline RL algorithms have been proposed that remain adaptive at runtime. For example, the LION algorithm <cit.> provides the user with an interface to set the trade-off between behavior cloning and optimality w.r.t. the estimated return at runtime. Experts can then use this interface to adapt the policy behavior to their preferences and find a good trade-off between conservatism and performance optimization. Since expert time is precious, we extend the methodology with an autopilot that automatically finds a suitable parameterization of the trade-off, yielding a new algorithm that we term AutoLION.
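To make the trade-off concrete, the following is a minimal, hypothetical sketch of a LION-style objective: a single parameter `lam` interpolates between a behavior-cloning term and a return-maximization term, and a toy "autopilot" picks the largest `lam` whose policy still stays close to the data. The function names, the distance-threshold heuristic, and all numeric choices are illustrative assumptions, not the actual LION or AutoLION implementation.

```python
import numpy as np

def tradeoff_loss(policy_action, data_action, estimated_return, lam):
    """Illustrative LION-style objective.

    lam = 0.0 -> pure behavior cloning (maximally conservative),
    lam = 1.0 -> pure maximization of the estimated return.
    """
    bc_loss = np.mean((policy_action - data_action) ** 2)  # imitation term
    return (1.0 - lam) * bc_loss - lam * estimated_return

def autopilot_select_lambda(bc_distances, threshold=0.1):
    """Toy autopilot: given the distance of each candidate policy to the
    behavior policy (one per evenly spaced lam in [0, 1]), return the
    largest lam whose policy remains within the trust threshold."""
    candidates = np.linspace(0.0, 1.0, len(bc_distances))
    safe = [l for l, d in zip(candidates, bc_distances) if d <= threshold]
    return max(safe) if safe else 0.0
```

In this sketch, the autopilot replaces the expert's manual tuning: instead of a human moving the slider, a proximity criterion selects the most performance-oriented setting that still respects conservatism.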