Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning

01/14/2022
by Phillip Swazinna, et al.

Offline reinforcement learning (RL) algorithms are often designed with environments such as MuJoCo in mind, in which the planning horizon is extremely long and no noise exists. We compare model-free, model-based, and hybrid offline RL approaches on various industrial benchmark (IB) datasets to test the algorithms in settings closer to real-world problems, including complex noise and partially observable states. We find that on the IB, hybrid approaches face severe difficulties and that simpler algorithms, such as rollout-based algorithms or model-free algorithms with simpler regularizers, perform best on the datasets.
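To illustrate what a "simpler regularizer" for a model-free offline RL method can look like, here is a minimal sketch of a behavior-cloning-penalized actor update in the spirit of TD3+BC. This is not code from the paper; the network sizes, the synthetic batch, and the trade-off weight are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Illustrative dimensions and networks (assumptions, not from the paper).
obs_dim, act_dim = 20, 3
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)

# Stand-in for a batch sampled from an offline dataset (random here for illustration).
batch_obs = torch.randn(256, obs_dim)          # logged states
batch_act = torch.rand(256, act_dim) * 2 - 1   # actions chosen by the data-collecting policy

# Actor update: maximize the critic's Q-value while penalizing deviation
# from the dataset actions (a simple behavior-cloning regularizer).
pi = actor(batch_obs)
q = critic(torch.cat([batch_obs, pi], dim=-1))
lam = 2.5 / q.abs().mean().detach()            # adaptive trade-off weight (illustrative)
loss = -(lam * q).mean() + ((pi - batch_act) ** 2).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The single scalar penalty keeps the learned policy close to the behavior policy without requiring a learned dynamics model, which is the kind of low-complexity regularization the abstract contrasts with the more involved hybrid approaches.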
