Learning Modular Robot Control Policies
To make a modular robotic system both capable and scalable, the controller must be as modular as the mechanism. Given the large number of designs that can be generated from even a small set of modules, it is impractical to create a new system-wide controller for each design. Instead, we construct a modular control policy that handles a broad class of designs. We take the view that a module is both form and function, i.e., both mechanism and controller. As the modules are physically reconfigured, the policy automatically reconfigures to match the kinematic structure. This policy is trained with a new model-based reinforcement learning algorithm that interleaves model learning and trajectory optimization to guide policy learning for multiple designs simultaneously. Training the policy on a varied set of designs teaches it how to adapt its behavior to each design. We show that the policy can then generalize to a larger set of designs not seen during training. We demonstrate one policy controlling many designs with different combinations of legs and wheels to locomote both in simulation and on real robots.
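The following is a minimal sketch, not the authors' released code, of the training scheme the abstract describes: per-module sub-policies with shared weights are composed into a design-specific policy, and training interleaves (i) fitting a dynamics model, (ii) trajectory optimization on that model, and (iii) regressing the modular policy toward the optimized trajectories across several designs at once. All names here (ModulePolicy, fit_dynamics_model, optimize_trajectory, OBS_DIM, ACT_DIM, DESIGNS) are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 2                       # per-module observation/action sizes (assumed)
DESIGNS = [["body", "leg", "leg"],            # each design = a list of module types (assumed)
           ["body", "wheel", "wheel", "leg"]]

class ModulePolicy(nn.Module):
    """One small sub-policy per module type; weights are shared across all designs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, ACT_DIM))
    def forward(self, obs):
        return self.net(obs)

# A design's policy is the composition of the sub-policies for its modules,
# so physically re-configuring the hardware re-configures the controller.
module_policies = nn.ModuleDict({t: ModulePolicy() for t in ["body", "leg", "wheel"]})

def design_policy(design, obs_per_module):
    """Run each module's sub-policy on its local observation and stack the actions."""
    return torch.stack([module_policies[t](o) for t, o in zip(design, obs_per_module)])

def fit_dynamics_model(rollouts):
    """Placeholder: fit a learned dynamics model to collected transitions."""
    return lambda state, action: state        # trivial stand-in model

def optimize_trajectory(model, design, horizon=10):
    """Placeholder: model-based trajectory optimization returning (obs, action) targets."""
    obs = torch.randn(horizon, len(design), OBS_DIM)
    act = torch.zeros(horizon, len(design), ACT_DIM)
    return obs, act

optimizer = torch.optim.Adam(module_policies.parameters(), lr=1e-3)

for iteration in range(3):                                  # interleaved training loop
    for design in DESIGNS:                                   # all designs share module weights
        model = fit_dynamics_model(rollouts=None)            # (i) model learning
        target_obs, target_act = optimize_trajectory(model, design)   # (ii) trajectory optimization
        loss = 0.0
        for obs_t, act_t in zip(target_obs, target_act):     # (iii) policy regression toward targets
            loss = loss + ((design_policy(design, obs_t) - act_t) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At test time, a previously unseen design would reuse the same shared sub-policies, composed according to its module list, which is what allows one policy to control many leg/wheel combinations.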