CoRL: Environment Creation and Management Focused on System Integration

03/03/2023
by Justin D. Merrick, et al.

Existing reinforcement learning environment libraries use monolithic environment classes, provide shallow methods for altering agent observation and action spaces, and/or are tied to a specific simulation environment. The Core Reinforcement Learning library (CoRL) is a modular, composable, and hyper-configurable environment creation tool. It allows fine-grained control over agent observations, rewards, and done conditions through easy-to-read configuration files, pydantic validators, and a functor design pattern. Its integration pathways allow agents to be quickly implemented in new simulation environments, encourage rapid exploration, and enable the transfer of knowledge from low-fidelity to high-fidelity simulations. A natively multi-agent design and out-of-the-box integration with Ray/RLlib (Liang et al., 2018) make it easy to scale agent complexity and computing power. The code is publicly released and available at https://github.com/act3-ace/CoRL.
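
The configuration-driven, functor-based design described in the abstract can be illustrated with a minimal sketch. The names below (DistanceReward, DistanceRewardValidator, scale, max_distance) are hypothetical and are not taken from CoRL's actual API; they only show the general pattern of pairing a pydantic validator with a callable reward functor that is built from a configuration block.

```python
# Illustrative sketch only: class and field names are hypothetical, not CoRL's API.
from pydantic import BaseModel, field_validator


class DistanceRewardValidator(BaseModel):
    """Validates the configuration block for a hypothetical distance-based reward."""
    scale: float = 1.0
    max_distance: float

    @field_validator("max_distance")
    @classmethod
    def must_be_positive(cls, value: float) -> float:
        if value <= 0:
            raise ValueError("max_distance must be positive")
        return value


class DistanceReward:
    """Functor: configured once from validated parameters, then called each step."""

    def __init__(self, **config) -> None:
        self.config = DistanceRewardValidator(**config)

    def __call__(self, distance: float) -> float:
        # Reward shrinks linearly as the agent approaches max_distance.
        return self.config.scale * max(0.0, 1.0 - distance / self.config.max_distance)


# Parameters would normally come from an easy-to-read YAML/JSON configuration file.
reward_fn = DistanceReward(scale=0.5, max_distance=100.0)
print(reward_fn(distance=25.0))  # 0.375
```

Separating validation (the pydantic model) from evaluation (the functor's `__call__`) surfaces configuration errors at setup time rather than mid-episode, which is the general benefit of the pattern the paper describes.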
