SACPlanner: Real-World Collision Avoidance with a Soft Actor Critic Local Planner and Polar State Representations

03/21/2023
by Khaled Nakhleh et al.

We study the training performance of ROS local planners based on Reinforcement Learning (RL), and the trajectories they produce on real-world robots. We show that recent enhancements to the Soft Actor-Critic (SAC) algorithm, such as RAD and DrQ, reach near-perfect training performance after only 10,000 episodes. We also observe that, on real-world robots, the resulting SACPlanner is more reactive to obstacles than traditional ROS local planners such as the Dynamic Window Approach (DWA).
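The abstract does not spell out the polar state encoding that the title refers to, but a minimal sketch of the idea is to rasterize the robot's 2-D laser scan and local goal into a polar image that a convolutional SAC policy can consume. Everything below (the function name, channel layout, bin counts, and range limit) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def polar_state(ranges, angle_min, angle_increment, goal_xy,
                max_range=3.5, n_r=64, n_theta=64):
    """Rasterize a planar laser scan and a local goal into a polar image.

    Returns a (2, n_r, n_theta) float32 array: channel 0 marks cells hit
    by scan returns (binned by range and bearing), channel 1 marks the
    goal cell. Shapes and bin counts are illustrative choices only.
    """
    state = np.zeros((2, n_r, n_theta), dtype=np.float32)
    angles = angle_min + angle_increment * np.arange(len(ranges))

    # Obstacle channel: keep finite returns inside the sensing range
    # and bin each one by (range, bearing).
    valid = np.isfinite(ranges) & (ranges < max_range)
    r_idx = (ranges[valid] / max_range * (n_r - 1)).astype(int)
    t_idx = (((angles[valid] + np.pi) % (2 * np.pi))
             / (2 * np.pi) * (n_theta - 1)).astype(int)
    state[0, r_idx, t_idx] = 1.0

    # Goal channel: a single cell at the goal's polar coordinates,
    # clipped to the maximum sensing range.
    g_r = min(np.hypot(goal_xy[0], goal_xy[1]), max_range)
    g_t = (np.arctan2(goal_xy[1], goal_xy[0]) + np.pi) % (2 * np.pi)
    state[1, int(g_r / max_range * (n_r - 1)),
          int(g_t / (2 * np.pi) * (n_theta - 1))] = 1.0
    return state

# Example: a 360-beam scan with an obstacle ahead, goal 1.2 m away.
scan = np.full(360, np.inf)
scan[170:190] = 0.8
s = polar_state(scan, -np.pi, 2 * np.pi / 360, (1.2, 0.3))
```

In such a setup, the SAC actor network would map this image to continuous (linear, angular) velocity commands, which a ROS local-planner plugin would publish as a geometry_msgs/Twist.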


