A Self-Supervised Miniature One-Shot Texture Segmentation (MOSTS) Model for Real-Time Robot Navigation and Embedded Applications

06/15/2023
by Yu Chen, et al.

Determining the drivable area, or free-space segmentation, is critical for mobile robots to navigate indoor environments safely. However, the lack of coherent markings and structures (e.g., lanes, curbs) in indoor spaces places the burden of traversability estimation heavily on the mobile robot. This paper explores the use of a self-supervised one-shot texture segmentation framework together with an RGB-D camera to achieve robust drivable-area segmentation. With its fast inference speed and compact size, the developed model, MOSTS, is well-suited for real-time robot navigation and various embedded applications. A benchmark study compared MOSTS against existing one-shot texture segmentation models to evaluate its performance. Additionally, a validation dataset was built to assess MOSTS's ability to perform texture segmentation in the wild, where it effectively identified small low-lying objects that were previously undetectable by depth measurements. The study also compared MOSTS with two state-of-the-art (SOTA) indoor semantic segmentation models, both quantitatively and qualitatively. The results show that MOSTS offers comparable accuracy with up to eight times faster inference speed for indoor drivable-area segmentation.
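The abstract does not detail MOSTS's architecture, but the general idea behind one-shot texture segmentation can be illustrated with a minimal sketch: embed a reference patch of the drivable texture, then label each query pixel by its feature similarity to that reference. The feature vectors, cosine-similarity rule, threshold, and helper function below are illustrative assumptions for exposition, not the paper's actual method.

```python
import numpy as np

def one_shot_texture_mask(query_feats, ref_feats, threshold=0.5):
    """Label each query pixel as 'same texture as the reference' via cosine
    similarity between its feature vector and the mean reference embedding.

    query_feats: (H, W, C) dense per-pixel features of the query image
    ref_feats:   (N, C) features sampled from the reference texture patch
    Returns a boolean (H, W) mask (True = drivable-texture match).
    """
    ref = ref_feats.mean(axis=0)
    ref = ref / (np.linalg.norm(ref) + 1e-8)
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    sim = q @ ref  # (H, W) cosine similarity map
    return sim >= threshold

# Toy demo with two synthetic "textures" as constant feature vectors
# (a real system would use learned CNN features instead).
rng = np.random.default_rng(0)
floor = np.array([1.0, 0.0, 0.0])
obstacle = np.array([0.0, 1.0, 0.0])
query = np.tile(floor, (4, 4, 1))
query[1:3, 1:3] = obstacle  # a small non-floor region in the center
ref = np.tile(floor, (8, 1)) + 0.01 * rng.standard_normal((8, 3))
mask = one_shot_texture_mask(query, ref)  # True everywhere except the 2x2 obstacle
```

In this toy setup the floor pixels score near 1.0 against the reference embedding and the obstacle pixels near 0.0, so thresholding cleanly separates the drivable region; the paper's contribution lies in learning features for which real textures behave this way, at a model size small enough for embedded inference.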

Related research

02/03/2019
Real-Time Freespace Segmentation on Autonomous Robots for Detection of Obstacles and Drop-Offs
Mobile robots navigating in indoor and outdoor environments must be able...

11/13/2020
Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis
Analyzing scenes thoroughly is crucial for mobile robots acting in diffe...

12/04/2021
Toward Practical Self-Supervised Monocular Indoor Depth Estimation
The majority of self-supervised monocular depth estimation methods focus...

03/11/2022
Efficient and Robust Semantic Mapping for Indoor Environments
A key proficiency an autonomous mobile robot must have to perform high-l...

08/26/2022
The Foreseeable Future: Self-Supervised Learning to Predict Dynamic Scenes for Indoor Navigation
We present a method for generating, predicting, and using Spatiotemporal...

03/09/2017
Fast and Robust Detection of Fallen People from a Mobile Robot
This paper deals with the problem of detecting fallen people lying on th...

04/05/2022
iSDF: Real-Time Neural Signed Distance Fields for Robot Perception
We present iSDF, a continual learning system for real-time signed distan...
