When We First Met: Visual-Inertial Person Localization for Co-Robot Rendezvous

06/17/2020
by Xi Sun, et al.

We aim to enable robots to visually localize a target person through the aid of an additional sensing modality – the target person's 3D inertial measurements. The need for such technology may arise when a robot is to meet a person in a crowd for the first time or when an autonomous vehicle must rendezvous with a rider amongst a crowd without knowing the appearance of the person in advance. A person's inertial information can be measured with a wearable device such as a smartphone and can be shared selectively with an autonomous system during the rendezvous. We propose a method to learn a visual-inertial feature space in which the motion of a person in video can be easily matched to the motion measured by a wearable inertial measurement unit (IMU). The transformation of the two modalities into the joint feature space is learned through the use of a contrastive loss which forces inertial motion features and video motion features generated by the same person to lie close in the joint feature space. To validate our approach, we compose a dataset of over 60,000 video segments of moving people along with wearable IMU data. Our experiments show that our proposed method is able to accurately localize a target person with 80.7% accuracy.
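The contrastive objective described above can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function name, feature dimensionality, and margin value are illustrative assumptions. The idea is simply that embeddings of a video clip and an IMU trace from the same person incur a loss proportional to their distance, while mismatched pairs are penalized only when they fall within a margin of each other.

```python
import numpy as np

def contrastive_loss(video_feat, imu_feat, same_person, margin=1.0):
    """Pairwise contrastive loss over joint visual-inertial embeddings.

    Matching pairs (same_person=True) are pulled together: loss = d^2.
    Mismatched pairs are pushed at least `margin` apart:
    loss = max(0, margin - d)^2.
    (Hypothetical sketch; the paper's network and margin are not specified here.)
    """
    d = np.linalg.norm(np.asarray(video_feat) - np.asarray(imu_feat))
    if same_person:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Toy 2-D embeddings standing in for the learned features
v = np.array([0.9, 0.1])        # video motion embedding
i_match = np.array([0.8, 0.2])  # IMU embedding, same person
i_other = np.array([0.2, 0.7])  # IMU embedding, different person

loss_pos = contrastive_loss(v, i_match, same_person=True)   # small: pair is close
loss_neg = contrastive_loss(v, i_other, same_person=False)  # nonzero: pair within margin
```

Minimizing this loss over many (video, IMU) pairs shapes the joint feature space so that, at rendezvous time, the target can be localized by finding the detected person whose video embedding lies nearest to the shared IMU embedding.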

