Understanding Contexts Inside Robot and Human Manipulation Tasks through a Vision-Language Model and Ontology System in a Video Stream

by Chen Jiang, et al.
University of Alberta

Manipulation tasks in daily life, such as pouring water, unfold intentionally under specialized manipulation contexts. Processing contextual knowledge in these Activities of Daily Living (ADLs) over time helps us understand manipulation intentions, which are essential for an intelligent robot to transition smoothly between manipulation actions. In this paper, to model the intended concepts of manipulation, we present a vision dataset under a strictly constrained knowledge domain for both robot and human manipulations, in which manipulation concepts and relations are stored taxonomically by an ontology system. We further propose a scheme that generates visual attention together with an evolving knowledge graph filled with commonsense knowledge. The scheme operates on real-world camera streams and fuses an attention-based vision-language model with the ontology system. Experimental results demonstrate that the proposed scheme successfully represents the evolution of an intended object-manipulation procedure for both robots and humans, allowing a robot to mimic human-like intentional behavior by watching real-time videos. We aim to develop this scheme further toward real-world robot intelligence in Human-Robot Interaction.
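To make the two components concrete, the following is a minimal, hypothetical sketch of the data structures the abstract implies: a taxonomic (is-a) ontology of manipulation concepts, and an evolving knowledge graph that accumulates time-stamped triples as frames of a video stream are observed. All class and method names here are illustrative assumptions, not the paper's actual implementation, and the real system would populate the triples from the vision-language model's output rather than by hand.

```python
class Ontology:
    """Stores manipulation concepts in a taxonomy via child -> parent links."""
    def __init__(self):
        self.parent = {}

    def add(self, concept, parent=None):
        self.parent[concept] = parent

    def ancestors(self, concept):
        # Walk up the is-a chain, e.g. pour -> manipulation.
        chain = []
        while concept is not None:
            chain.append(concept)
            concept = self.parent.get(concept)
        return chain


class KnowledgeGraph:
    """Evolving graph: each observed triple is stamped with its video frame."""
    def __init__(self, ontology):
        self.ontology = ontology
        self.triples = []  # list of (frame, subject, relation, object)

    def observe(self, frame, subj, rel, obj):
        # In the full scheme, (subj, rel, obj) would come from the
        # vision-language model's per-frame predictions.
        self.triples.append((frame, subj, rel, obj))

    def history(self, subj):
        # Recover the evolution of a manipulation procedure for one entity.
        return [(f, r, o) for f, s, r, o in self.triples if s == subj]


# Toy example: a pouring-water procedure unfolding over a video stream.
ontology = Ontology()
ontology.add("manipulation")
ontology.add("grasp", parent="manipulation")
ontology.add("pour", parent="manipulation")

kg = KnowledgeGraph(ontology)
kg.observe(0, "hand", "grasp", "cup")
kg.observe(30, "hand", "pour", "water")

print(ontology.ancestors("pour"))   # ['pour', 'manipulation']
print(kg.history("hand"))
```

The taxonomy lets the system generalize (any "pour" event is also a "manipulation" event), while the frame-stamped triples capture how the intended procedure evolves over time.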


