Synthesized Trust Learning from Limited Human Feedback for Human-Load-Reduced Multi-Robot Deployments

by Yijiang Pang, et al.

Human multi-robot system (MRS) collaboration is showing potential across a wide range of application scenarios, as it integrates human cognitive skills with the powerful capabilities a robot team gains from its multi-member structure. However, because human cognitive capacity is limited, a human cannot simultaneously monitor multiple robots and identify the abnormal ones, which largely limits the efficiency of human-MRS collaboration. There is an urgent need to proactively reduce unnecessary human engagement and, in turn, human cognitive load. Human trust in human-MRS collaboration reveals human expectations of robot performance. Based on trust estimation, work between the human and the MRS can be reallocated so that the MRS self-monitors and requests human guidance only in critical situations. Inspired by this, a novel Synthesized Trust Learning (STL) method was developed to model human trust in the collaboration. STL explores two aspects of human trust (trust level and trust preference) while accelerating convergence by integrating active learning to reduce human workload. To validate the method, tasks of "searching for victims in a city-rescue context" were designed in an open-world simulation environment, and a user study with 10 volunteers was conducted to collect real human trust feedback. The results showed that, by maximally utilizing human feedback, STL achieved higher trust-modeling accuracy from only a few feedback samples, effectively reducing the human interventions needed to model trust accurately and therefore the human cognitive load in the collaboration.
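The core idea of integrating active learning is that the system queries the human only where its trust estimate is least certain, rather than after every robot episode. A minimal sketch of that querying loop is below, assuming a simple Beta-distribution trust model per robot and uncertainty sampling as the query criterion; the `BetaTrustModel` class, the simulated human, and the selection rule are illustrative stand-ins, not the paper's actual STL model.

```python
import random


class BetaTrustModel:
    """Hypothetical per-robot trust estimate as a Beta distribution;
    a stand-in for the paper's (unspecified here) trust model."""

    def __init__(self):
        self.alpha = 1.0  # pseudo-count of trust-confirming feedback
        self.beta = 1.0   # pseudo-count of trust-violating feedback

    def mean(self):
        # Expected trust level under the Beta posterior.
        return self.alpha / (self.alpha + self.beta)

    def variance(self):
        # Posterior variance: used as the uncertainty measure.
        a, b = self.alpha, self.beta
        return a * b / ((a + b) ** 2 * (a + b + 1))

    def update(self, trusted: bool):
        # Incorporate one binary human feedback sample.
        if trusted:
            self.alpha += 1.0
        else:
            self.beta += 1.0


def select_query(models):
    """Active-learning step: ask the human about the robot whose
    trust estimate is currently most uncertain (uncertainty sampling)."""
    return max(range(len(models)), key=lambda i: models[i].variance())


if __name__ == "__main__":
    # Usage: three robots; a simulated human trusts robot 0, is
    # ambivalent about robot 1, and distrusts robot 2.
    random.seed(0)
    models = [BetaTrustModel() for _ in range(3)]
    human_trust = [0.9, 0.5, 0.1]  # hypothetical ground-truth probabilities
    for _ in range(20):
        i = select_query(models)  # one query per round, not one per robot
        models[i].update(random.random() < human_trust[i])
    print([round(m.mean(), 2) for m in models])
```

Because only the most uncertain robot is queried each round, feedback naturally concentrates on robots whose trustworthiness is still ambiguous, which is how this style of active learning keeps the number of human interventions small.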


