Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping

by Ali Shafti, et al.

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. A main issue with these systems is the complexity of their low-level control, and how to translate it into simpler, higher-level commands that are easy and intuitive for a human user. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, to create intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions, implement lower-level motion actions and achieve higher-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and an action-grammars-based implementation of action sequences through the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68±0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, full implementation of pick-and-place tasks in 96% of cases, and of pick-and-pour tasks in 76% of cases. Finally, we discuss our results and the future work needed to improve the system.
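The action-grammars idea described above can be illustrated with a minimal sketch: a high-level task such as "pick and place" expands into an ordered sequence of low-level robot actions, and 3D gaze fixations supply the targets for the actions that need a location. All names, tasks and action primitives here are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch: expand a high-level task into low-level actions,
# attaching a 3D gaze point to each action that requires a target.
# The grammar, action names and task names are illustrative assumptions.

GRAMMAR = {
    "reach": ["move_to"],
    "pick_and_place": ["move_to", "grasp", "move_to", "release"],
    "pick_and_pour": ["move_to", "grasp", "move_to", "pour"],
}

NEEDS_TARGET = {"move_to"}  # actions that consume a gazed-at 3D point


def plan(task, gaze_points):
    """Expand `task` into (action, target) steps, consuming one estimated
    3D gaze point (x, y, z in metres) per target-requiring action."""
    gaze = iter(gaze_points)
    steps = []
    for action in GRAMMAR[task]:
        target = next(gaze) if action in NEEDS_TARGET else None
        steps.append((action, target))
    return steps


# Two fixations: one on the object to pick, one on the place location.
print(plan("pick_and_place", [(0.3, 0.1, 0.05), (0.5, -0.2, 0.05)]))
# → [('move_to', (0.3, 0.1, 0.05)), ('grasp', None),
#    ('move_to', (0.5, -0.2, 0.05)), ('release', None)]
```

The point of the grammar is the one the abstract makes: the user only provides gaze fixations, and the system fills in the full low-level action sequence.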




