Model Predictive Control with Gaussian Processes for Flexible Multi-Modal Physical Human Robot Interaction
Physical human-robot interaction can improve human ergonomics, task efficiency, and the flexibility of automation, but it often requires application-specific methods to detect the human's state and determine the robot's response. At the same time, many potential human-robot interaction tasks involve discrete modes, such as phases of a task or multiple possible goals, where each mode has a distinct objective and human behavior. In this paper, we propose a novel method for multi-modal physical human-robot interaction that builds a Gaussian process model of the human force in each mode of a collaborative task. These models are then used for Bayesian inference of the current mode and to determine the robot's response through model predictive control (MPC). This approach enables optimization of the robot trajectory based on the belief over human intent, while accounting for robot impedance and human joint configuration, according to ergonomic and/or task-related objectives. The proposed method reduces programming time and complexity, requiring only a small number of demonstrations (here, three per mode) and a mode-specific objective function to commission a flexible online human-robot collaboration task. We validate the method in experiments on an admittance-controlled industrial robot performing a collaborative assembly task with two modes, where assistance is provided in all six degrees of freedom. The developed algorithm robustly re-plans in response to changes in human intent or in the robot's initial position, achieving online control at 15 Hz.
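As a minimal sketch of the per-mode force models and Bayesian mode inference described above: the paper models the full six-DoF interaction wrench, but here a single force component and a standard scikit-learn GP stand in, and the function names (fit_mode_models, update_belief) and data layout are hypothetical, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_mode_models(X_demo, F_demo):
    """Fit one GP of human force per mode from demonstrations.

    X_demo[m]: (N_m, d_x) robot poses for mode m
    F_demo[m]: (N_m,) one observed force component for mode m
    """
    models = []
    for X, F in zip(X_demo, F_demo):
        kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-2)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X, F)
        models.append(gp)
    return models

def update_belief(belief, models, x, f_obs):
    """Recursive Bayesian mode update from a new force measurement:
    b'(m) proportional to b(m) * N(f_obs | mu_m(x), sigma_m(x)^2)."""
    log_post = np.log(belief + 1e-12)
    for m, gp in enumerate(models):
        mu, std = gp.predict(x.reshape(1, -1), return_std=True)
        var = std[0] ** 2
        log_post[m] += -0.5 * ((f_obs - mu[0]) ** 2 / var
                               + np.log(2.0 * np.pi * var))
    post = np.exp(log_post - log_post.max())   # normalize in log space
    return post / post.sum()
```

The GP predictive variance matters here: in regions far from the demonstrations, all modes explain a measurement weakly, so the belief changes slowly rather than committing on unreliable evidence.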
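The MPC step can then weight each mode's objective by the current belief and re-plan over a receding horizon. The following rough illustration makes strong simplifying assumptions: a single-integrator model and generic quadratic goal costs stand in for the paper's impedance-aware, ergonomics-aware 15 Hz controller, and plan, mode_costs, and the goal points are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def plan(x0, belief, mode_costs, H=10, step_penalty=0.1):
    """Belief-weighted receding-horizon plan: choose H position increments
    minimizing the expected mode cost plus a control-effort penalty."""
    d = x0.size

    def expected_cost(u_flat):
        u = u_flat.reshape(H, d)
        x, cost = x0.copy(), 0.0
        for k in range(H):
            x = x + u[k]                      # single-integrator dynamics
            cost += sum(b * c(x) for b, c in zip(belief, mode_costs))
            cost += step_penalty * float(np.dot(u[k], u[k]))
        return cost

    res = minimize(expected_cost, np.zeros(H * d), method="L-BFGS-B")
    return res.x.reshape(H, d)[0]             # apply only the first step

# Hypothetical usage: two modes with quadratic costs toward distinct goals.
goals = [np.array([0.5, 0.0, 0.2]), np.array([0.5, 0.3, 0.2])]
mode_costs = [lambda x, g=g: float(np.sum((x - g) ** 2)) for g in goals]
u0 = plan(np.zeros(3), np.array([0.7, 0.3]), mode_costs)
```

Under an ambiguous belief, this expected-cost formulation yields hedging motions between the candidate goals, and it commits to a single goal as the force-based inference above concentrates the belief on one mode.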