*Kshirsagar, A., *Faibish, T., Hoffman, G. & Biess, A. (2022).
Lessons Learned from Utilizing Guided Policy Search for Human-Robot Handovers with a Collaborative Robot

In Proc. of the International Conference on Robotics, Automation and Artificial Intelligence (RAAI)

Abstract

We evaluate the performance of Guided Policy Search (GPS), a model-based reinforcement learning method, for generating the handover reaching motions of a collaborative robot arm. In previous work, we evaluated GPS for the same task, but only in a simulated environment. This paper replicates those findings in simulation and provides new insights on GPS when used on a physical robot platform. First, we find that a policy learned in simulation does not transfer readily to the physical robot, due to differences in model parameters and the safety constraints imposed on the real robot. Second, in order to successfully train a GPS model, the robot’s workspace needs to be severely reduced, owing to the joint-space limitations of the physical robot. Third, a policy trained with moving targets results in large worst-case errors, even in regions spatially close to the training target locations. Our findings motivate further research towards utilizing GPS in human-robot interaction settings, especially where safety constraints are imposed.
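
For readers unfamiliar with GPS, the sketch below illustrates its core structure on a toy 2-D reaching problem: per-condition local controllers (here plain finite-horizon LQR) generate supervision, and a single global policy is fit to their samples by regression. Everything in it (the point-mass dynamics, cost weights, target locations, and the omission of GPS's KL-constrained alternation between the local optimizers and the global policy) is an illustrative assumption, not the paper's setup or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (assumption, not the paper's robot): 2-D point mass,
# state x = [position, velocity], linear dynamics x' = A x + B u.
dt, T, n_x, n_u = 0.1, 30, 4, 2
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt ** 2 * np.eye(2), dt * np.eye(2)])

def lqr_gains(w_pos=10.0, w_u=1e-2):
    """Backward Riccati pass: time-varying gains that drive the state to zero."""
    Q = np.zeros((n_x, n_x)); Q[:2, :2] = w_pos * np.eye(2)
    R = w_u * np.eye(n_u)
    P, gains = Q.copy(), []
    for _ in range(T):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + K.T @ R @ K + (A + B @ K).T @ P @ (A + B @ K)
        gains.append(K)
    return gains[::-1]

def rollout_local(target, gains, noise=0.02):
    """Local (teacher) controller for one training target; returns (features, actions)."""
    x_goal = np.concatenate([target, np.zeros(2)])
    x, feats, acts = np.zeros(n_x), [], []
    for K in gains:
        u = K @ (x - x_goal) + noise * rng.standard_normal(n_u)
        feats.append(x - x_goal)          # policy input: state relative to target
        acts.append(u)
        x = A @ x + B @ u
    return np.array(feats), np.array(acts)

# 1) Local trajectory optimization for each training condition (target location).
train_targets = [np.array([1.0, 0.5]), np.array([0.8, -0.6]), np.array([-0.5, 0.9])]
gains = lqr_gains()

# 2) Supervised fit of a single global policy u = W (x - x_goal) on the
#    samples produced by the local controllers (ridge regression).
X, U = zip(*(rollout_local(g, gains) for _ in range(5) for g in train_targets))
X, U = np.vstack(X), np.vstack(U)
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_x), X.T @ U).T

# 3) Evaluate the global policy on a held-out target near the training ones.
test_target = np.array([0.9, 0.0])
x_goal = np.concatenate([test_target, np.zeros(2)])
x = np.zeros(n_x)
for _ in range(T):
    x = A @ x + B @ (W @ (x - x_goal))
print("final position error:", np.linalg.norm(x[:2] - test_target))
```

The sketch only captures the two-level idea (local optimizers supervising a global policy); full GPS additionally re-optimizes the local controllers under a KL constraint against the current global policy, which is where the paper's practical difficulties with joint-space limits and safety constraints arise.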