Learning to Transfer Human Hand Skills for Robot Manipulations

Seoul National University1, Carnegie Mellon University2
*Indicates Equal Contribution

Abstract

We present a method for teaching dexterous manipulation tasks to robots from human hand demonstrations. Unlike existing approaches that rely solely on kinematic information without accounting for the plausibility of robot-object interaction, our method directly infers plausible robot manipulation actions from human motion demonstrations. To address the embodiment gap between the human hand and the robot system, our approach learns a joint motion manifold over human hand movements, robot hand actions, and object movements in 3D, enabling us to infer one motion component from the others. Our key idea is to generate pseudo-supervision triplets that synthetically pair human, object, and robot motion trajectories. Through real-world robot hand manipulation experiments, we demonstrate that our data-driven retargeting method significantly outperforms conventional retargeting techniques based on end-effector alignment, effectively bridging the embodiment gap between human and robotic hands.
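To make the core idea concrete, the sketch below illustrates one plausible way to realize a joint motion manifold that couples human hand, object, and robot hand trajectories and is trained on pseudo-supervision triplets. This is not the authors' released code: the framework (PyTorch), module names, trajectory dimensions, and the simple GRU/MLP architecture are all assumptions made for illustration only.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# encode human-hand and object trajectories into a shared latent, then
# decode robot hand actions, trained on synthetic (human, object, robot) triplets.
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Encodes a (B, T, D) motion trajectory into one latent vector per sequence."""
    def __init__(self, input_dim: int, latent_dim: int = 128):
        super().__init__()
        self.gru = nn.GRU(input_dim, latent_dim, batch_first=True)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(traj)          # h: (1, B, latent_dim)
        return h.squeeze(0)            # (B, latent_dim)

class RobotActionDecoder(nn.Module):
    """Decodes the joint latent into a (B, T, robot_dof) robot action trajectory."""
    def __init__(self, latent_dim: int, robot_dof: int, horizon: int):
        super().__init__()
        self.horizon, self.robot_dof = horizon, robot_dof
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, horizon * robot_dof),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.mlp(z).view(-1, self.horizon, self.robot_dof)

class JointMotionManifold(nn.Module):
    """Fuses human-hand and object motion into a shared latent; predicts robot actions."""
    def __init__(self, human_dim=63, object_dim=9, robot_dof=16, horizon=60, latent_dim=128):
        super().__init__()
        self.human_enc = TrajectoryEncoder(human_dim, latent_dim)
        self.object_enc = TrajectoryEncoder(object_dim, latent_dim)
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)
        self.robot_dec = RobotActionDecoder(latent_dim, robot_dof, horizon)

    def forward(self, human_traj, object_traj):
        z = self.fuse(torch.cat([self.human_enc(human_traj),
                                 self.object_enc(object_traj)], dim=-1))
        return self.robot_dec(z)

# One training step on a placeholder batch of pseudo-supervision triplets.
model = JointMotionManifold()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
human, obj, robot_gt = (torch.randn(8, 60, 63),   # human hand keypoints over time
                        torch.randn(8, 60, 9),    # object pose/trajectory features
                        torch.randn(8, 60, 16))   # paired robot joint targets
pred = model(human, obj)
loss = nn.functional.mse_loss(pred, robot_gt)
loss.backward()
optim.step()
```

At inference time, only the human hand and object motions are observed and the decoder supplies the missing robot component, which is the "infer one motion component from the others" behavior described in the abstract; the specific losses, latent structure, and triplet-generation procedure used in the paper are not reproduced here.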

Video

BibTeX
