Teleoperation of a humanoid NAO robot using 3D motion capture & machine learning

We present and evaluate a novel method for teleoperating a humanoid robot via a full-body motion capture suit. Our method does not rely on any a priori analytical or mathematical model (e.g. forward or inverse kinematics) of the robot, so the approach can be applied to calibrate any human-robot pairing, regardless of differences in physical embodiment arising from the human operator's body, the motion capture device, and/or the robot's morphology. Our approach trains a feed-forward neural network for each degree of freedom (DOF) of the robot, learning a mapping between sensor data from the motion capture suit and the angular position of the robot actuator assigned to that network. To collect training data, the robot leads the human operator through a series of paired, synchronised movements, recording both the operator's motion capture data and the robot's actuator positions. Particle swarm optimisation is then used to train each neural network. Our experimental results demonstrate that this approach provides a fast, effective and flexible method for teleoperating a humanoid robot.
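The per-DOF learning scheme described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the network size, the PSO hyperparameters, and the synthetic stand-in data (random "sensor" vectors and a made-up "joint angle" target) are all assumptions chosen for brevity. It shows one small feed-forward network whose flattened weight vector is optimised by a basic global-best particle swarm to minimise mean squared error between predicted and recorded actuator angles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 6 mocap sensor channels -> 1 joint angle, 8 hidden units.
N_IN, N_HID = 6, 8
N_W = N_IN * N_HID + N_HID + N_HID + 1  # weights + biases for both layers

def forward(w, X):
    """One-hidden-layer feed-forward net; tanh hidden layer, linear output."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return np.mean((forward(w, X) - y) ** 2)

def pso_train(X, y, n_particles=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
    """Basic global-best PSO over the flattened network weights."""
    pos = rng.normal(scale=0.5, size=(n_particles, N_W))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, y) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    gbest_f = pbest_f.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, N_W))
        r2 = rng.random((n_particles, N_W))
        # Velocity update: inertia + pull towards personal and global bests.
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, y) for p in pos])
        improved = f < pbest_f
        pbest[improved] = pos[improved]
        pbest_f[improved] = f[improved]
        if f.min() < gbest_f:
            gbest_f = f.min()
            gbest = pos[np.argmin(f)].copy()
    return gbest, gbest_f

# Synthetic stand-in for one DOF's synchronised mocap/actuator recordings.
X = rng.normal(size=(200, N_IN))                 # "sensor" vectors
y = 0.5 * np.sin(X[:, 0]) + 0.3 * X[:, 1]        # pretend joint angle
w_best, err = pso_train(X, y)
print(f"final training MSE: {err:.4f}")
```

In the full system one such network would be trained per robot DOF, each on the same suit data but with its own actuator's angle as the target, so the mapping requires no kinematic model of either the human or the robot.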

This work was accepted for publication as: C. Stanton, A. Bogdanovych, E. Ratanasena: Teleoperation of a humanoid robot using full-body motion capture, example movements, and machine learning. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA 2012), Wellington, New Zealand, 3-5 December 2012. The paper is available at: http://staff.scm.uws.edu.au/~anton/Publications/acra_2012.pdf