- Pocket-PC Interface for Robots
- My current project:
The main goal of the project is to make a humanoid robot learn and reproduce new tasks by imitating a human demonstration. The task consists of manipulating objects in its environment. Object manipulation is one of the most useful tasks in humanoid robotics, but it is also one of the most complex.
Behavioral studies with adults, babies and animals have shown that we can distinguish different levels of imitation. A grasping task demonstrated by a human can seem easy for another human to understand and reproduce. But if we look further, it is not easy to determine which features are essential for a good reproduction of the task:
What must be imitated? Must the same object be grasped? With the same hand? With the same posture? At the same speed? Along the same trajectory?
To answer these questions from a robotics engineering point of view, a strategy of imitation must be modeled.
The datasets produced by a system such as a humanoid robot are not of the same type (angular trajectories of the limb joints, object trajectories in Cartesian space, visual data, ...). They come with different error models and coordinate systems. To deal with these multi-dimensional datasets, a probabilistic approach is proposed.
Current state of the work:
At the moment, my work focuses mainly on angular trajectory analysis and reproduction.
When the user demonstrates a task, a stream of joint-angle data from the arms is collected.
Two consecutive demonstrations of the same task do not produce exactly the same dataset, but must still be recognized by the robot as the same task.
To build a system that imitates, the main goal is to extract only the relevant features, that is, the features that most accurately describe the gestures.
These features must be sufficient to reproduce the task, fully or partially.
I am using the SL simulator by Stefan Schaal to simulate the 30-degrees-of-freedom humanoid robot at ATR.
The simulated environment consists of a table and different boxes. The robot alternately takes the roles of teacher and learner.
The kinematics (i.e. angular velocity values) of the 14 degrees of freedom of the arms are recorded.
Studies on the coordination of the joint angles of the arms during typical human manipulation tasks have shown that synergies exist between the velocities of the different joint angles, and that these synergies are user- and goal-independent. They can be seen as sequences of events that are robust to noise.
The idea is then to translate the angular velocity values into sequences of finite states, dramatically reducing the amount of data. Only the relevant information is kept, providing a tractable subset for analysis. This relevant information consists of angular events such as local maxima or minima of the angular positions (i.e. points of zero angular velocity).
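The event-extraction step described above can be sketched as follows. This is a minimal illustration, not the actual implementation: it scans a sampled joint-angle trajectory and emits an event wherever the finite-difference velocity crosses zero (a local maximum or minimum of the angular position). The function name and data are invented for the example.

```python
# Sketch: translate a sampled joint-angle trajectory into a short
# sequence of discrete events (local maxima and minima, i.e. points
# where the angular velocity crosses zero).

def extract_events(angles):
    """Return a list of (sample index, 'max'|'min') events for one joint."""
    events = []
    for i in range(1, len(angles) - 1):
        prev_v = angles[i] - angles[i - 1]      # backward difference
        next_v = angles[i + 1] - angles[i]      # forward difference
        if prev_v > 0 and next_v <= 0:
            events.append((i, 'max'))           # velocity crosses zero going down
        elif prev_v < 0 and next_v >= 0:
            events.append((i, 'min'))           # velocity crosses zero going up
    return events

# Example: a joint angle that swings up, down, then up again.
trajectory = [0.0, 0.4, 0.8, 1.0, 0.7, 0.3, 0.1, 0.5, 0.9]
print(extract_events(trajectory))  # [(3, 'max'), (6, 'min')]
```

A full trajectory over 14 degrees of freedom thus reduces to a handful of labeled events per joint, which is the compact representation the classification stage works with.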
Techniques such as Hidden Markov Models (HMMs) are then evaluated to classify these sequences and reproduce them.
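As a rough sketch of how such HMM-based classification could work, the forward algorithm scores a discrete event sequence under each candidate task model, and the best-scoring model wins. The two toy models, their probabilities, and the task names below are invented for illustration and are not parameters of the actual system.

```python
# Sketch: classify a discrete event sequence by forward-algorithm
# likelihood under competing HMMs (one HMM per known task).

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) for a discrete HMM via the forward algorithm.

    pi: initial state distribution; A: state transition matrix;
    B: emission matrix, B[state][symbol].
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)

# Two toy 2-state models over event symbols {0: 'min', 1: 'max'}.
model_reach = ([0.9, 0.1], [[0.7, 0.3], [0.2, 0.8]], [[0.9, 0.1], [0.1, 0.9]])
model_wave  = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]])

seq = [0, 0, 1, 1]  # an observed event sequence
scores = {name: forward_likelihood(seq, *m)
          for name, m in [('reach', model_reach), ('wave', model_wave)]}
print(max(scores, key=scores.get))  # → reach
```

In practice the model parameters would be trained from the demonstrated sequences (e.g. with Baum-Welch) rather than fixed by hand as here.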
Once a sequence is retrieved, techniques such as Gaussian fitting or B-spline interpolation are evaluated to see whether they can approximate the velocity and position profiles of the demonstrated trajectory accurately enough.
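The Gaussian-fitting idea can be sketched minimally as follows: a bell-shaped velocity profile between two events is summarized by a single Gaussian, estimated here by the method of moments (normalize the profile, take its mean and spread). This stands in for the fitting step only; the actual system may use a proper least-squares fit or B-splines instead, and the sampled profile below is synthetic.

```python
# Sketch: approximate a bell-shaped velocity profile by one Gaussian,
# using the method of moments on the (non-negative) velocity samples.
import math

def fit_gaussian(times, velocities):
    """Return (amplitude, mean, std) of a Gaussian summary of the profile."""
    total = sum(velocities)
    mu = sum(t * v for t, v in zip(times, velocities)) / total
    var = sum(v * (t - mu) ** 2 for t, v in zip(times, velocities)) / total
    return max(velocities), mu, math.sqrt(var)

def gaussian(t, amp, mu, sigma):
    return amp * math.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Synthetic velocity profile: peak 1.0 rad/s at t = 1.0 s, width 0.3 s,
# sampled every 0.1 s.
times = [i * 0.1 for i in range(21)]
velocities = [gaussian(t, 1.0, 1.0, 0.3) for t in times]

amp, mu, sigma = fit_gaussian(times, velocities)
print(mu, sigma)  # recovers roughly the original mean and width
```

The fitted parameters (peak velocity, timing, and duration) are then enough to regenerate a smooth velocity profile, and integrating it gives the corresponding position profile.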
The system is planned to be tested on the real DB-Humanoid Platform at the end of the year.