PDA Interface for Robota: Language Acquisition

This project develops a language game application in which the user can teach the robot words to describe objects in the environment or motions the robot can perform. The project uses a Compaq iPAQ-3850 Pocket-PC, equipped with a FlyCAM-CF camera, the ELAN speech synthesizer and the Conversay speech engine.

Principle: Humans and robots have different visual, tactile and auditory perceptions. To successfully transmit information, they must build a shared understanding of a vocabulary that designates the same events. This is achieved by reducing the number of features in the shared perceptual space, thus building a robust learning system that can handle varied situations and noisy data.
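As a purely illustrative sketch of this principle (not the project's actual code), the snippet below keeps only the components of the raw perceptual vector whose variance across teaching interactions exceeds a threshold, so that word learning operates on a small, robust subset of features; the function name and threshold are assumptions made for the example.

```python
# Hypothetical sketch: shrink a shared perceptual space by discarding
# near-constant features, keeping only those that vary across interactions.
import numpy as np

def reduce_perceptual_space(observations: np.ndarray, var_threshold: float = 0.05):
    """observations: (n_samples, n_features) array of raw visual/motor percepts.

    Returns the indices of the retained features and the reduced observations.
    """
    variances = observations.var(axis=0)
    keep = np.where(variances > var_threshold)[0]   # features that actually vary
    return keep, observations[:, keep]

# Example: 50 interactions, 20 raw features, only a few of which carry signal.
rng = np.random.default_rng(0)
raw = rng.normal(scale=0.01, size=(50, 20))
raw[:, [2, 7, 11]] += rng.normal(scale=1.0, size=(50, 3))  # informative features
kept, reduced = reduce_perceptual_space(raw)
print(kept, reduced.shape)   # e.g. [ 2  7 11] (50, 3)
```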


People involved in this project:

The language game application on the Pocket-PC was implemented on the mini humanoid robot Robota (left) and on the 30-degrees-of-freedom humanoid robot DB (right), developed by the Kawato ERATO Project and part of the HRCN dept. at ATR.


Control architecture of the language acquisition game

In our language learning application, the robot learns the meaning of words by imitating the user.
A built-in module allows the robot to imitate (in mirror fashion) the user's arm and head motions. The robot associates the user's vocal utterances with its visual perception of the movement and with the motor commands executed during the imitation. Speech feedback is provided when the robot has parsed keywords from the user's speech.
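A minimal sketch of this association step is given below; the class and method names are hypothetical and stand in for the robot's actual modules, and the print call is a stand-in for the ELAN speech feedback.

```python
# Hypothetical sketch of the association step: while the robot imitates,
# each keyword parsed from the user's utterance is linked to the motor
# command being executed and to the perceived movement, and a short
# spoken acknowledgement is produced.
from collections import defaultdict

class WordMotorAssociator:
    def __init__(self, known_keywords):
        self.known_keywords = set(known_keywords)
        # keyword -> list of (motor_command, visual_feature) evidence
        self.associations = defaultdict(list)

    def parse_keywords(self, utterance):
        return [w for w in utterance.lower().split() if w in self.known_keywords]

    def observe(self, utterance, motor_command, visual_feature):
        keywords = self.parse_keywords(utterance)
        for word in keywords:
            self.associations[word].append((motor_command, visual_feature))
        if keywords:
            # stand-in for the ELAN speech synthesizer feedback
            print(f"Robot says: I heard '{' '.join(keywords)}'")
        return keywords

# Teaching episode: the user speaks while the robot mirrors the motion.
robot = WordMotorAssociator(known_keywords={"raise", "arm", "head"})
robot.observe("please raise your arm", motor_command="arm_up", visual_feature="arm_motion")
```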

Once the robot has correctly learned the meaning of a word, it can then execute the motor commands associated with that word, thus performing the correct action upon verbal command. The demonstrator plays a crucial role in constraining the situation, reducing the learning space, and providing pragmatic feedback. By focusing the robot's attention on the relevant features of the environment, the amount of storage required for the representation is reduced and the speed of learning is increased.
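The retrieval side can be sketched as follows; the mapping and function names are again hypothetical, assuming word-to-command associations like those built in the previous snippet.

```python
# Hypothetical sketch: once a word has a stable motor association, a verbal
# command triggers the corresponding action instead of an imitation episode.
learned = {"raise": "arm_up", "nod": "head_nod"}   # word -> motor command

def execute_verbal_command(utterance, motor_interface=print):
    for word in utterance.lower().split():
        if word in learned:
            motor_interface(f"executing motor command: {learned[word]}")
            return learned[word]
    return None   # no known word parsed; the robot stays idle

execute_verbal_command("now raise it again")   # -> executing motor command: arm_up
```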

Papers

Videos