Robota Research

Robota and Autism

The ability to imitate spontaneously and interactively is a marker of normal cognitive development in children. Children with autism are often impaired in their ability to imitate. Research with Robota investigates how the imitation game can be used to help these children learn these fundamental skills.

Software Developments

Personal Digital Assistant (PDA) Interface for Humanoid Robots – Imitation Game

This project develops a vision-based module that detects the motion of the user's arms and head and lets the mini-humanoid robot Robota imitate that motion. The project uses a Compaq iPAQ-3850 Pocket-PC fitted with a FlyCAM-CF camera.

Robota imitates, in mirror fashion, the user's upward movement of the right arm

A built-in module allows the robot to imitate (in mirror fashion) the user’s motion of the arms and the head.

The camera image is divided into nine areas to track the head and the two arms separately

Arm tracking is based on optical-flow detection, while head tracking is based on local template matching of the tip of the nose. This feature has symmetry and convexity properties that ensure robust tracking during head rotation.
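
As a rough illustration, the sketch below re-creates this tracking scheme in Python with OpenCV (the original ran on the Pocket-PC). The 3x3 cell assignments, parameter values, and the nose template are assumptions made for the example, not details of the original implementation.

    import cv2
    import numpy as np

    GRID = 3  # the frame is split into 3x3 = 9 areas

    def cell(img, row, col):
        # Return one cell of the 3x3 grid as a sub-image.
        h, w = img.shape[:2]
        ch, cw = h // GRID, w // GRID
        return img[row * ch:(row + 1) * ch, col * cw:(col + 1) * cw]

    def arm_flow(prev_gray, gray, row, col):
        # Arms: dense optical flow in one grid cell; the mean vertical
        # component tells whether the arm moves up (negative) or down.
        flow = cv2.calcOpticalFlowFarneback(cell(prev_gray, row, col),
                                            cell(gray, row, col), None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return float(np.mean(flow[..., 1]))

    def nose_position(gray, nose_template):
        # Head: local template matching of the nose tip in the top-centre
        # cell; the symmetry and convexity of the nose keep the match
        # stable while the head rotates.
        scores = cv2.matchTemplate(cell(gray, 0, 1), nose_template,
                                   cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)
        return best  # (x, y) of the best match within the cell

    # Mirror-fashion imitation: motion in the image's left column (the
    # user's right arm) is mapped to the robot's left arm, and vice versa.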

Calinon, S. and Billard, A. (2003). PDA Interface for Humanoid Robots. In Proceedings of the Third IEEE International Conference on Humanoid Robots, Munich and Karlsruhe, Germany. [link]

Personal Digital Assistant (PDA) Interface for Robota: Language Acquisition

This project develops a language-game application in which the user can teach the robot words describing objects in the environment or motions the robot can perform. The project uses a Compaq iPAQ-3850 Pocket-PC fitted with a FlyCAM-CF camera, the ELAN speech synthesizer, and the Conversay speech engine.

Principle: Humans and robots have different visual, tactile, and auditory perceptions. To transmit information successfully, they must build a shared understanding of a vocabulary designating the same events. This is achieved by reducing the number of features in the shared perceptual space, thus building a robust learning system that can handle varied situations and noisy data.
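
For instance, a continuous perception can be collapsed onto a handful of discrete events that both partners can refer to. The sketch below illustrates the idea in Python; the event names and the threshold are illustrative assumptions, not values from the original system.

    def quantize_motion(mean_vertical_flow, threshold=2.0):
        # Collapse a continuous flow measurement onto a small shared
        # vocabulary of events, discarding every other feature of the
        # visual input.
        if mean_vertical_flow < -threshold:
            return "ARM_UP"      # upward image motion
        if mean_vertical_flow > threshold:
            return "ARM_DOWN"    # downward image motion
        return "NO_MOTION"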

Control architecture of the language acquisition game

In our language-learning application, the robot learns the meaning of words by imitating the user.
A built-in module allows the robot to imitate (in mirror fashion) the user's motion of the arms and the head. The robot associates the user's vocal utterances with its visual perception of the movement and with the motor commands executed during the imitation. Speech feedback is provided once the robot has parsed keywords from the user's speech.

Once the robot has correctly learned the meaning of a word, it can execute the motor commands associated with that word, thus performing the correct action upon verbal command. The demonstrator plays a crucial role in constraining the situation, reducing the learning space, and providing pragmatic feedback. By focusing the robot's attention on the relevant features of the environment, the demonstrator reduces the amount of storage required for representation and increases the speed of learning.
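
A minimal sketch of this learning loop, under the assumption that the game reduces to a keyword-to-motor-command association table (all names below are illustrative, not taken from the original system):

    class LanguageGame:
        def __init__(self):
            self.lexicon = {}  # learned word -> motor command label

        def teach(self, utterance, current_motor_command):
            # While the robot imitates the demonstrator, every keyword
            # parsed from speech is bound to the motor command being
            # executed at that moment.
            for word in utterance.lower().split():
                self.lexicon[word] = current_motor_command
            return "heard: " + utterance  # speech feedback after parsing

        def execute(self, utterance):
            # Once a word is learned, a verbal command alone triggers
            # the associated action.
            for word in utterance.lower().split():
                if word in self.lexicon:
                    return self.lexicon[word]
            return None  # unknown words produce no action

    game = LanguageGame()
    game.teach("arm up", "LEFT_ARM_UP")         # taught during imitation
    assert game.execute("up") == "LEFT_ARM_UP"  # later, verbal command only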

The language-game application on Pocket-PC was implemented on the mini-humanoid robot Robota (left) and on the 30-degrees-of-freedom humanoid robot DB (right), developed by the Kawato Erato Project and part of the HRCN department at ATR.

Calinon, S. and Billard, A. (2003). PDA Interface for Humanoid Robots. In Proceedings of the Third IEEE International Conference on Humanoid Robots, Munich and Karlsruhe, Germany. [link]

Learning Dance Motion

Demo for the exhibition Mission BioSpace at La Cité de l'Espace, Toulouse, from 1 April 2004

Hardware Developments 

Prototype of a 3-DOF pair of eyes

Prototype of a 5-DOF arm

Prototype of a 3-DOF, three-fingered hand