Rajesh Rao, an associate professor of computer science and engineering at the University of Washington’s Neural Systems Laboratory, has developed a brain-machine interface (BMI) which can be used to control the movements of a small humanoid robot.
The device is non-invasive – it is based on electroencephalography (EEG), and consists of a cap fitted with 32 electrodes. The cap gathers electrical signals (event-related potentials) from the scalp over the motor and premotor cortices and sends them to the robot. Currently, the device can convey only basic instructions to the robot, such as which direction to move in or which object to pick up. This is because it detects the brain’s electrical activity only indirectly, from electrodes on the scalp, rather than from within the brain itself.
In the film clip below, one of Rao’s graduate students demonstrates the device. The student and the robot are in different rooms within the same building, but they could be separated by any distance as long as the two locations share an internet connection. Visual feedback from the robot is provided via two cameras attached to the top of its head. The student specifies which object he wants the robot to pick up by focusing on it; the BMI detects the brain’s visually-evoked response to the focusing, and communicates it to the robot:
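The idea of reading out an attended target from evoked responses can be illustrated with a toy simulation. The sketch below is purely hypothetical and is not Rao’s actual system: it assumes that flashing or attending to each candidate object produces event-locked EEG epochs, and that averaging epochs suppresses background noise so the attended target shows the largest evoked deflection (as in P300-style interfaces). All names and data are invented for illustration.

```python
# Hypothetical sketch: choosing the attended target from EEG epochs
# by averaging event-locked responses. Not the actual algorithm used
# in Rao's BMI; all names and signals here are illustrative.
import numpy as np

def pick_target(epochs_by_target):
    """epochs_by_target maps a target name to an array of shape
    (n_epochs, n_samples). Averaging across epochs attenuates
    background EEG, leaving the event-related potential; the target
    with the largest evoked peak is taken to be the attended one."""
    scores = {t: np.abs(e.mean(axis=0)).max()
              for t, e in epochs_by_target.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
n_epochs, n_samples = 40, 100

def noise():
    # Background EEG modelled as unit-variance Gaussian noise.
    return rng.normal(0.0, 1.0, (n_epochs, n_samples))

erp = np.zeros(n_samples)
erp[30:40] = 3.0  # simulated evoked deflection around ~300 ms

epochs = {"cube": noise(), "ball": noise() + erp, "cone": noise()}
print(pick_target(epochs))  # → ball (the target carrying the evoked response)
```

Averaging works here because the evoked response is time-locked to the stimulus while the background activity is not, so the noise shrinks roughly as one over the square root of the number of epochs.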
The robot may look primitive, but it has apparently ‘learnt’ how to walk by imitating humans. The kinematics of human locomotion were captured from a demonstration (top row of images on the left) and conveyed to the robot using a motion capture system. The second and third rows of images show simulations of the robot’s movements before and after learning, respectively, and the bottom row shows the movements of the real robot. The learning algorithm estimates the optimal motor commands needed to imitate the gait, which reduces the complexity of the required computations.
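The core idea of recovering motor commands from observed kinematics can be sketched in a few lines. The toy example below is an assumption-laden illustration, not the algorithm Rao’s group used: it posits a known linear dynamics model, rolls out a demonstrated “gait” command sequence, and then recovers the commands from the state trajectory alone by least squares.

```python
# Hypothetical sketch: estimating the motor commands behind a
# demonstrated trajectory, assuming (for illustration only) known
# linear dynamics x_{t+1} = A x_t + B u_t. The robot's actual
# learning algorithm is not described in this post.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # toy joint-state dynamics (position, velocity)
B = np.array([[0.0],
              [0.1]])             # command enters through the velocity

# The demonstrated "gait": a sinusoidal command sequence.
true_u = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))

# Roll out the demonstration to obtain the observed kinematics.
x = np.zeros(2)
traj = [x]
for u in true_u:
    x = A @ x + B.flatten() * u
    traj.append(x)
traj = np.array(traj)

# Recover each command from consecutive states via the pseudoinverse:
# u_t = B^+ (x_{t+1} - A x_t), a least-squares estimate.
Bpinv = np.linalg.pinv(B)
est_u = np.array([(Bpinv @ (traj[t + 1] - A @ traj[t]))[0]
                  for t in range(len(true_u))])
print(np.allclose(est_u, true_u))  # → True: commands recovered from kinematics
```

In this noiseless linear toy the recovery is exact; a real humanoid has nonlinear, partially unknown dynamics, which is why estimating commands directly from the demonstrated gait (rather than searching over all possible movements) cuts down the computation.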
Rao says that the work “suggests that one day we might be able to use semi-autonomous robots for such jobs as helping disabled people or performing routine tasks in a person’s home.” Achieving this will require more sophisticated control of the robot and greater interaction between the robot and its environment. He and his colleagues now hope to make the robot’s behaviour adaptive to its surroundings, by equipping it to move from one room to another and to negotiate obstacles in its path.