Josh Bongard, an assistant professor in the University of Vermont’s Department of Computer Science, and Victor Zykov and Hod Lipson of Cornell University’s Computational Synthesis Laboratory have designed and built the Black Starfish, a four-legged robot that “automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage.”
The “self-aware” robot tracks its own movements via tilt and angle sensors in the joints of its limbs. It doesn’t “know” how it has been built; instead, it infers an internal model of itself from the relationship between the motor commands it sends and the sensor readings it gets back. That data is fed into an optimization program called a genetic algorithm (a “digital version of natural selection”), which evolves candidate self-models until one reliably predicts the sensor data. A second genetic algorithm then evolves candidate gaits against that self-model: when one of its limbs is damaged, the robot rehearses alternative gaits on the updated internal model rather than physically testing each one, and only the gait the model predicts will work best is tried on the real hardware.
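The two-stage loop described above can be sketched in miniature. This is not the authors’ code; it is a toy illustration under invented assumptions: the robot’s hidden morphology is reduced to four “leg length” numbers, the actuation-sensation relationship to a single dot product, and both stages use the same bare-bones genetic algorithm (truncation selection plus Gaussian mutation). The point is only the structure: one GA fits a self-model to self-directed experiments, and a second GA optimizes behavior against that model instead of the physical robot.

```python
import random

random.seed(0)

# Hidden "true" morphology the robot must infer (hypothetical values;
# the third leg is shortened, as if damaged).
TRUE_BODY = [1.0, 1.0, 0.6, 1.0]

def sensor_reading(body, action):
    # Toy actuation-sensation model: the tilt-sensor response is a
    # simple linear function of the leg parameters.
    return sum(b * a for b, a in zip(body, action))

# Self-directed experiments: random motor actions and the readings
# the (hidden) real body produces for them.
actions = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
observations = [sensor_reading(TRUE_BODY, a) for a in actions]

def model_error(candidate):
    # Disagreement between a candidate self-model's predictions
    # and the observed sensor data (sum of squared errors).
    return sum((sensor_reading(candidate, a) - o) ** 2
               for a, o in zip(actions, observations))

def evolve(fitness, genome_len, generations=100, pop_size=30):
    # Minimal genetic algorithm: keep the fitter half of the population,
    # refill with Gaussian-mutated copies of surviving parents.
    pop = [[random.uniform(0.0, 1.5) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)           # lower fitness value is better
        parents = pop[: pop_size // 2]
        children = [[g + random.gauss(0, 0.05)
                     for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

# Stage 1: evolve a self-model that explains the sensor data.
self_model = evolve(model_error, genome_len=4)

# Stage 2: evolve a gait against the self-model, never the real robot.
def gait_cost(gait):
    # Hypothetical objective: maximize the forward push the
    # self-model predicts (so minimize its negation).
    return -sensor_reading(self_model, gait)

best_gait = evolve(gait_cost, genome_len=4)
print("inferred body:", [round(x, 2) for x in self_model])
```

In the real system each stage is far richer (full 3-D physical simulation, sixteen self-model candidates, actively chosen rather than random experiments), but the division of labor is the same: the first optimizer shapes the model, the second exploits it.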
Here’s the abstract of the research group’s most recent paper, published in the current issue of Science:
Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals.
I am of the opinion that computers, or any other type of machine, will never be capable of “thinking”. Yet some artificial intelligence theorists hold that what we call “the mind” is little more than an algorithm (a set of mathematical procedures) carried out by the brain, and that, if this algorithm can be deduced, it can then be programmed into a computer, which would itself then have a mind and the ability to think.