Mo Costandi

Is this a sentient machine?


Josh Bongard, an assistant professor at the University of Vermont’s Department of Computer Science, and Victor Zykov and Hod Lipson of Cornell University’s Computational Synthesis Laboratory have designed and built the Black Starfish, a four-legged robot which “automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage.”

The “self-aware” robot can track its own movements via tilt and angle sensors in the joints of its limbs. It doesn’t “know” how it has been built, but rather generates an internal model of itself by sending information to, and receiving it from, the sensors. The sensor data are fed into an optimization program called a genetic algorithm (a “digital version of natural selection”). A second genetic algorithm generates candidate alternative gaits; when one of its limbs is damaged, the robot rehearses these gaits on its internal self-model, rather than physically testing each one, to determine which gait best restores locomotion after the damage.
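To give a flavour of how a genetic algorithm optimizes in this way, here is a minimal, purely illustrative sketch in Python. It evolves a small vector of “gait parameters” against a toy fitness function; the target values, population size, and mutation settings are all my own assumptions for demonstration, not the researchers’ actual implementation (which evaluates candidate gaits against the robot’s learned self-model rather than a known target).

```python
import random

# Illustrative genetic algorithm: evolves a vector of hypothetical gait
# parameters (e.g. joint phase offsets). In the real system, fitness would
# be computed by rehearsing the gait on the robot's internal self-model;
# here we use a toy target vector as a stand-in.

TARGET = [0.2, 0.5, 0.8, 0.3]  # hypothetical "ideal" gait parameters

def fitness(genome):
    # Higher is better: negative squared distance to the target gait.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1, scale=0.05):
    # Each gene has a small chance of being nudged by Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover: splice the front of one parent onto
    # the back of the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=200, genome_len=4):
    random.seed(0)  # fixed seed so the run is reproducible
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half unchanged (elitism), then refill the
        # population with mutated offspring of random survivor pairs.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the fittest individuals survive each generation unchanged, the best candidate can only improve or stay put, so the population steadily converges toward the target; swapping the toy fitness function for one that scores a gait on a body model is, conceptually, all that separates this sketch from the robot’s approach.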

Here’s the abstract from the research group’s most recent paper, which is published in the current issue of Science:

Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals.

I am of the opinion that computers, or any other type of machine, will never be capable of “thinking”. Yet some artificial intelligence theorists hold that what we call “the mind” is little more than an algorithm (a set of mathematical procedures) carried out by the brain, and that, if this algorithm can be deduced, it can then be programmed into a computer, which would itself then have a mind and the ability to think.

[Part 2]