Is this a sentient machine?

Josh Bongard, an assistant professor at the University of Vermont’s Department of Computer Science, and Victor Zykov and Hod Lipson of Cornell University’s Computational Synthesis Laboratory have designed and built the Black Starfish, a four-legged robot that “automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage.”

[Image: the starfish robot walking]

The “self-aware” robot can track its own movements via tilt and angle sensors in the joints of its limbs. It doesn’t “know” how it has been built; rather, it generates an internal model of itself from the information it sends to, and receives from, these sensors. The sensor data are fed into an optimization program called a genetic algorithm (a “digital version of natural selection”). Another genetic algorithm generates models of possible alternative gaits; when one of its limbs is damaged, the robot acts out the movements of alternative gaits on these internal models, rather than physically testing each one, to determine which gait best restores movement after the damage.
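
To make this two-stage process concrete, here is a rough sketch of a generic genetic algorithm and of how the two stages might call it. This is purely illustrative, not the authors’ code; the helper names (sensor_error, simulate, random_body_model and the mutation operators) are hypothetical stand-ins for the real machinery.

    import random

    def evolve(fitness, random_candidate, mutate, generations=100, pop_size=20):
        # Minimal genetic algorithm: rank candidates by fitness, keep the
        # fitter half, and refill the population with mutated survivors.
        population = [random_candidate() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            population = survivors + [
                mutate(random.choice(survivors))
                for _ in range(pop_size - len(survivors))
            ]
        return max(population, key=fitness)

    # Stage 1: evolve a self-model that best explains the robot's recorded
    # actuation-sensation data (sensor_error is a hypothetical helper that
    # compares a model's predicted tilt/angle readings with the real ones).
    # best_model = evolve(lambda m: -sensor_error(m, recorded_data),
    #                     random_body_model, mutate_model)

    # Stage 2: evolve a gait, scored entirely on the self-model rather than
    # on the physical robot (simulate is likewise hypothetical).
    # best_gait = evolve(lambda g: simulate(best_model, g).distance,
    #                    random_gait, mutate_gait)

The point of the second stage is that candidate gaits are cheap to evaluate on the internal model, so the robot only has to try the single best one on its actual body.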

Here’s the abstract from the research group’s most recent paper, published in the current issue of Science:

Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals.

I am of the opinion that computers, or any other type of machine, will never be capable of “thinking”. Yet some artificial intelligence theorists hold that what we call “the mind” is little more than an algorithm (a set of mathematical procedures) carried out by the brain, and that, if this algorithm can be deduced, it can then be programmed into a computer, which would itself then have a mind and the ability to think.

[Part 2]

6 thoughts on “Is this a sentient machine?”

  1. Is this a sentient machine? Come on! I don’t believe for a second that you think this is ‘conscious’. A bloody sensationalist headline on a blog about philosophy of neuroscience! This is not a tabloid; be more responsible!

  2. Kai, you say that you don’t believe that I think this machine is ‘conscious’, and you’re absolutely right. If you actually read the post, you’ll see that in the last paragraph I state very clearly my belief that machines will never be capable of ‘thinking’.

    As for the “sensationalist headline”, I am merely emphasising what the authors of the paper are implying. Did you notice the question mark at the end of the title? I see nothing wrong with an attention-grabbing title and, as far as I am concerned, I am not being irresponsible by giving the post that title. The Chambers Online Dictionary defines the word ‘sentient’ as meaning “capable of sensation or feeling; conscious or aware of something”, and the robot is aware of its own movements.

    At least we both agree that this is a blog about neuroscience (albeit with too little philosophy), and not a tabloid!

  3. The question of whether this mechanism is sentient is rather like the question of whether you are bald. From your picture, you’re balder than me but not as bald as others. This mechanism is more sentient than a thermostat, and less sentient than most organisms. I think the video is compelling evidence that it is as sentient as the locomotor systems of many organisms.

    I’m unclear about the basis for your skepticism about AI. Is your claim just the contingent claim that no mechanism made up of ‘artificial’ as opposed to organic materials will ever duplicate the functionality of an uncontroversially sentient system like a human being? This may be true, but it does not touch the central point of AI: that any artificial system that did match human functionality across domains would count as intelligent. Do you wish to deny this claim? On what grounds? What are we other than unfathomably complex, integrated systems of simple mechanisms such as the one in the video? What reason is there to think that the neural systems responsible for our lowest-level locomotor behavior, including graceful recovery from damage, aren’t relevantly similar to the mechanism in this robot? And why is scaling up to human functionality via the integration of millions of such mechanisms, each devoted to different lower-level tasks, impossible?

  4. Sorry, I didn’t use the blockquote tag properly. “I’m unclear about the basis for your skepticism about AI” was the sentence I was responding to in the second paragraph.

    The Encarta quote is:

    sentient
    1. conscious: capable of feeling and perception (“a sentient being”)
    2. responding with feeling: capable of responding emotionally rather than intellectually
    [Mid-17th century]

    Is that what you meant, Neurophilosopher? This robot is like a thermostat, just more complex, or like, say, a mammal, just less complex?

    All your other questions seem based on this assumption, which is ill-founded. I have read Minsky, Brooks, Pinker and Kurzweil and I am totally convinced that A.I. is already here and growing exponentially. A.I. that surpasses us (merges with us?) will happen on a timeframe measured in decades. So you can see I’m no skeptic.

    But words are funny. They are fuzzy. A.I. is a bad label for lots of reasons, especially for describing the fruits of A.I. research that currently exist. But that is what we are stuck with until the A.I. theorists insist on their own labels. Sentient is another fuzzy word, like consciousness. I think they should only be used when one wants to avoid precision in communication. If the word sentient covers a range of complexity from thermostats to humans, then I’d say it can’t be used to say anything very meaningful. ‘A step toward sentience?’ would have been a better title.

    Neurophilosopher, you may want to find a better dictionary. I like Encarta very much. Even though it too uses the fuzzy word conscious in its definition, at least it spells out that it means ‘capable of feeling and perception’ when it uses it (see the definition quoted above).

  5. My argument against AI is philosophical. It is also one of semantics: what exactly do we mean by “thinking”, and what does it mean to be sentient? When I update the post, I’ll go into greater detail. Thanks again for your comments.
