Is this a sentient machine? Part 2

This should have been posted quite a while ago, as part 1 was written back in November. I concluded the first post rather vaguely, as follows:

I am of the opinion that computers, or any other type of machine, will never be capable of “thinking”. Yet some artificial intelligence theorists hold that what we call “the mind” is little more than an algorithm (a set of mathematical procedures) carried out by the brain, and that, if this algorithm can be deduced, it can then be programmed into a computer, which would itself then have a mind and the ability to think.

On the day part 1 was posted, Tad left this comment, which compelled me to elaborate on my conclusion to part 1:

I’m unclear about the basis for your skepticism about AI. Is your claim just the contingent claim that no mechanism made up of ‘artificial’ as opposed to organic materials will ever duplicate the functionality of an uncontroversially sentient system like a human being? This may be true, but it does not touch the central point of AI: that any artificial system that did match human functionality across domains would count as intelligent. Do you wish to deny this claim? On what grounds? What are we other than unfathomably complex, integrated systems of simple mechanisms such as the one in the video? What reason is there to think that the neural systems responsible for our lowest-level locomotor behavior, including graceful recovery from damage, aren’t relevantly similar to the mechanism in this robot? And why is scaling up to human functionality via the integration of millions of such mechanisms, each devoted to different lower-level tasks, impossible?

Tad makes a strong argument, and it contains two main points, which I’ll tackle one at a time. But trying to answer the question “Can machines think?” definitively is futile, so below I merely set out the main arguments for and against machine intelligence.

Is your claim just the contingent claim that no mechanism made up of ‘artificial’ as opposed to organic materials will ever duplicate the functionality of an uncontroversially sentient system like a human being?

No. I regard the mind as an emergent property of the brain. Many neuroscientists think of the brain in terms of a biological computer. (This is known as the Computational Theory of Mind). According to this view, ‘mind’, or ‘thinking’, can be regarded as a series of multiple, parallel computations. It is irrelevant whether those computations are being performed by a biological machine or a mechanical one. However, it should be pointed out that there are many who believe that certain functions of the brain (especially those pertaining to consciousness) cannot be explained in terms of computation.

The idea that the mind can be represented by an algorithm or algorithms, and that a computer running that algorithm would itself have a mind, is known as ‘strong’ AI. Weak AI, on the other hand, holds that while computers may appear to be “thinking”, they are not in fact conscious in the way that a human is.

The argument most often cited against strong AI is the thought experiment by John Searle called the Chinese Room:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle’s argument is that there is more to ‘thinking’ than digital computation. Computers are programmed to manipulate abstract symbols. The symbolic representations manipulated by a program are said to be ‘propositional’, i.e. they stand for things in a formal, arbitrary language that bears no inherent resemblance to the things it represents. A program consists purely of syntax – the rules governing how the components of the language (the symbols) may be manipulated. According to Searle’s argument, with which I fully agree, the symbols have no meaning for the computer, which therefore has no understanding of the procedures it performs. It is not, therefore, ‘thinking’.
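To make the syntax-only point concrete, here is a toy sketch of the kind of program Searle has in mind. The rule table and the replies are invented for illustration (and are nothing like the thought experiment in scale): the program returns fluent-looking Chinese answers by pure string matching, and nothing in it could plausibly be said to understand the symbols it shuffles.

```python
# A toy "Chinese Room": answers are produced by purely syntactic lookup.
# The rule table is hypothetical and tiny; the point is that the program
# attaches no meaning whatsoever to the symbols it manipulates.

RULES = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会。",        # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output symbols the rule book prescribes for this input.

    The lookup is pure syntax: match a string of characters, copy out the
    associated string. There are no semantics anywhere in the process.
    """
    return RULES.get(input_symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent-looking reply, with zero comprehension
```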

And why is scaling up to human functionality via the integration of millions of such mechanisms, each devoted to different lower-level tasks, impossible?

Thinkers such as Marvin Minsky and Ray Kurzweil argue that increases in the computational capacity of microprocessors, decreases in their size, and so on will in the near future enable the building of machines that rival or even overtake the information-processing ability of the human brain. And quantum computers could, in theory, use linear superposition to perform enormous numbers of calculations simultaneously. But such a machine would still be a collection of algorithms which manipulate symbols without having any understanding of those symbols.

So can the black starfish think? The starfish can be thought of as a mechanical and computational model of two interacting neural circuits – one motor, the other sensory. The ‘mind’ of the starfish consists of two algorithms, one representing a sensory circuit and the other a motor circuit. Each algorithm has 250,000 preprogrammed simulations consisting of all the possible movements that can be generated by its limbs. But does this constitute a mind, and can the execution of the two algorithms be considered thinking? The robot is “aware” of its own movements – does that make it conscious? What is the minimum level of awareness something needs before we can say it is “conscious”? The starfish is “self-aware”, but what do we mean when we say that we are “self-aware”? This is another problematic question.

The starfish has been programmed to perform the specific task of producing internal models of its own movements. The robot continuously generates internal representations of the topological arrangement of its own parts, with which it can establish how to adapt its gait to best compensate for the removal of, or damage to, a limb. The removal of part of a limb is a change in the robot’s morphology. Because internal model generation is an active, ongoing process, this morphological change is incorporated into the internal model which is thus modified accordingly.

The models, then, are composed from a preprogrammed representation of every possible arrangement of all the robot’s movable parts. The gaits used by the starfish for locomotion are not themselves preprogrammed, but are generated by the algorithms which the robot continuously executes. The robot uses predictive forward models: the continuously executed procedures model, test and predict different limb arrangements in order to generate alternative gaits that compensate for an injury. Robots without internal models can also adjust their gait to compensate for damage, but they do so by trial and error. The starfish, on the other hand, begins adjusting its movements to adopt the best possible gait as soon as the change is incorporated into its internal model.
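As a rough illustration of the difference between predictive forward modelling and trial and error, here is a much-simplified sketch. The morphology representation, gait encoding and scoring function below are invented for the example – the real robot learns its self-model from actuation-sensation data rather than being handed one – but the key point carries over: candidate gaits are scored against the internal model, not by physically trying each one.

```python
import random

# Simplified sketch: select a gait by consulting an internal forward model
# instead of physically testing each candidate. All numbers and structures
# here are hypothetical.

def predicted_displacement(gait, working_limbs):
    """Forward model: predict how far a gait would move the robot, given
    which limbs the internal model currently believes are intact."""
    return sum(stroke for limb, stroke in gait.items() if limb in working_limbs)

def best_gait(candidate_gaits, working_limbs):
    """Score every candidate against the internal model (no trial and error)
    and return the gait predicted to travel furthest."""
    return max(candidate_gaits, key=lambda g: predicted_displacement(g, working_limbs))

# Four limbs; each candidate gait commands a stroke length per limb.
limbs = {"front_left", "front_right", "rear_left", "rear_right"}
candidates = [{limb: round(random.uniform(0.0, 1.0), 2) for limb in limbs}
              for _ in range(5)]

print("Gait before damage:", best_gait(candidates, limbs))

# Damage to a limb: the internal model is updated, and the same candidates
# are immediately re-scored against the revised model.
damaged = limbs - {"front_right"}
print("Gait after damage: ", best_gait(candidates, damaged))
```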

We could say that the starfish has a simple form of ‘intelligence’. It is capable of solving a single problem – how to generate a gait that compensates for incurred injuries. As such it displays predictive forward modelling, a prerequisite of complex cognitive functions. But one characteristic of ‘real’ intelligence is its versatility, which allows those who possess it to apply existing knowledge to novel situations that have never been encountered before. The ‘intelligence’ of the starfish is strictly limited to solving one problem.

The question of whether or not machines can think has puzzled philosophers for many years. Alan Turing, one of the founders of computer science, once said that the question is “too meaningless to deserve discussion”, as it invariably leads to sterile debates about semantics. What exactly do we mean by “thinking”, “mind” or “consciousness”? These are three of the vaguest concepts in neuroscience – and in philosophy – and we have yet to reach a satisfactory definition of any of them. For Minsky, the way around this is to avoid a definition altogether: “Why fall into the trap of feeling that we must define old words like ‘mean’ and ‘understand’? It’s great when words help us get good ideas, but not when they confuse us”. Needless to say, I find this approach unsatisfactory.

Instead of asking whether machines can think, Turing devised another test for machine ‘intelligence’: if a machine can hold a conversation with a person in which its responses are indistinguishable from those of a human, then that machine can be said to be ‘intelligent’. This is a more functional approach to the question, and it is echoed by Minsky, who defines AI as “the science of making machines do things that would require intelligence if done by men”. So, if a machine has the outward appearance of thinking, i.e. can successfully solve a problem, then it must be thinking. But that brings us back to the weak AI argument – it does not necessarily follow that a machine is ‘thinking’ just because it appears to be. So even if a machine could successfully pass the Turing test – and, so far, no machine has – that does not mean it is ‘thinking’.

The black starfish is a brilliantly designed machine, and the algorithms it uses to generate its internal models can be applied in a wide variety of situations. But can it think? My answer to that question has to be “no”.

References:

Bongard, J., Zykov, V., & Lipson, H. (2006). Resilient machines through continuous self-modeling. Science 314: 1118-1121.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3: 417-457.


Is this a sentient machine?

Josh Bongard, an assistant professor in the University of Vermont’s Department of Computer Science, and Victor Zykov and Hod Lipson of Cornell University’s Computational Synthesis Laboratory have designed and built the Black Starfish, a four-legged robot which “automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage.”

[Image: the black starfish robot walking]

The “self-aware” robot can track its own movements via tilt and angle sensors in the joints of its limbs. It doesn’t “know” how it has been built; rather, it generates an internal model of itself from the information it sends to and receives from those sensors. The data received are fed into an optimization program called a genetic algorithm (a “digital version of natural selection”). Another genetic algorithm generates models of possible alternative gaits; when one of its limbs is damaged, the robot uses these models to simulate the movements of alternative gaits, rather than physically testing out each one, to determine which gait best restores movement after the incurred damage.
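For readers unfamiliar with the term, here is a generic genetic-algorithm skeleton in the “digital natural selection” sense used above. The genome encoding and the fitness function are placeholders, not those used by the starfish, where (roughly) one such search scores candidate self-models against recorded actuation-sensation data and another scores candidate gaits against the best self-model.

```python
import random

# Generic genetic algorithm: evolve a population of candidate solutions by
# repeatedly selecting the fittest and mutating the survivors.
# The genome encoding and fitness function below are placeholders.

def evolve(fitness, genome_length=8, population_size=20, generations=50):
    population = [[random.random() for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        # Variation: refill the population with mutated copies of the survivors.
        children = [[gene + random.gauss(0, 0.1) for gene in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Placeholder fitness: prefer genomes whose genes sum to something close to 4.
best = evolve(lambda genome: -abs(sum(genome) - 4.0))
print(round(sum(best), 3))  # should land near 4.0
```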

Here’s the abstract from the research group’s most recent paper, which is published in the current issue of Science:

Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals.

I am of the opinion that computers, or any other type of machine, will never be capable of “thinking”. Yet some artificial intelligence theorists hold that what we call “the mind” is little more than an algorithm (a set of mathematical procedures) carried out by the brain, and that, if this algorithm can be deduced, it can then be programmed into a computer, which would itself then have a mind and the ability to think.

[Part 2]