Is this a sentient machine? Part 2

This should have been posted quite a while ago, as part 1 was written back in November. I concluded the first post rather vaguely, as follows:

I am of the opinion that computers, or any other type of machine, will never be capable of “thinking”. Yet some artificial intelligence theorists hold that what we call “the mind” is little more than an algorithm (a set of mathematical procedures) carried out by the brain, and that, if this algorithm can be deduced, it can then be programmed into a computer, which would itself then have a mind and the ability to think.

On the day part 1 was posted, Tad left this comment, which compelled me to elaborate on my conclusion to part 1:

I’m unclear about the basis for your skepticism about AI. Is your claim just the contingent claim that no mechanism made up of ‘artificial’ as opposed to organic materials will ever duplicate the functionality of an uncontroversially sentient system like a human being? This may be true, but it does not touch the central point of AI: that any artificial system that did match human functionality across domains would count as intelligent. Do you wish to deny this claim? On what grounds? What are we other than unfathomably complex, integrated systems of simple mechanisms such as the one in the video? What reason is there to think that the neural systems responsible for our lowest-level locomotor behavior, including graceful recovery from damage, aren’t relevantly similar to the mechanism in this robot? And why is scaling up to human functionality via the integration of millions of such mechanisms, each devoted to different lower-level tasks, impossible?

Tad’s argument is a strong one, and contains two main points, which I’ll tackle one at a time. Trying to answer the question “Can machines think?” outright is futile, however, so below I merely set out the main arguments for and against machine intelligence.

Is your claim just the contingent claim that no mechanism made up of ‘artificial’ as opposed to organic materials will ever duplicate the functionality of an uncontroversially sentient system like a human being?

No. I regard the mind as an emergent property of the brain. Many neuroscientists think of the brain as a kind of biological computer (this is known as the computational theory of mind). According to this view, ‘mind’, or ‘thinking’, can be regarded as a series of multiple, parallel computations, and it is irrelevant whether those computations are performed by a biological machine or a mechanical one. It should be pointed out, however, that many believe that certain functions of the brain (especially those pertaining to consciousness) cannot be explained in terms of computation.

The idea that the mind can be represented by an algorithm or algorithms, and that a computer running that algorithm would itself have a mind, is known as ‘strong’ AI. Weak AI, on the other hand, holds that while computers may appear to be “thinking”, they are not, in fact, conscious in the way that a human is.

The argument most often cited against strong AI is the thought experiment by John Searle called the Chinese Room:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle’s argument is that there is more to ‘thinking’ than digital computation. Computers are programmed to manipulate abstract symbols. The symbolic representations manipulated by a program are said to be ‘propositional’: they stand for something in a formal, arbitrary language, yet neither the symbols nor the language bear any inherent resemblance to the things they represent. A program consists purely of syntax – the rules governing how the components of the language (the symbols) can be manipulated. According to Searle’s argument, with which I fully agree, the symbols mean nothing to the computer, so computers have no understanding of the procedures they are performing. They are not, therefore, ‘thinking’.
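To make the syntax-without-semantics point concrete, here is a toy sketch (in Python) of the kind of thing the room is doing. The rulebook, the phrases and the chinese_room function are all invented for illustration (a real rulebook capable of passing the Turing Test would be astronomically larger), but the principle is the same: symbols in, symbols out, and nothing anywhere that represents what the symbols mean.

```python
# A toy "Chinese Room": output symbols are produced by matching input symbols
# against a rulebook. The rulebook below is a made-up, miniature stand-in;
# nothing in the code represents the meaning of any symbol.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # rules pair symbol strings with symbol strings
    "你会思考吗？": "这是一个好问题。",
}

def chinese_room(input_symbols: str) -> str:
    """Produce output purely by looking up the shape of the input symbols."""
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # default symbol string

if __name__ == "__main__":
    # The operator (or the program) follows the rules correctly
    # without understanding a word of Chinese.
    print(chinese_room("你好吗？"))
```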

And why is scaling up to human functionality via the integration of millions of such mechanisms, each devoted to different lower-level tasks, impossible?

Thinkers such as Marvin Minsky and Ray Kurzweil argue that increases in the computational capacity of microprocessors, together with decreases in their size, will in the near future enable the building of machines that rival or even overtake the information-processing ability of the human brain. And quantum computers can, in theory, use linear superposition to perform vast numbers of calculations simultaneously. But such a machine would still be a collection of algorithms which manipulate symbols without having any understanding of those symbols.

So can the black starfish think? The starfish can be thought of as a mechanical and computational model of two interacting neural circuits – one motor, the other sensory. The ‘mind’ of the starfish consists of two algorithms, one representing a sensory circuit and the other a motor circuit. Each algorithm has 250,000 preprogrammed simulations, covering all the possible movements that can be generated by the robot’s limbs. But does this constitute a mind, and can the execution of the two algorithms be considered thinking? The robot is “aware” of its own movements – does that make it conscious? How much does something need to be “aware” of before we can say that it is “conscious”? The starfish is “self-aware”; but what do we mean when we say that we are “self-aware”? This is another problematic question.

The starfish has been programmed to perform one specific task: producing internal models of its own movements. The robot continuously generates internal representations of the topological arrangement of its own parts, from which it can establish how to adapt its gait to best compensate for the removal of, or damage to, a limb. The removal of part of a limb is a change in the robot’s morphology, and because internal model generation is an active, ongoing process, that change is incorporated into the internal model as it occurs.

The models, then, are composed from a preprogrammed representation of every possible arrangement of the robot’s movable parts. The gaits used by the starfish for locomotion are not themselves preprogrammed, but are generated by the algorithms which the robot continuously executes. The robot uses predictive forward models: mathematical procedures, executed continuously, that model, test and predict different limb arrangements in order to generate alternative gaits that compensate for an injury. Robots without internal models can also adjust their gait to compensate for damage, but they do so by trial and error. The starfish, on the other hand, begins adjusting its movements to adopt the best possible gait as soon as the change is incorporated into its internal model.
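To illustrate the general idea of a model, test and predict loop, here is a rough Python sketch. It is not the actual algorithm of Bongard et al. (2006), and every name, number and piece of ‘physics’ in it is invented; the point is only the loop itself: maintain candidate internal models of the body, keep the one whose predictions best match what the sensors report, and choose the gait that the current model predicts will perform best.

```python
import random

# A toy model-test-predict loop, loosely inspired by the starfish robot.
# The morphologies, gaits and "physics" below are invented for illustration;
# this is not the algorithm of Bongard et al. (2006).

TRUE_LIMB_LENGTHS = [1.0, 1.0, 1.0, 0.4]   # the fourth limb is damaged

def observe(gait):
    """Stand-in for sensor feedback: distance actually travelled with a gait."""
    return sum(l * a for l, a in zip(TRUE_LIMB_LENGTHS, gait))

def predict(model, gait):
    """Forward model: the distance a hypothesised morphology predicts."""
    return sum(l * a for l, a in zip(model, gait))

def update_model(models, gait, observed):
    """Keep the candidate morphology whose prediction best matches reality."""
    return min(models, key=lambda m: abs(predict(m, gait) - observed))

def best_gait(model, gaits):
    """Choose the gait the current internal model predicts will work best."""
    return max(gaits, key=lambda g: predict(model, g))

candidate_models = [[1.0, 1.0, 1.0, length] for length in (0.2, 0.4, 0.6, 0.8, 1.0)]
candidate_gaits = [[random.uniform(0.0, 1.0) for _ in range(4)] for _ in range(50)]

model = candidate_models[-1]               # start by assuming an undamaged body
for _ in range(10):                        # the continuous self-modelling loop
    gait = best_gait(model, candidate_gaits)
    model = update_model(candidate_models, gait, observe(gait))

print("Inferred limb lengths:", model)
print("Chosen compensatory gait:", best_gait(model, candidate_gaits))
```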

We could say that the starfish has a simple form of ‘intelligence’. It is capable of solving a single problem – how to generate a gait that compensates for incurred injuries – and in doing so it displays predictive forward modelling, a prerequisite of complex cognitive functions. But one characteristic of ‘real’ intelligence is its versatility, which allows those who possess it to apply existing knowledge to novel situations that have never been encountered before. The ‘intelligence’ of the starfish is strictly limited to solving one problem.

The question of whether or not machines can think has puzzled philosophers for many years. Alan Turing, one of the founding figures of computer science, considered the question “too meaningless to deserve discussion”, as it invariably leads to sterile debates about semantics. What exactly do we mean by “thinking”, “mind” or “consciousness”? These are three of the vaguest concepts in neuroscience – and philosophy – and we have yet to reach a satisfactory definition of any of them. For Minsky, the way around this is to avoid a definition altogether: “Why fall into the trap of feeling that we must define old words like ‘mean’ and ‘understand’? It’s great when words help us get good ideas, but not when they confuse us”. Needless to say, I find this approach unsatisfactory.

Instead of asking whether machines can think, Turing devised another test for machine ‘intelligence’: if a machine can hold a conversation with a person in which its responses are indistinguishable from those of a human, then that machine can be said to be ‘intelligent’. This is a more functional approach to the question, and it is echoed by Minsky, who defines AI as “the science of making machines do things that would require intelligence if done by men”. So if a machine has the outward appearance of thinking, i.e. can successfully solve a problem, then it must be thinking. But that brings us back to the weak AI argument – it does not necessarily follow that a machine is ‘thinking’ just because it appears to be. So even if a machine could successfully pass the Turing test – and, so far, no machine has – it would not mean that it is ‘thinking’.
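The purely behavioural nature of the test can be made explicit with a trivial sketch. The respondents below are hypothetical stand-ins rather than real conversational agents; the point is only that the judge sees nothing but transcripts, so anything that produces indistinguishable transcripts passes, regardless of what is (or is not) going on inside.

```python
import random

# A minimal imitation-game protocol: the judge sees only the transcripts and
# must guess which respondent is the machine. Both respondents are
# hypothetical stand-ins, invented for illustration.

def human(question: str) -> str:
    return "I'd have to think about that."

def machine(question: str) -> str:
    return "I'd have to think about that."   # indistinguishable by design

def judge(questions, respondent_a, respondent_b) -> str:
    """Return a verdict ("A" or "B" is the machine) from the transcripts alone."""
    transcript_a = [respondent_a(q) for q in questions]
    transcript_b = [respondent_b(q) for q in questions]
    if transcript_a == transcript_b:
        # Nothing distinguishes them: the judge can only guess,
        # which is exactly what it means for the machine to pass.
        return random.choice(["A", "B"])
    return "A"   # any behavioural rule for telling them apart would go here

print("The judge's guess for the machine:", judge(["Can you think?"], human, machine))
```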

The black starfish is a brilliantly designed machine, and the algorithms it uses to generate its internal models can be applied in a wide variety of situations. But can it think? My answer to that question has to be “no”.

References:

Bongard, J., et al. (2006). Resilient machines through continuous self-modeling. Science 314: 1118-1121.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3: 417-457.

7 thoughts on “Is this a sentient machine? Part 2”

  1. I agree and would add that we simply don’t know that even the most complex simulations of ‘thinking’ that would rely on inorganic materials could ever ‘animate’ some form of consciousness in the way that ‘organic’ materials can animate consciousness. Though not a perfect analogy, the task of constructing a ‘thinking’ or conscious machine would be akin to constructing a mechanical calculator — maybe a super-abacus — that is sufficiently complex that we might say it is essentially the same thing as an electronic calculator or a microprocessor. The materials themselves could never do the job, but even when performing a simple arithmetic operation such as addition, the activity of the abacus is not the same as an electronic calculator or a microprocessor performing the same operation. Only certain aspects of the input and output are similar.

  2. The precise reason that the Chinese Room argument is so compelling to so many is a mystery to me. If it weren’t such an abstract argument, I’d be inclined to believe that how convincing it is to you is dispositional. I’d rather not go over it too much, but I will ask this: are you aware of the vast scale mismatch between Searle’s English speaker and the output of the room? The rules the speaker refers to would have to be so vast and voluminous that they would fill many libraries, and the speaker would have to work at millions or trillions of times the speed of a real human in order for the room to be able to hold a conversation in real time.

    But if you do hold the computational theory of mind, then surely you agree that with sufficiently advanced engineering it’s possible to construct androids who are indistinguishable from humans? In that case, I don’t see how you could say that such machines cannot “think”, in any usual sense of the term. Whether they have qualia is another matter, and the one issue I still have Chalmerian doubts on. But thinking isn’t consciousness, and consciousness isn’t self-awareness, and self-awareness isn’t qualia. Maybe the androids would be philosophical zombies, but I don’t see how you could say that they don’t think, when you couldn’t even tell them from a human. And they would have just as much self-awareness as any human. So maybe, if they don’t have real qualia, they wouldn’t *quite* make it to being conscious. But they’d sure be close.

  3. I assume that no-one is convinced by Dan Dennett’s idea that this elusive consciousness which we have and that machines can’t have is illusory? That what we call consciousness is just “the view from within” the functioning of a complex machine? I personally find Dennett’s dismissal of “qualia” very persuasive. I’d be interested to hear why Dennett’s arguments are not persuasive for people here.

  4. Why I don’t find dismissals of qualia very persuasive: try to explain to someone who has never tasted anything sweet what sweet is like. Explain all you want, I’m sure the person will taste something sweet and still find something new in it and will not be able to say: I already knew how this would taste, tasting it has added nothing to my repertoire of experiences.

    BTW, all this reminds me of what I think (IMHO) is the best explanation for consciousness yet: the one proposed by Maturana (“Biology of Cognition”). Even that explanation, which seems to me to be correct, does not explain qualia. In fact, reading Maturana’s stuff makes me think qualia are in principle not explainable…

  5. I also found Dennett’s handling of the problem convincing.

    I am surprised that there are people that are convinced by The Chinese Room analogy. Over the years I have come across many attacks on the analogy (a new one from Pdf23ds). Many of them are given with an “I can’t believe I have to waste time on this crap” vibe. Sharing this attitude, check out http://en.wikipedia.org/wiki/Chinese_room

    I also liked Minsky’s notion of an “A brain” and a “B brain”. I think this goes a long way to explaining the “hard problem” of consciousness and also the black starfish.

  6. I know this post is old, but I had to comment.

    You have either not read Turing’s paper or you have misunderstood it. He does not say that a machine that passed the Turing test should be considered to “think”. What he’s saying is that we don’t know that anyone is capable of thought; all we have to go on is their words and actions. Therefore every time we interact with someone we are performing a Turing test. If they pass the test (as nearly all humans do), we consider them to be thinking. So, Turing asks, why should the same reasoning not be applied to machines?

    Consider a higher-stakes Turing test: you interact with an unseen partner for several minutes, and then you have to say whether you believe you have been interacting with a machine or not. If you say it’s a machine, but it’s really a human, the human will be killed. There’s no penalty for saying it’s a human if it’s really a machine.

    Now if there’s any doubt at all, you will say it’s a human, because the cost of saying it’s a machine is so high. And that’s the point: the cost of treating something that’s conscious as if it isn’t is much higher than the cost of treating something that’s not conscious as if it is. Since we can never really know whether something is conscious or not, if it exhibits any signs of consciousness at all, we should treat it as if it’s conscious.

    As for the Chinese room, it is based on a fundamental misunderstanding of the Turing Test paper. Norvig and Russell demolish Searle’s arguments in their textbook on AI.

  7. The flaw in your argument is that you assume the computer and a brain are made of different components. If you’re talking about cells and logic gates then that is to some extent true; however, if we reduce it down to the level of particles it would be possible, with a quantum supercomputer, to theoretically model the entire thing particle for particle.

    In this respect the entire function would be identical to that of a human brain, and so clearly this model is both sentient and a computer.

    Therefore a computer can be sentient in this case, which suggests that your argument is flawed in that either people are not sentient or a machine can be sentient.

    Quantum uncertainty effects shouldn’t have a negative effect on predicting the particle interactions, as we are not trying to emulate a human brain after its creation but one potential evolution of that human brain in time from when we start the model, and so uncertainty could be built in using a random number generator.
