I recently revisited the case for artificial intelligence and the arguments laid out by John R. Searle for his claim that so-called strong artificial intelligence cannot arise from a computer program. Using Searle's Chinese Room (CR) argument, I examine whether his contention that the mind is not a computer program, and thus that the computational theory of the mind is false, holds in the context of what constitutes artificial intelligence (AI). To do this I will establish the agreed basis of artificial intelligence, outline the Turing Test for evaluating machine intelligence, describe the CR thought experiment, and then evaluate Searle's principal claim: that the CR experiment is a merely syntactic process requiring no semantic understanding, and that it therefore shows strong AI to be false.
In order to evaluate Searle’s CR argument we first need to
have a clear understanding of artificial intelligence and in particular the
form of AI, so-called ‘Strong AI’, at which Searle is directing his thought
experiment. Strong AI is the
philosophical thesis that appropriately programmed computers have minds in
exactly the same sense that we do.
During the 1950s, Alan Turing proposed a simple test to
evaluate whether a machine is making an adequate simulation of the human
mind. The ‘Turing Test’ states that ‘if
a computer can pass for a human in online chat, we should grant that it is
intelligent’.
This leads us to divide AI into four specific categories:-
AI1 Computers are capable of thought;
AI2 Only computers are capable of thought;
AI3 A machine can think simply as a result
of instantiating a computer program;
AI4 Computer models are useful in the study
of the mind.
Clearly AI4 is the weakest form of AI and is not considered
to be particularly controversial. However, the AI argument builds from AI3
through to AI1 as the claims become stronger. Indeed if AI2 is true then we are
all computers! The combination of AI1 and AI3 suggests that a thinking machine is possible and that all we need to do is write the appropriate program and run it to demonstrate a thinking machine (Wilkinson, 2005, pg 100). Strong AI of this form holds that a suitably programmed computer can understand natural language and actually have other mental attributes similar to those of the humans whose abilities it mimics. For this reason, if the AI3 form of AI can be undermined, the entire strong AI argument collapses. Consequently AI3 is the form of AI that Searle considers 'strong AI' and which his CR argument is intended to show to be false, and hence that the mind is not a computer program.
Searle's CR thought experiment imagines an English speaker who knows no Chinese, locked in a room full of boxes of Chinese symbols (the database) together with a book of instructions for manipulating the symbols (the program). People outside the room send in further Chinese symbols (the data input) which, unknown to the person in the room, are questions in Chinese. By following the instruction book, the man is able to pass out Chinese symbols that answer the questions correctly (the output). Searle contends that the man in the CR has thereby passed the Turing Test for understanding Chinese without understanding a word of Chinese. Put simply, Searle's argument may be summed up as follows:-
Premise 1. If
‘strong AI’ is true, there exists a program for Chinese such that if any
computing system runs that program, that system may be considered to understand
Chinese;
Premise 2. The
program may be operated by anyone, without understanding Chinese;
Conclusion. Therefore, running the program does not suffice for understanding Chinese, and 'strong AI' is false (from premises 1 and 2).
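The logical skeleton of the argument is a simple modus tollens, and it can be made explicit in a few lines of Lean. This is only a sketch of the propositional form; the proposition names are my own labels, not Searle's, and the formalisation compresses "a system runs the program" into a single proposition.

```lean
-- Propositional skeleton of the argument above (labels are mine):
-- p1 : if strong AI is true, then running the program yields understanding
-- p2 : the program can be run without understanding
example (StrongAI Understands : Prop)
    (p1 : StrongAI → Understands) (p2 : ¬Understands) : ¬StrongAI :=
  fun h => p2 (p1 h)   -- assume StrongAI, derive a contradiction
```

Laying the argument out this way makes clear that the conclusion needs both premises: premise 1 links strong AI to understanding, and premise 2 denies that the understanding is present.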
Premise 2 describes exactly the situation in the CR, and so the inevitable conclusion is that running a program does not constitute understanding. Searle's central claim is that the CR experiment shows that syntax cannot give rise to semantics. Syntax in this context refers to the way in which the Chinese symbols are manipulated, as opposed to semantics, which relates to the meaning of the symbols. Essentially the program is a purely syntactic symbol system: it merely manipulates the symbols and entirely lacks semantic properties, having no understanding of their meaning (Wilkinson, 2005, pg 105).
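The purely syntactic character of the process can be sketched in a few lines of code. The following is an illustrative toy of my own, not anything from Searle or Wilkinson: the "instruction book" is just a lookup table, and no step in the program requires knowing what any symbol means.

```python
# A toy "Chinese Room": the instruction book is a lookup table mapping
# input symbol strings to output symbol strings. The entries happen to
# be real Chinese question/answer pairs, but the program never uses
# their meaning -- only their shapes as strings.

RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine"
    "你叫什么名字": "我叫小明",  # "What's your name?" -> "I'm Xiaoming"
}

def chinese_room(input_symbols: str) -> str:
    """Follow the instruction book: match the incoming string and pass
    out the listed response. The manipulation is entirely formal."""
    return RULE_BOOK.get(input_symbols, "对不起")  # default: "sorry"

print(chinese_room("你好吗"))  # 我很好
```

The lookup answers correctly without any semantic grasp, which is exactly the point of the thought experiment; a real conversational program would be vastly larger, but, on Searle's view, no less formal.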
Before I consider the case against Searle's CR experiment, it would be useful to describe the programming process. For a process to be programmable, it must be expressible as an algorithm. For a process to be algorithmic the following must apply:-
1. Every step is specifiable entirely without ambiguity;
2. At every step, there is no ambiguity about what the next step must be: no insight, inspiration or creativity is needed;
3. Provided each step is correctly executed, the procedure will produce the desired result in a finite number of steps.
All computers, whether based on traditional step-by-step von
Neumann architecture or parallel processing, are programmed in an algorithmic
manner (Wilkinson, 2005, pgs 100 - 101).
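As a concrete illustration of the three criteria (my own example, not one from the source text), consider Euclid's algorithm for the greatest common divisor:

```python
# Euclid's algorithm as an example of an algorithmic process:
# every step is fully specified, the next step is never in doubt,
# and the procedure halts with the desired result in finitely many steps.

def gcd(a: int, b: int) -> int:
    while b != 0:          # criterion 2: the next step is always determined
        a, b = b, a % b    # criterion 1: each step specified without ambiguity
    return a               # criterion 3: terminates with the result

print(gcd(48, 18))  # 6
```

No insight or creativity is needed at any point; the procedure could be carried out by a clerk with no idea what a divisor is, which is precisely what makes it programmable.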
The key issue is the ability to 'frame' the question in an algorithmic format that the computer can work on. Searle considers a computer program based on such an algorithmic formulation to be purely syntactic and therefore incapable of semantics; hence computer programs cannot produce minds.
Searle’s contention may be shown as follows:-
Premise 1. Programs
are purely formal algorithmic processes (syntactic);
Premise 2. Human
minds have mental contents (semantics);
Premise 3. Syntax
by itself is neither constitutive of, nor sufficient for, semantic content;
Conclusion. Therefore,
programs by themselves are not constitutive or sufficient for minds.
The CR thought experiment is designed to establish Premise 3.
There are a number of criticisms of Searle's CR argument against strong AI; however, they are essentially variants on the so-called 'System Reply'. The counter-argument is that no single element should be considered the 'mind' of the CR: certainly the man at the centre running the process does not understand the Chinese language, but it is the whole system operating together that develops an understanding of Chinese and hence a semantic grasp of the Chinese symbols. Searle replies that even if the man in the room were to memorise the instruction book and the database of Chinese characters, and so become the entire system, he would still be unable to attach any meaning to the formal symbols.
Others criticise the fact that the man in the room is
deprived of any sensorimotor connection to the world and that these are vital
missing factors. Searle counters that
the Chinese characters could be the outputs of a television camera and the
outgoing symbols be the commands to a robotic arm. He now has a connection, but
still no understanding.
Others suggest that the CR does not model the way the brain works as a complex neural network. Searle counters with the 'Chinese Gym' thought experiment, whereby millions of people in a huge gym are connected to one another by walkie-talkie radios, passing instructions to one another and acting as the individual neurons of a neural network. Yet the same issue regarding semantics applies in this case as well: the Chinese Gym understands Chinese no more than the CR does.
Daniel Dennett’s challenge to the CR is based on the issue
of complexity. He contends that Searle’s
experiment is far too simple and that this is the reason for no apparent
understanding being present within the system.
Dennett argues that increasing complexity produces unexpected results and radical changes in behaviour, known as emergent properties, which only occur when a system is sufficiently complex. Such radical changes in the behaviour or properties of complex systems are common in the natural world (Wilkinson, 2007, pgs 113-114). Consequently, Dennett argues that any system capable of conversing in Chinese would most likely be far more complex than the CR experiment, and it would be difficult to say confidently that such a system did not understand Chinese. Searle contends that running increasingly complex programs amounts to no more than running more complex algorithms, which are just as syntactic as before and no more capable of semantics, however complex they become. Whilst I agree that a complex computer program is just an assembly of multiple simple routines, increasing complexity often leads to increased computing capability and can result in unexpected capabilities and even outcomes.
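Dennett's point about emergence can be illustrated (my own example, not one Dennett uses) with an elementary cellular automaton. Rule 110 applies a trivial, fully syntactic local rule to each cell, yet the global pattern it generates is complex enough to be Turing-complete:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours, via a fixed 8-entry table -- a purely syntactic rule whose
# global behaviour is famously complex (a stock example of emergence).

RULE_110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
            (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def rule110_step(cells):
    """Apply Rule 110 once to a row of 0/1 cells (cells beyond the
    edges are treated as 0)."""
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i-1], padded[i], padded[i+1])]
            for i in range(1, len(padded) - 1)]

row = [0] * 30 + [1]
for _ in range(8):                       # print a few generations
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```

None of this shows that complexity yields understanding, of course; it only shows that behaviour qualitatively unlike the individual rules can appear once a system is large enough, which is the intuition Dennett's reply trades on.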
In the same vein, the challenge posed by Patricia and Paul
Churchland suggests that the issue is one of interpretation and speed, in that
the CR experiment is operating at a very much slower speed than the brain
operates, and as such 'understanding' cannot be detected. They use an analogy based on Maxwell's theory that light is made up of electromagnetic waves. In their thought experiment, a man stands in
a darkened room and waves a magnet up and down.
Although light is indeed made up of electro-magnetic waves, no light
would be detected. The man waving the
magnet does not disprove Maxwell’s theory that light consists of
electromagnetic waves. The missing component
is speed. The Churchlands' thought experiment slows the waves down to a range at which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, the Churchlands claim, Searle has slowed the mental process down to a range at which we humans no longer think of it as 'understanding'. If we were to meet the man from the CR, who seemed to converse intelligently in Chinese but was really deploying millions of memorised rules in a fraction of a second, it is not so clear that we would deny he understood Chinese (Pinker, 1997, pgs 94-95).
In conclusion, on the narrow interpretation of Searle's claim that the mind is not a computer program, I would accept his argument as valid: the computer program itself does not 'understand' as we interpret that word. I found Searle's dismissal of the counter-arguments to be reasonable, with the exception of Dennett's and the Churchlands' arguments. The Churchlands' thought experiment is based on a very simple, easily understood premise, and Dennett's emergent-properties argument is compelling. Speed and complexity could be key factors in strong AI. What is undoubtedly true is that the 'Turing Test' is no longer a sufficiently subtle evaluation of artificial intelligence.
Computing has advanced to a stage where it is entirely possible to converse with a computer and believe you are conversing with a human being.
The key issue in achieving a true computer ‘mind’ comes back to the
framing issue. This relates to the
ability to program algorithmically beyond the essentially mathematical and
logical tasks to consider such areas as intuition, belief or even love. These functions of the mind are frequently
considered irrational and illogical, but they are what make us human. The
ability or functionality to program ‘illogically’ is probably beyond
algorithmic programming of digital computers as we know them today. Equally Searle’s argument hinges on our
understanding of the semantics of language.
What do we mean by 'understanding' or 'meaning'? Or, finally, is it a limitation of English as a language that it cannot adequately express the differences and similarities between the mind and artificial intelligence? Regardless, Searle's valid dismissal of
strong AI will not slow the pace of computing development and the likely move
from the physical limitations of silicon-based computer architectures to
bio-computers which utilise biologically derived molecules to perform
computational processes. The future of
‘strong AI’ probably lies in the rapid development, dare I say growth, of such
bio-computers!
References
Wilkinson, R. (2007) 'Chapter 3: Monism (Conclusion) and Artificial Intelligence (Beginning)', in Minds and Bodies, Open University Press.
Pinker, S. (1997) 'Chapter 2: Thinking Machines', in How the Mind Works, W. W. Norton & Company, London.
Copyright© John Tomany 2012