Technology Review, July/August 2007
Machine"---seemed at their best to be thinking. Also,
computers are the characteristic technology of the age.
It is only natural to ask how far we can push them.
Then there's a more fundamental reason why AI
cares specifically about digital computers: computa-
tion underlies today's most widely accepted view of
mind. (The leading technology of the day is often
pressed into service as a source of ideas.)
The ideas of the philosopher Jerry Fodor make
him neither strictly cognitivist nor anticognitivist. In
The Mind Doesn't Work That Way (2000), he dis-
cusses what he calls the "New Synthesis"---a broadly
accepted view of the mind that places AI and cogni-
tivism against a biological and Darwinian backdrop.
"The key idea of New Synthesis psychology," writes
Fodor, "is that cognitive processes are computational.
... A computation, according to this understanding, is
a formal operation on syntactically structured repre-
sentations." That is, thought processes depend on the
form, not the meaning, of the items they work on.
In other words, the mind is like a factory machine
in a 1940s cartoon, which might grab a metal plate and
drill two holes in it, flip it over and drill three more,
flip it sideways and glue on a label, spin it around five
times, and shoot it onto a stack. The machine doesn't
"know" what it's doing. Neither does the mind.
Likewise computers. A computer can add numbers
but has no idea what "add" means, what a "number"
is, or what "arithmetic" is for. Its actions are based on
shapes, not meanings. According to the New Synthe-
sis, writes Fodor, "the mind is a computer."
But if so, then a computer can be a mind, can be
a conscious mind---if we supply the right software.
Here's where the trouble starts. Consciousness is nec-
essarily subjective: you alone are aware of the sights,
sounds, feels, smells, and tastes that flash past "inside
your head." This subjectivity of mind has an important
consequence: there is no objective way to tell whether
some entity is conscious. We can only guess, not test.
Granted, we know our fellow humans are conscious;
but how? Not by testing them! You know the person
next to you is conscious because he is human. You're
human, and you're conscious---which moreover seems
fundamental to your humanness. Since your neighbor
is also human, he must be conscious too.
So how will we know whether a computer run-
ning fancy AI software is conscious? Only by trying
to imagine what it's like to be that computer; we must
try to see inside its head.
Which is clearly impossible. For one thing, it
doesn't have a head. But a thought experiment may
give us a useful way to address the problem. The
"Chinese Room" argument, proposed in 1980 by
John Searle, a philosophy professor at the University
of California, Berkeley, is intended to show that no
computer running software could possibly manifest
understanding or be conscious. It has been contro-
versial since it first appeared. I believe that Searle's
argument is absolutely right---though more elaborate
and oblique than necessary.
Searle asks us to imagine a program that can pass
a Chinese Turing test---and is accordingly fluent in
Chinese. Now, someone who knows English but no
Chinese, such as Searle himself, is shut up in a room.
He takes the Chinese-understanding software with
him; he can execute it by hand, if he likes.
Imagine "conversing" with this room by sliding
questions under the door; the room returns written
answers. It seems equally fluent in English and Chi-
nese. But actually, there is no understanding of Chinese
inside the room. Searle handles English questions by
relying on his knowledge of English, but to deal with
Chinese, he executes an elaborate set of simple instruc-
tions mechanically. We conclude that to behave as if
you understand Chinese doesn't mean you do.
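The room's rulebook can be caricatured as a lookup table. This toy is far simpler than the program Searle imagines (which must pass a Turing test), and the entries are invented for illustration; but it makes his point concrete: every step is shape-matching and copying, and no step requires knowing Chinese.

```python
# A toy rulebook for the room: map the shapes of incoming characters
# to the shapes of outgoing ones. The phrases are invented examples.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",      # "How's the weather?" -> "It's nice."
}

def room(question: str) -> str:
    """Compare the question, character by character, against the
    rulebook and copy out the paired answer. Pure symbol shuffling."""
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."
```

The person executing these instructions by hand could produce perfectly apt replies while having no idea that the first entry asks after anyone's health.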
But we don't need complex thought experiments
to conclude that a conscious computer is ridiculously
unlikely. We just need to tackle this question: What is it
like to be a computer running a complex AI program?
Well, what does a computer do? It executes
"machine instr uctions"---low-level operations like
arithmetic (add two numbers), comparisons (which
number is larger?), "branches" (if an addition yields
zero, continue at instruction 200), data movement
(transfer a number from one place to another in mem-
ory), and so on. Everything computers accomplish is
built out of these primitive instructions.
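The repertoire just listed---arithmetic, branches, data movement, and the fetch-execute cycle that drives them---can be sketched as a toy interpreter. The instruction names and the little program are invented; real instruction sets differ in detail, not in kind.

```python
# A toy machine: grab an instruction, execute it, repeat until HALT.
# Supported operations mirror the primitives named in the text.
def run(program, memory):
    pc = 0  # program counter: which instruction to fetch next
    while True:
        op, *args = program[pc]
        if op == "ADD":      # arithmetic: memory[a] += memory[b]
            a, b = args
            memory[a] += memory[b]
        elif op == "BRZ":    # branch: if memory[a] == 0, jump to target
            a, target = args
            if memory[a] == 0:
                pc = target
                continue
        elif op == "MOV":    # data movement: copy cell b into cell a
            a, b = args
            memory[a] = memory[b]
        elif op == "HALT":
            return memory
        pc += 1

# Add cell 1 into cell 0, then stop.
print(run([("ADD", 0, 1), ("HALT",)], [2, 3]))  # [5, 3]
```

Nothing in the loop distinguishes an AI program from a payroll program: both are just sequences of these tuples, fetched and executed one after another.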
So what is it like to be a computer running a com-
plex AI program? Exactly like being a computer run-
ning any other kind of program.
Computers don't know or care what instructions
they are executing. They deal with outward forms, not
meanings. Switching applications changes the output,
but those changes have meaning only to humans. Con-
sciousness, however, doesn't depend on how anyone
else interprets your actions; it depends on what you
yourself are aware of. And the computer is merely a
machine doing what it's supposed to do---like a clock
ticking, an electric motor spinning, an oven baking.
The oven doesn't care what it's baking, or the com-
puter what it's computing.
The computer's routine never varies: grab an
instruction from memory and execute it; repeat until
something makes you stop.