Technology Review, July/August 2007: Essay
Of course, we can't know literally what it's like to be a
computer executing a long sequence of instructions. But
we know what it's like to be a human doing the same.
Imagine holding a deck of cards. You sort the deck; then
you shuffle it and sort it again. Repeat the procedure,
ad infinitum. You are doing comparisons (which card
comes first?), data movement (slip one card in front of
another), and so on. To know what it's like to be a com-
puter running a sophisticated AI application, sit down
and sort cards all afternoon. That's what it's like.
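(The card-sorting analogy maps exactly onto how a simple sorting algorithm runs. As an illustration not drawn from the essay itself, here is an insertion sort, which consists of nothing but the two primitive operations named above: comparisons and data movement.)

```python
def insertion_sort(cards):
    """Sort a list using only the essay's two primitives:
    comparisons ("which card comes first?") and data movement
    ("slip one card in front of another")."""
    cards = list(cards)  # work on a copy
    for i in range(1, len(cards)):
        card = cards[i]
        j = i - 1
        # Comparison: does this card come before its left neighbor?
        while j >= 0 and cards[j] > card:
            # Data movement: shift the neighbor one slot rightward.
            cards[j + 1] = cards[j]
            j -= 1
        cards[j + 1] = card  # slip the card into its place
    return cards

print(insertion_sort([7, 2, 9, 4]))  # → [2, 4, 7, 9]
```

Run by hand, each loop iteration is precisely one afternoon-with-cards step: look at a card, compare, slide it over.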
If you sort cards long enough and fast enough, will
a brand-new conscious mind (somehow) be created?
This is, in effect, what cognitivists believe. They say
that when a computer executes the right combination of
primitive instructions in the right way, a new conscious
mind will emerge. So when a person executes the right
combination of primitive instructions in the right way,
a new conscious mind should (also) emerge; there's no
operation a computer can do that a person can't.
Of course, humans are radically slower than computers.
Cognitivists reply that, sure, you know what
executing low-level instructions slowly is like; but only
when you do them very fast is it possible to create a
new conscious mind. Sometimes, a radical change in
execution speed does change the qualitative outcome.
(When you look at a movie frame by frame, no illusion
of motion results. View the frames in rapid succession,
and the outcome is different.) Yet it seems arbitrary to
the point of absurdity to insist that doing many primi-
tive operations very fast could produce consciousness.
Why should it? Why would it? How could it? What
makes such a prediction even remotely plausible?
But even if researchers could make a conscious mind
out of software, it wouldn't do them much good.
Suppose you could build a conscious software
mind. Some cognitivists believe that such a mind,
all by itself, is AI s goal. Indeed, this is the message
of the Turing test. A computer can pass Turing's test
without ever mingling with human beings.
But such a mind could communicate with human
beings only in a drastically superficial way.
It would be capable of feeling emotion in principle.
But we feel emotions with our whole bodies, not just
our minds; and it has no body. (Of course, we could
say, then build it a humanlike body! But that is a large
assignment and poses bioengineering problems far
beyond and outside AI. Or we could build our new
mind a body unlike a human one. But in that case we
couldn't expect its emotions to be like ours, or to
establish a common ground for communication.)
Consider the low-energy listlessness that accompanies
melancholy, the overflowing jump-for-joy
sensation that goes with elation, the pounding heart
associated with anxiety or fear, the relaxed calm when
we are happy, the obvious physical manifestations of
excitement---and other examples, from rage to panic
to pity to hunger, thirst, tiredness, and other condi-
tions that are equally emotions and bodily states. In all
these cases, your mind and body form an integrated
whole. No mind that lacked a body like yours could
experience these emotions the way you do.
No such mind could even grasp the word "itch."
In fact, even if we achieved the bioengineering mar-
vel of a synthetic human body, our problems wouldn t
be over. Unless this body experienced infancy, child-
hood, and adolescence, as humans do---unless it could
grow up, as a member of human society---how could it
understand what it means to "feel like a kid in a candy
shop" or to "wish I were 16 again"? How could it
grasp the human condition in its most basic sense?
A mind-in-a-box, with no body of any sort, could
triumphantly pass the Turing test---which is one index
of the test's superficiality. Communication with such
a contrivance would be more like a parody of
conversation than the real thing. (Even in random
Internet chatter, all parties know what it's like to itch, and
scratch, and eat, and be a child.) Imagine talking to
someone who happens to be as articulate as an adult
but has less experience than a six-week-old infant.
Such a "conscious mind" has no advantage, in itself,
over a mere unconscious intelligence.
But there s a solution to these problems. Suppose
we set aside the gigantic chore of building a synthetic
human body and make do with a mind-in-a-box or a
mind-in-an-anthropoid-robot, equipped with video
cameras and other sensors---a rough approximation of
a human body. Now we choose some person (say, Joe,
age 35) and simply copy all his memories and trans-
fer them into our software mind. Problem solved. (Of
course, we don't know how to do this; not only do we
need a complete transcription of Joe's memories, we
need to translate them from the neural form they take
in Joe's brain to the software form that our software
mind understands. These are hard, unsolved problems.
But no doubt we will solve them someday.)
Nonetheless: understand the enormous ethical bur-
den we have now assumed. Our software mind is con-
scious (by assumption) just as a human being is; it can
feel pleasure and pain, happiness and sadness, ecstasy
and misery. Once we've transferred Joe's memories
into this artificial yet conscious being, it can remember
what it was like to have a human body---to feel spring
rain, stroke someone's face, drink when it was thirsty,
rest when its muscles were tired, and so forth. (Bodies