technology review May/June 2011

It's Not a Game
Putting a computer on Jeopardy! warps the public understanding
of what artificial intelligence is and how science is done.
By JARON LANIER
Watching the computer system known
as Watson defeat the top two human
Jeopardy! players of all time was fun in the
short term. This demonstration of IBM's
software, however, was a bad idea in the
longer term. It presented a misleading picture
to the public of what is known about machine
and human intelligence, and more seriously,
it advanced a flawed approach to science
that stands to benefit the enemies of science.
There's a crucial distinction
to make right away. My purpose
is not to criticize the work done
by the team that created Watson.
Nor do I want to critique their professional
publications or their interactions with colleagues
in the field of computer science.
Instead, I am concerned with the nature
of the pop spectacle hatched by IBM.
Why was there a public spectacle at all?
Certainly it's worthwhile to share the joy
and excitement of science with the public,
as NASA often does. But there were no other
Mars rovers to compare with the NASA rovers
when they landed, and there is a whole world
of research related to artificial intelligence.
By putting its system on TV and
personifying that system with a name and
a computer-generated voice, IBM separated
it from its context, suggesting---falsely---the
existence of a sui generis entity.
Contrast IBM's theatrics with the introduction
of Wolfram Alpha, a "knowledge
engine" for the Web that physicist Stephen
Wolfram released in 2009 (see "Search Me,"
July/August 2009). Although the early rhetoric
around Alpha was a touch extreme,
sometimes exaggerating its natural-language
competence, the method of introduction
was vastly more honest. Wolfram
Research didn't resort to stage
magic: Alpha was made available
online for people to try. Stephen
Wolfram encouraged people to
use his technology and compare the results
with those generated by search engines like
Google. Alpha proved honestly that it was
something fresh, different, and useful. Comparison
with what came before is crucial to
progress in science and technology.
But Watson was presented on TV as an
entity instead of a technology, and people
are inclined to treat entities charitably. You
are more likely to give a "he" the benefit of
the doubt, while you judge an "it" for what it
can do as a tool. Watson avoided any such
comparative judgment, and the public wasn't
given a window into what would happen
in that kind of empirical process. Stephen
Wolfram himself, however, went to the trouble
of writing a blog post comparing Watson
with everyday search engines. He entered
the text of Jeopardy! clues into those search
engines and found that in many cases, the
first document they returned contained the
answer. Identifying a page that contains
the answer is not the same thing as being
able to give the answer on Jeopardy!, but
this little experiment does indicate that
Watson's abilities were less extraordinary
than one might have gathered from watching
the broadcast.
Wouldn't it have been better to open the
legitimate process of science to the public
instead of staging a fake version? An example
of how to do this was the DARPA-sponsored
"Grand Challenge" to create self-driving cars.
By pitting technologies against each other,
DARPA informed the public well and offered
a glimpse into the state of the art. The contest
also made for great TV. Competitors were
motivated. The process worked.
The Jeopardy! show in itself, by contrast,
was not informative. There are a multitude
of open questions about how human language
works and how brains think. But
when machines are pitted against people,
an unstated assertion is inevitably propagated:
that human thinking and machine
"intelligence" are already known to be at
least comparable. Of course, this is not true.
In the case of Jeopardy!, the game's design
isolates a specific skill: guessing words on
the basis of hints. We know that being able
to guess an unstated word from its context is
part of language competency, but we don't
know how important that skill is in relation to
the whole phenomenon of human language.