Technology Review, November/December 2006: Q&A
In 1982, when he was still a student at MIT, Danny Hillis cofounded Thinking Machines, one of the most famous failures in the history of computing. A hive of wayward and brilliant researchers, Thinking Machines tried to build the world's first artificial intelligence. But if the company did not succeed in "building a machine that will be proud of us" (its corporate motto), its Connection Machine demonstrated the practicality of parallel processing, the foundation of modern supercomputing. Today, Danny Hillis is cochair of Applied Minds, a design and invention company, and he is building the Clock of the Long Now, a mechanical timepiece meant to last 10,000 years.
TR: Why is creating an artificial intelligence so difficult?

Hillis: We look to our own minds and watch our patterns of conscious thought, reasoning, planning, and making analogies, and we think, "That's thinking." Actually, it's just the tip of a very deep iceberg. When early AI researchers began, they assumed that hard problems were things like playing chess and passing calculus exams. That stuff turned out to be easy. But the types of thinking that seemed effortless, like recognizing a face or noticing what is important in a story, turned out to be very, very hard.
Why did Thinking Machines fail to create a thinking machine?

Well, the glib answer is that we just didn't have enough time. But enough time would have been decades, maybe lifetimes. It is a hard problem, probably many hard problems, and we don't really know how to solve them. We still have no real scientific answer to "What is a mind?"
The Connection Machine was an effective platform for supercomputing. Why didn't Thinking Machines prosper as a supercomputing company?

Supercomputing turned out to be a technology, not a business. My friend Nathan Myhrvold, who was running Microsoft Research at the time, once told me, "It is at least as hard to make software for a supercomputer as it is for a personal computer. You have a thousand customers, and we have billions. Not only that, but each of those customers actually expects you to give them exactly what they need."
What were the successful commercial applications of the research at Thinking Machines?

The commercial applications were mostly chip design, data mining, text search, cryptology, computational chemistry, computer graphics, financial optimization, seismic processing, and fluid flow modeling. Scientific applications like astronomy, climate modeling, or quantum chromodynamics were exciting when they helped get a result on the cover of Nature, but we never made money on them.
What happened to the patents from Thinking Machines? More than anyone else, you are responsible for massively parallel processing. You get credit, but no payment. Who gets it, and why?

Well, first of all, I should be clear that I am just one of many people who contributed to developing massively parallel computing. As for the patents, one of the consequences of Thinking Machines' failure is that I lost any rights to the technologies. In retrospect, that turned out to be a blessing, because it saved me from spending the next decade of my life in court.
How is your philosophy of artificial intelligence different from Marvin Minsky's famous "society of mind"?

Marvin is my mentor, so any philosophy of AI that I have starts with his. I was living in his basement while he was writing the book Society of Mind, and every day he would write a new page or two and let me read it. Then we would get to talk about it, and I would get to hear all the thought that he had put behind it. I still can't imagine what it would be like to read that book, cover to cover, without a long conversation on each page. But that is the point of the book: as Marvin would put it, "The brain is a kludge." There are a lot of different things going on, and they interact in complicated ways. Marvin is surely wrong on most of the details, but I think the big picture of lots of different, loosely coupled semiautonomous processes is basically right.
You were ahead of your time in applying computation to immunology, genetics, and neurobiology. Today, computation is ubiquitous in biology. What will this mean?

I am excited that computational biology is coming into its own. It feels like the field of computing did in 1970. Everything seems possible, and the only constraint is our imagination. There are still so many basic, simple questions that are unanswered: "How are memories encoded?" "How does the immune system have a sense of self?"
I am especially interested in what will come of computational models of evolution, although I have to admit that the field seems a bit stuck right now. Most current models of evolution reduce it to a very weak kind of search algorithm, but I have always felt that there is something more to it than that. It is not that the biologists are wrong about the mechanisms, but rather that the models are much simpler than the biology. It may be that the interaction of evolution and development is the key, or behavior and environment, or something like that.
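The "weak kind of search algorithm" Hillis describes is what a textbook genetic algorithm amounts to. The sketch below is an illustrative example, not anything from the interview: all names, the target string, and the parameter values are invented for demonstration.

```python
import random

# Minimal genetic algorithm: evolution reduced to a search toward a
# fixed target bit string. This is exactly the simplification Hillis
# criticizes -- fitness is static and externally imposed.

TARGET = [1] * 20        # the "environment" rewards matching this string
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    """Number of bits that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: POP_SIZE // 2]   # truncation selection
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best), "of", len(TARGET), "bits correct")
```

In this framing, "evolution" is just hill-climbing on a fixed fitness function. Hillis's point is that real biology is richer: development and environment feed back into what counts as fit, so the landscape itself changes as the population evolves, something this kind of model does not capture.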