MIT Technology Review, July/August 2014
regions, all the sensory regions, all the motor control regions—we predicted they would be there. But the social brain was not predicted at all. It just emerged. That was wild.

In the last 10 years [we’ve been] trying to refine our interpretation about what information is in those brain regions, how they interact with one another, how they develop, whether or not those brain regions do have anything to do with autism.
CH: And do these regions indeed not function well in people with autism?
RS: That was the original hypothesis that we went after. Maybe [people with autism] are trying to solve social problems with the machinery we would use for other problems, instead of having the dedicated machinery. There is no evidence that that is right. Too bad, because I like that idea. Autism has turned out to be a much, much harder problem at every level of analysis than I think anyone expected. Ten years ago people thought that cognitively, neurally, genetically, autism would be crackable. Now it looks like maybe there are thousands of genetic variations of autism.
CH: How might your work help lead to more socially capable computers?
RS: To me, the signature of human social cognition is the same thing that makes good old-fashioned AI hard, which is its generativity. We can recognize and think about and reason through a literally infinite set of situations and goals and human minds. And yet we have a very particular and finite machinery to do that. So what are the right ingredients? If we know what those are, then we can try to understand how the combinations of those ingredients generate this massively productive, infinitely generalizable human capacity.
CH: What do you mean by [generativity]?
RS: Let’s say you hear about a friend of yours. She was told she was being called to her boss’s office, and she thought she was finally getting the promotion she’d been waiting for. But it turned out she actually got fired. Let’s say the next day you see her coming down the street and she has a huge smile on her face. Probably not what you had expected, right?

You take that and you build a whole interior world. Maybe it’s a fake smile and she’s putting on a brave face. Maybe she’s relieved because now she can move to the other side of the continent and live with her boyfriend. You need to figure out: What were her goals? What did she want? What changed her mind? There are all kinds of features of that story that you were able to extract in the moment.
If a computer could extract [such] features, we could [improve its ability to do] sentiment analysis. There’s a huge focus in AI right now on trying to take the natural language people use and figure out: Did they like or not like that thing? Did they like that restaurant or not like that restaurant? Now take it up to the level of distinguishing between language when you feel disappointed, lonely, or terrified. That’s the kind of problem that we want to solve.
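The jump Saxe describes—from binary like/dislike to finer-grained emotional states—can be sketched with a toy classifier. This is an illustrative sketch only, not any system discussed in the interview; the emotion categories come from her examples, but the lexicon, cue words, and `classify_emotion` function are invented for illustration.

```python
# Toy sketch: move past like/dislike to finer-grained emotions by scoring
# text against small hand-built cue-word sets. The lexicon is invented.
EMOTION_LEXICON = {
    "disappointed": {"hoped", "expected", "instead", "letdown", "unfortunately"},
    "lonely": {"alone", "nobody", "miss", "isolated", "empty"},
    "terrified": {"afraid", "scared", "dread", "panic", "terrifying"},
}

def classify_emotion(text):
    """Count cue-word overlaps per emotion; return the best-scoring one,
    or 'neutral' if no cue word appears at all."""
    words = set(text.lower().split())
    scores = {emotion: len(words & cues)
              for emotion, cues in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify_emotion("i hoped for the promotion but got fired instead"))
# -> disappointed
```

A real system would of course need far more than keyword overlap—as the rest of the interview argues, it would need representations of goals and plans—but the sketch shows what "taking it up a level" from like/dislike to named emotional states means as a classification target.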
CH: How can computers learn to do that?
RS: You need to translate those words into more abstract things—goals, desires, plans. My colleague Josh Tenenbaum and I have been working for years just to build a kind of mathematical representation of what it means to think of somebody as having a plan or a goal, such that this model can predict human judgments about the person’s goal in a really simple context. What do you need to know about a goal? We’re trying to build models that describe that knowledge.
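One standard way to formalize "predicting judgments about a person’s goal in a really simple context" is Bayesian inverse planning: assume the agent mostly takes actions that move it toward its goal, then invert that assumption with Bayes’ rule to infer the goal from an observed action. The sketch below is a minimal version of that general idea, not Saxe and Tenenbaum’s actual model; the grid positions, candidate goals, and the rationality parameter `beta` are invented for illustration.

```python
# Minimal Bayesian inverse planning sketch: infer a hidden goal from one
# observed action, assuming a noisily rational agent on a grid.
import math

def likelihood(action, position, goal, beta=2.0):
    """Softmax planning model: steps that shrink Manhattan distance to the
    goal are exponentially more probable than steps that grow it."""
    def dist(p, g):
        return abs(p[0] - g[0]) + abs(p[1] - g[1])
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    def score(a):
        new = (position[0] + a[0], position[1] + a[1])
        return math.exp(-beta * dist(new, goal))
    return score(action) / sum(score(a) for a in actions)

def posterior(action, position, goals):
    """P(goal | action) by Bayes' rule, with a uniform prior over goals."""
    unnorm = {g: likelihood(action, position, g) for g in goals}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

goals = [(5, 0), (-5, 0)]                      # candidate goals: east, west
p = posterior(action=(1, 0), position=(0, 0), goals=goals)
# a single step east already makes the eastern goal far more probable
```

The design choice worth noting is the direction of inference: the planning model goes from goal to action, and the observer runs it backward from action to goal—which is why this family of models is a candidate formalization of reading the smiling friend in the earlier story.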
CH: That’s very different from having a computer look at millions of examples to find patterns.
RS: Exactly. This is not big data; it’s trying to describe the structure of the knowledge. That’s always been viewed as an opposition: the people who want bigger data sets and the people who want the right knowledge structures. My impression right now is that there’s a lot more intermediate ground. What used to be viewed as opposite traditions in AI should now be viewed as complementary, where you try to figure out probabilistic representations that learn from data.
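The simplest concrete instance of a "probabilistic representation that learns from data" is a hand-specified model structure whose parameters are updated by observation—for example, a Beta-Bernoulli model, where the structure (a binary outcome with an unknown rate) is given in advance and the data sharpen it. This example is mine, not from the interview; the prior and observations are invented for illustration.

```python
# Structure plus data: the model form (Beta prior over a Bernoulli rate) is
# specified by hand; its parameters are then learned from observations via
# the standard conjugate update.
def update_beta(alpha, beta, observations):
    """Each 1/0 observation increments the matching Beta parameter."""
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Expected rate under the current Beta posterior."""
    return alpha / (alpha + beta)

a, b = update_beta(1, 1, [1, 1, 0, 1, 1, 1])   # Beta(1, 1) prior, six outcomes
print(round(posterior_mean(a, b), 3))           # -> 0.75
```

The point of the toy is the division of labor Saxe describes: the knowledge structure is not mined from data, but its numbers are—the "intermediate ground" between the two traditions.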
CH: But the prospect of replicating social cognition in a computer seems far off, right? We don’t yet understand how the brain does it.
RS: It feels pretty plausible that the full understanding is not in the grasp of me in my lifetime, and that’s good, because it means I have a lot of work to do. So in the meantime, I do whatever seems likely to produce a little bit of instrumental progress toward that