processor switches between activities more quickly than you can
comprehend. The easiest way to use multiple cores has thus been
through a division of labor---for example, running the operating
system on one core and an application on another. That doesn't
require a whole new programming model, and it may work for
today's chips, which have two or four cores. But what about tomor-
row's, which may have 64 cores or more?
REVISITING OLD WORK
Fortunately, says Leslie Valiant, a professor of computer science
and applied mathematics at Harvard University, the fundamentals
of parallelism were worked out decades ago in the field of high-
performance computing---which is to say, with supercomputers.
"The challenge now," says Valiant, "is to find a way to make that
old work useful."
The supercomputers that inspired multicore computing were
second-generation devices of the 1980s, made by companies like
Thinking Machines and Kendall Square Research. Those comput-
ers used off-the-shelf processors by the hundreds or even thou-
sands, running them in parallel. Some were commissioned by the
U.S. Defense Advanced Research Projects Agency as a cheaper
alternative to Cray supercomputers. The lessons learned in pro-
gramming these computers are a guide to making multicore pro-
gramming work today. So Grand Theft Auto might soon benefit
from software research done two decades ago to aid the design
of hydrogen bombs.
In the 1980s, it became clear that the key problem of parallel
computing is this: it's hard to tear software apart, so that it can
be processed in parallel by hundreds of processors, and then put
it back together in the proper sequence without allowing the
intended result to be corrupted or lost. Computer scientists dis-
covered that while some problems could easily be parallelized,
others could not. Even when a problem could be parallelized, the
result might depend on the unpredictable order in which the parallel
operations finished, in what was called a "race condition." Imagine
two operations running in parallel, one
of which needs to finish before the other for the overall result to be
correct. How do you ensure that the right one wins the race? Now
imagine two thousand or two million such processes.
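
To make that ordering problem concrete, here is a minimal sketch in C
(my illustration, not code from the article or any system it describes):
two threads share a value, and only an explicit synchronization step, here
a simple join, guarantees that the writer finishes before the reader starts.

    /* Toy sketch of the ordering problem: the producer must finish
       before the consumer reads, or the result is wrong. The join is
       what guarantees that the "right" operation wins the race. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_value = 0;

    static void *producer(void *arg)
    {
        shared_value = 42;                   /* must happen first */
        return NULL;
    }

    static void *consumer(void *arg)
    {
        printf("read %d\n", shared_value);   /* correct only after producer */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;

        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);   /* without this join the two threads race,
                                    and the consumer may print 0 instead of 42 */
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Coordinating two threads this way is easy; the hard part, as the researchers
quoted below point out, is coordinating thousands or millions of such
dependencies by hand.
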
"What we learned from this earlier work in high-performance
computing is that there are problems that lend themselves to
parallelism, but that parallel applications are not easy to write,"
says Marc Snir, codirector of the Universal Parallel Computing
Research Center (UPCRC) at the University of Illinois at Urbana-
Champaign. Normally, programmers use specialized program-
ming languages and tools to write instructions for the computer
in terms that are easier for humans to understand than the 1s and
0s of binary code. But those languages were designed to represent
linear sequences of operations; it's hard to organize thousands of
parallel processes through a linear series of commands. To create
parallel programs from scratch, what's needed are languages that
allow programmers to write code without thinking about how to
make it parallel---to program as usual while the software figures
out how to distribute the instructions effectively across processors.
"There aren't good tools yet to hide the parallelism or to make it
obvious [how to achieve it]," Snir says.
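
One concrete illustration of the kind of tool Snir is describing (OpenMP
is my example; the article does not name it): a single compiler directive
lets the programmer keep writing an ordinary loop while the compiler and
runtime decide how to spread the iterations across however many cores exist.

    /* Sketch using OpenMP (chosen for illustration, not a tool named
       in the article). The loop reads like sequential code; the
       directive asks the runtime to split the iterations across cores
       and to merge the per-core partial sums without a race.
       Compile with: gcc -fopenmp sum.c */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        long i;

        #pragma omp parallel for reduction(+:sum)
        for (i = 1; i <= 10000000; i++)
            sum += 1.0 / (double)i;

        printf("harmonic sum = %f\n", sum);
        return 0;
    }
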
To help solve such problems, companies have called back to
service some graybeards of 1980s supercomputing. David Kuck,
for example, is a University of Illinois professor emeritus well
known as a developer of tools for parallel programming. Now
he works on multicore programming for Intel. So does an entire
team hired from the former Digital Equipment Corporation; in a
previous professional life, it developed Digital's implementation
of the message passing interface (MPI), the dominant software
standard for multimachine supercomputing today.
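
For readers who have not met MPI, a minimal sketch of its style (a generic
example of the standard interface, not Digital's implementation): each
process learns its own "rank," works on its own slice of the data, and the
partial results are combined with a single collective call.

    /* Generic MPI sketch: rank-based division of a toy workload.
       Build with mpicc and launch with mpirun -np <processes>. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        long local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many are there?  */

        /* each rank handles every size-th element of the workload */
        for (long i = rank; i < 1000000; i += size)
            local += i % 7;

        /* combine the partial sums onto rank 0 */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %ld\n", total);

        MPI_Finalize();
        return 0;
    }
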
In one sense, these old players have it easier than they did the
last time around. That's because many of today's multicore appli-
cations are very different from those imagined by the legendary
mainframe designer Gene Amdahl, who theorized that the gain
in speed achievable by using multiple processors was limited by
the degree to which a given program could be parallelized.
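
Amdahl's argument can be stated in one line (the standard textbook
formulation, not a quotation from the article): if only a fraction p of a
program can run in parallel, the speedup on n processors is bounded as
shown below.

    % Amdahl's Law: speedup S on n processors when a fraction p of the
    % work is parallelizable. Even as n grows without bound, S never
    % exceeds 1/(1-p); with p = 0.95, for instance, the speedup is
    % capped at 20x no matter how many cores are added.
    S(n) = \frac{1}{(1 - p) + p/n}, \qquad
    \lim_{n \to \infty} S(n) = \frac{1}{1 - p}
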
Computers are handling larger volumes of data than ever before,
but their processing tasks are so ideally suited to parallelization
that the constraints of Amdahl's Law---described in 1967---are
beginning to feel like no constraints at all. The simplest example
of a massively parallel task is the brute-force determination of an
unknown password by trying all possible character combinations.
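
A sketch of why that task splits so cleanly (a toy illustration, not a real
password cracker): number the candidates, and let worker w of N simply test
every N-th one. The workers never need to talk to one another.

    /* Toy keyspace partitioning: candidates over 4 lowercase letters
       are numbered 0 .. 26^4 - 1, and worker w of 'workers' tests
       every workers-th candidate. Each slice is independent, which
       is why the search parallelizes almost perfectly. */
    #include <stdio.h>
    #include <string.h>

    #define LEN      4
    #define ALPHABET 26

    /* turn a candidate number into a 4-letter string */
    static void nth_candidate(long n, char out[LEN + 1])
    {
        for (int i = LEN - 1; i >= 0; i--) {
            out[i] = 'a' + (int)(n % ALPHABET);
            n /= ALPHABET;
        }
        out[LEN] = '\0';
    }

    /* search the slice belonging to worker 'w' out of 'workers' */
    static void search_slice(int w, int workers, const char *target)
    {
        long total = 1;
        for (int i = 0; i < LEN; i++)
            total *= ALPHABET;

        char guess[LEN + 1];
        for (long n = w; n < total; n += workers) {
            nth_candidate(n, guess);
            if (strcmp(guess, target) == 0)
                printf("worker %d found \"%s\"\n", w, guess);
        }
    }

    int main(void)
    {
        /* here the four slices run one after another; on a multicore
           machine each slice would get its own core or process */
        for (int w = 0; w < 4; w++)
            search_slice(w, 4, "dead");
        return 0;
    }
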
Dividing the potential solutions among 1,000 processors can't help
but be 1,000 times faster. The same goes for today's processor-
intensive applications for encoding video and audio data. Com-
pressing movie frames in parallel is almost perfectly efficient. But
if parallel processing is easier to find uses for today, it's not neces-
sarily much easier to do. Making it easier will require a concerted
[Photo caption] BRIGHT LIGHTS: In 1987, Thinking Machines released its CM-2 supercomputer, in which 64,000 processors ran in parallel. The company declared bankruptcy in 1994, but its impact on computing was significant. Photo: Steve Grohe; © Thinking Machines Corporation, 1987.