Technology Review, July/August 2010
From the Editor
I have been thinking about risk.
As I write this column in early June, British Petroleum is still
struggling to contain its leaking well in the Gulf of Mexico. After
the company’s drilling rig, Deepwater Horizon, exploded on
April 20, as many as 19,000 barrels of oil (or as much as 800,000
gallons) spewed into the Gulf every day. A cap is now capturing
a little more than 400,000 gallons a day. It is the worst environmental disaster in the history of the United States, but there may be no solution until BP completes two “relief wells” in August.
In this issue of Technology Review, David Talbot writes about
the increasing incidence of cyber crime and espionage, and
the real (if still speculative) risk of outright cyber warfare. In
“Moore’s Outlaws” (p. 36), he quotes Stewart Baker, the former general counsel of the National Security Agency and a former policy chief at the U.S. Department of Homeland Security: “What we’ve been seeing, over the last decade or so, is that
Moore’s Law is working more for the bad guys than the good
guys. It’s really ‘Moore’s outlaws’ who are winning this fight.
Code is more complex, and that means more opportunity to
exploit the code. There is more money to be made in exploiting
the code, and that means there are more and more sophisticated
people looking to exploit vulnerabilities. If you look at things
like malware found, or attacks, or the size of the haul people are
pulling in, there is an exponential increase.”
Talbot describes experts’ concerns that computer viruses have
made millions of machines into “enslaved armies”—botnets—
awaiting instruction by malefactors. In the days leading up to
April 1, 2009, a worm called Conficker was expected to receive
an update from its unknown creator, but no one knew what: “A
tweak to Conficker’s code might cause the three million or so
machines ... to start attacking the servers of some company or
government network, vomit out billions of pieces of spam, or just
improve the worm’s own ability to propagate.” It’s scary stuff.
In the first case, a complex system of technologies (whose purpose is to extract crude oil five miles under the ocean’s surface)
failed; in the second, a more complex system (a global computer
network whose purposes are incomprehensibly various, but
upon which our technological civilization depends) is failing.
These failures are not so much predictable as unsurprising. We expanded our use of vulnerable technologies because we were dependent upon them. How should we think about the risks
inherent in technologies, particularly new technologies?
One possible intellectual tool, popular with environmentalists and policy makers, is the Precautionary Principle, which
states that when something is suspected of being harmful,
the burden of proof that the thing is not harmful rests with its
proponents. This principle has a stronger and a weaker formulation. The stronger calls for regulation of any potentially
harmful activity, or refraining from action altogether, until
there is consensus that it is safe. The weaker does not demand
regulation or bans; it weighs costs against likelihoods. The former says, “Better safe than sorry”; the latter, “Be prepared.”
Although a handful of international agreements have
endorsed the strong formulation, and while it possesses a quasi-legal status in European Union law, it is in fact seldom applied.
(A notable exception is in the management of global fisheries.)
Certainly, governments, corporations, and individuals rou-
tinely ignore it when thinking about new technologies. That’s
because the idea is “paralyzing,” according to Cass Sunstein, the
administrator of the White House Office of Information and
Regulatory Affairs (and for many years a professor of law at the University of Chicago, where he wrote about behavioral economics and risk). No one knows how new technologies will be
used in the future. There is never any consensus about risks. Crises accompany the development of any new, complex system, but their exact form tends to take us by surprise.
But if the strong formulation of the Precautionary Principle
is paralyzing, the weak formulation is almost no help at all. It
provides little guidance for thinking about unlikely but potentially catastrophic risks. We need an entirely new principle that
will guide our investment in precautionary technology. When a
technology fails or is unsustainable, we should be rationally confident that a fix or alternative exists or will exist, because ingenious humans have devised other technologies that will mitigate the crisis or stand in the outmoded technology’s place. Government has a justified role in requiring such precautionary investment in new technologies.
In the absence of a new principle, we have mere optimism.
David Talbot’s feature, which accepts that we’re not likely to
build an entirely new, more secure Internet, describes research
into new technologies that may make our networks safer. For
now, perhaps that’s the best risk management we can expect.
Read about them, and write and tell me what you think at
firstname.lastname@example.org. —Jason Pontin