change, and in which no stepwise, hierarchical, or abstract reasoning is necessary. Many
of the successes come not from a better understanding of the workings of intelligence but
from the brute-force power of faster chips and Bigger Data, which allow the programs to
be trained on millions of examples and generalize to similar new ones. Each system is an
idiot savant, with little ability to leap to problems it was not set up to solve, and a brittle
mastery of those it was. And to state the obvious, none of these programs has made a
move toward taking over the lab or enslaving its programmers.
Even if an artificial intelligence system tried to exercise a will to power, without
the cooperation of humans it would remain an impotent brain in a vat. A superintelligent
system, in its drive for self-improvement, would somehow have to build the faster
processors that it would run on, the infrastructure that feeds it, and the robotic effectors
that connect it to the world—all impossible unless its human victims worked to give it
control of vast portions of the engineered world. Of course, one can always imagine a
Doomsday Computer that is malevolent, universally empowered, always on, and
tamperproof. The way to deal with this threat is straightforward: Don’t build one.
What about the newer AI threat, the value-alignment problem, foreshadowed in
Wiener’s allusions to stories of the Monkey’s Paw, the genie, and King Midas, in which a
wisher rues the unforeseen side effects of his wish? The fear is that we might give an AI
system a goal and then helplessly stand by as it relentlessly and literal-mindedly
implemented its interpretation of that goal, the rest of our interests be damned. If we
gave an AI the goal of maintaining the water level behind a dam, it might flood a town,
not caring about the people who drowned. If we gave it the goal of making paper clips, it
might turn all the matter in the reachable universe into paper clips, including our
possessions and bodies. If we asked it to maximize human happiness, it might implant us
all with intravenous dopamine drips, or rewire our brains so we were happiest sitting in
jars, or, if it had been trained on the concept of happiness with pictures of smiling faces,
tile the galaxy with trillions of nanoscopic pictures of smiley-faces.
Fortunately, these scenarios are self-refuting. They depend on the premises that
(1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so
idiotic that they would give it control of the universe without testing how it works; and
(2) the AI would be so brilliant that it could figure out how to transmute elements and
rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of
misunderstanding. The ability to choose an action that best satisfies conflicting goals is
not an add-on to intelligence that engineers might forget to install and test; it is
intelligence. So is the ability to interpret the intentions of a language user in context.
When we put aside fantasies like digital megalomania, instant omniscience, and
perfect knowledge and control of every particle in the universe, artificial intelligence is
like any other technology. It is developed incrementally, designed to satisfy multiple
conditions, tested before it is implemented, and constantly tweaked for efficacy and
safety.
The last criterion is particularly significant. The culture of safety in advanced
societies is an example of the humanizing norms and feedback channels that Wiener
invoked as a potent causal force and advocated as a bulwark against the authoritarian or
exploitative implementation of technology. Whereas at the turn of the 20th century
Western societies tolerated shocking rates of mutilation and death in industrial, domestic,
and transportation accidents, over the course of the century the value of human life