In the spring of 1993, the research arm of NASA organized a conference on the
frontiers of knowledge and invited the most eclectic group of thinkers they could
find. Biologists, sociologists and computer designers gathered for the three-day
meeting in the unpromising setting of Westlake, Ohio. The mimeographed notes of
the conference became legendary and still circulate, a sort of Shroud of Turin for the
machine learning set. The introduction features a poem pecked out in IBM type
titled “Into The Era of Cyberspace,” written with all the pocket-protector fluidity one
might expect of a NASA engineer: “Our robots precede us/with infinite
diversity/exploring the universe/delighting in complexity.” (Turing’s rhyming
computer, you have to suspect, could have done better.)264 One of the first speakers
at the conference was a San Diego State University professor named Vernor Vinge,
whose remarks that day marked the start of an important era in our consideration
of smart machines. The Coming Technological Singularity: How to Survive the Post-
Human Era his talk was called. “Within thirty years,” Vinge began, “we will have the
technological ability to create superhuman intelligence. Shortly after, the human era
will be ended.”26
Vinge’s aim was not, or at least not merely, to tell a room full of NASA geeks who
had been dreaming of life on another planet that life on our own planet might soon
be replaced by whirring, calculating machines. Rather, he explained, he wanted to
plot what a world of not simply intelligent, but intuitive machines might look like.
Far from disappearing, Vinge thought AI would produce a sort of wisdom that would
be inscrutable to humans. And this wisdom, buffed to perfection by high-speed
judgment and endless data, would eventually and sensibly take over much of human
activity. Real “AI,” Vinge said, would at the very least be used to design a world of
quicker AI that would, in turn, yield to still-faster generations. “When greater-than-
human intelligence drives progress,” Vinge explained, “that progress will be much
more rapid. In fact, there seems no reason why progress itself would not involve the
creation of still more intelligent entities — on a still shorter time scale.”
Vinge reminded his audience of a moment once described by the British
mathematician I.J. Good, who'd cracked codes at Bletchley Park alongside Alan
Turing during World War Two: “Let an ultraintelligent machine be defined as a
machine that can far surpass all the intellectual activities of any man, no matter how
clever,” Good had written. “Since the design of machines is one of these intellectual
activities, an ultraintelligent machine could design even better machines; there
would then unquestionably be an ‘intelligence explosion,’ and the intelligence of
man would be left far behind. Thus the first ultraintelligent machine is the last
invention that man need ever make, provided that the machine is docile enough to
tell us how to keep it under control.” Vinge labeled this instant “The Singularity”: “It
is a point,” he wrote, “where our old models must be discarded.” The trivial version
of this would be an age of autonomous armed drones, self-driving cars and electrical
264 In the spring of 1993: See “Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace,” NASA Conference Publication 10129, Proceedings of a NASA Lewis Research Center conference, Westlake, Ohio, March 30–31, 1993, p. iii.
265 “Within thirty years”: See Vinge in “Vision-21,” above, p. 12.