Case File
d-22521 | House Oversight | Other

AI risk commentary with historical analogies, no actionable leads


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016834
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The passage consists of generic arguments about superintelligent AI risks and historical anecdotes. It contains no specific names, transactions, dates tied to wrongdoing, or actionable investigative leads. It discusses common objections to AI risk arguments, references historical figures (Rutherford, Szilard) as analogies, and cites AI researchers and reports without making new allegations.

Tags

risk-assessment, ai-safety, house-oversight, public-discourse



Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
imperfectly specified objectives conflicting with our own—whose motivation to preserve their existence in order to achieve those objectives may be insuperable.

1001 Reasons to Pay No Attention

Objections have been raised to these arguments, primarily by researchers within the AI community. The objections reflect a natural defensive reaction, coupled perhaps with a lack of imagination about what a superintelligent machine could do. None hold water on closer examination. Here are some of the more common ones:

• Don't worry, we can just switch it off. This is often the first thing that pops into a layperson's head when considering risks from superintelligent AI—as if a superintelligent entity would never think of that. This is rather like saying that the risk of losing to Deep Blue or AlphaGo is negligible—all one has to do is make the right moves.

• Human-level or superhuman AI is impossible. This is an unusual claim for AI researchers to make, given that, from Turing onward, they have been fending off such claims from philosophers and mathematicians. The claim, which is backed by no evidence, appears to concede that if superintelligent AI were possible, it would be a significant risk. It's as if a bus driver, with all of humanity as passengers, said, "Yes, I am driving toward a cliff—in fact, I'm pressing the pedal to the metal! But trust me, we'll run out of gas before we get there!" The claim represents a foolhardy bet against human ingenuity. We have made such bets before and lost. On September 11, 1933, renowned physicist Ernest Rutherford stated, with utter confidence, "Anyone who expects a source of power from the transformation of these atoms is talking moonshine." On September 12, 1933, Leo Szilard invented the neutron-induced nuclear chain reaction. A few years later he demonstrated such a reaction in his laboratory at Columbia University. As he recalled in a memoir: "We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

• It's too soon to worry about it. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how much time is needed to devise and implement a solution that avoids the risk. For example, if we were to detect a large asteroid predicted to collide with the Earth in 2067, would we say, "It's too soon to worry"? And if we consider the global catastrophic risks from climate change predicted to occur later in this century, is it too soon to take action to prevent them? On the contrary, it may be too late. The relevant timescale for human-level AI is less predictable, but, like nuclear fission, it might arrive considerably sooner than expected. One variation on this argument is Andrew Ng's statement that it's "like worrying about overpopulation on Mars." This appeals to a convenient analogy: Not only is the

2. AI researcher Jeff Hawkins, for example, writes, "Some intelligent machines will be virtual, meaning they will exist and act solely within computer networks. ... It is always possible to turn off a computer network, even if painful." https://www.recode.net/2015/3/2/11559576/
3. The AI100 report (Peter Stone et al.), sponsored by Stanford University, includes the following: "Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible." https://ai100.stanford.edu/2016-report

Technical Artifacts (2)


Email addresses, URLs, phone numbers, and other technical indicators extracted from this document.

Domain: www.recode.net
URL: https://ai100.stanford.edu/2016-report


This document was digitized, indexed, and cross-referenced with 1,400+ persons in the Epstein files.
