Case File
d-23265 · House Oversight · Other

Debate Over AI Risks and Mischaracterizations of Luddite Accusations


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016835
Pages
1
Persons
2
Integrity
No Hash Available

Summary

The passage is a commentary on AI risk discourse, quoting experts and awards, but contains no concrete allegations, financial flows, or links to powerful actors that would merit investigative follow‑up. It references the AI100 report's dismissal of an imminent AI threat, quotes Nick Bostrom on AI timelines and risks, and mentions Elon Musk, Stephen Hawking, and a satirical "Luddite of the Year" award.

Tags

public-discourse, technology-policy, ai-safety, house-oversight, expert-commentary


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
risk easily managed and far in the future, but also it's extremely unlikely that we'd even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever-more-capable AI systems. A more apt analogy would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we'd arrived.

• Human-level AI isn't really imminent, in any case. The AI100 report, for example, assures us, "Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind." This argument simply misstates the reasons for concern, which are not predicated on imminence. In his 2014 book, Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, for one, writes, "It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur."

• You're just a Luddite. It's an odd definition of Luddite that includes Turing, Wiener, Minsky, Musk, and Gates, who rank among the most prominent contributors to technological progress in the 20th and 21st centuries.⁴ Furthermore, the epithet represents a complete misunderstanding of the nature of the concerns raised and the purpose for raising them. It is as if one were to accuse nuclear engineers of Luddism if they pointed out the need for control of the fission reaction. Some objectors also use the term "anti-AI," which is rather like calling nuclear engineers "anti-physics." The purpose of understanding and preventing the risks of AI is to ensure that we can realize the benefits. Bostrom, for example, writes that success in controlling AI will result in "a civilizational trajectory that leads to a compassionate and jubilant use of humanity's cosmic endowment"—hardly a pessimistic prediction.

• Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives. (Often, the argument adds the premise that people of greater intelligence tend to have more altruistic objectives, a view that may be related to the self-conception of those making the argument.) This argument is related to Hume's is–ought problem and G. E. Moore's naturalistic fallacy, suggesting that somehow the machine, as a result of its intelligence, will simply perceive what is right, given its experience of the world. This is implausible; for example, one cannot perceive, in the design of a chessboard and chess pieces, the goal of checkmate; the same chessboard and pieces can be used for suicide chess, or indeed many other games still to be invented. Put another way: Where Bostrom imagines humans driven extinct by a putative robot that turns the planet into a sea of paper clips, we humans see this outcome as tragic,

4. Elon Musk, Stephen Hawking, and others (including, apparently, the author) received the 2015 Luddite of the Year Award from the Information Technology and Innovation Foundation: https://itif.org/publications/2016/01/19/artificial-intelligence-alarmists-win-itif%E2%80%99s-annual-luddite-award.
5. Rodney Brooks, for example, asserts that it's impossible for a program to be "smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans." http://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

Technical Artifacts (2)


Email addresses, URLs, phone numbers, and other technical indicators extracted from this document.

Domain: itif.org
URL: http://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

