Case File d-17891 | House Oversight | Other

Stuart Russell essay on AI risk and historical perspectives


Date: November 11, 2025
Source: House Oversight
Reference: House Oversight #016249
Pages: 1
Persons: 3
Integrity: No Hash Available

Summary

The text is an academic commentary on AI ethics with no specific allegations, names, transactions, or actionable investigative leads involving powerful actors. It quotes Norbert Wiener on machine control and its use by elites, references concerns about superintelligent AI raised by Elon Musk, Bill Gates, Stephen Hawking, and Nick Bostrom, and discusses the need to align machine purposes with human values.

Tags

ai-ethics, philosophy-of-technology, machine-alignment, house-oversight


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
THE PURPOSE PUT INTO THE MACHINE

Stuart Russell

Stuart Russell is a professor of computer science and Smith-Zadeh Professor in Engineering at UC Berkeley. He is the coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach.

Among the many issues raised in Norbert Wiener's The Human Use of Human Beings (1950) that are currently relevant, the most significant to the AI researcher is the possibility that humanity may cede control over its destiny to machines. Wiener considered the machines of the near future as far too limited to exert global control, imagining instead that machines and machine-like control systems would be wielded by human elites to reduce the great mass of humanity to the status of "cogs and levers and rods." Looking further ahead, he pointed to the difficulty of correctly specifying objectives for highly capable machines, noting

    a few of the simpler and more obvious truths of life, such as that when a djinnee is found in a bottle, it had better be left there; that the fisherman who craves a boon from heaven too many times on behalf of his wife will end up exactly where he started; that if you are given three wishes, you must be very careful what you wish for.

The dangers are clear enough:

    Woe to us if we let [the machine] decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us! On the other hand, the machine like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.

Ten years later, after seeing Arthur Samuel's checker-playing program learn to play checkers far better than its creator, Wiener published "Some Moral and Technical Consequences of Automation" in Science. In this paper, the message is even clearer:

    If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire.

In my view, this is the source of the existential risk from superintelligent AI cited in recent years by such observers as Elon Musk, Bill Gates, Stephen Hawking, and Nick Bostrom.

Putting Purposes Into Machines

The goal of AI research has been to understand the principles underlying intelligent behavior and to build those principles into machines that can then exhibit such behavior. In the 1960s and 1970s, the prevailing theoretical notion of intelligence was the capacity for logical reasoning, including the ability to derive plans of action guaranteed to achieve a specified goal. More recently, a consensus has emerged around the idea of a rational
