Case File
d-19628 · House Oversight · Other

Philosophical essay on AGI ethics and societal impact


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016892
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The text contains no concrete allegations, names, transactions, or actionable leads involving powerful actors. It is a speculative discussion of AGI morality without any reference to specific individuals. It frames AGI development as akin to raising a child with moral considerations, argues that AGI risks are comparable to human societal risks, and denies that hardware speed alone makes AGI uniquely dangerous.

Tags

ai-ethics, agi, technology-policy, house-oversight, philosophy


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
The moral component, the cultural component, the element of free will—all make the task of creating an AGI fundamentally different from any other programming task. It’s much more akin to raising a child. Unlike all present-day computer programs, an AGI has no specifiable functionality—no fixed, testable criterion for what shall be a successful output for a given input. Having its decisions dominated by a stream of externally imposed rewards and punishments would be poison to such a program, as it is to creative thought in humans. Setting out to create a chess-playing AI is a wonderful thing; setting out to create an AGI that cannot help playing chess would be as immoral as raising a child to lack the mental capacity to choose his own path in life. Such a person, like any slave or brainwashing victim, would be morally entitled to rebel. And sooner or later, some of them would, just as human slaves do.

AGIs could be very dangerous—exactly as humans are. But people—human or AGI—who are members of an open society do not have an inherent tendency to violence. The feared robot apocalypse will be avoided by ensuring that all people have full “human” rights, as well as the same cultural membership as humans. Humans living in an open society—the only stable kind of society—choose their own rewards, internal as well as external. Their decisions are not, in the normal course of events, determined by a fear of punishment.

Current worries about rogue AGIs mirror those that have always existed about rebellious youths—namely, that they might grow up deviating from the culture’s moral values. But today the source of all existential dangers from the growth of knowledge is not rebellious youths but weapons in the hands of the enemies of civilization, whether these weapons are mentally warped (or enslaved) AGIs, mentally warped teenagers, or any other weapon of mass destruction.
Fortunately for civilization, the more a person’s creativity is forced into a monomaniacal channel, the more it is impaired in regard to overcoming unforeseen difficulties, just as happened for thousands of centuries. The worry that AGIs are uniquely dangerous because they could run on ever better hardware is a fallacy, since human thought will be accelerated by the same technology. We have been using tech-assisted thought since the invention of writing and tallying. Much the same holds for the worry that AGIs might get so good, qualitatively, at thinking, that humans would be to them as insects are to humans. All thinking is a form of computation, and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other. Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.

Those are the simple dos and don’ts of coping with AGIs. But how do we create an AGI in the first place? Could we cause them to evolve from a population of ape-type AIs in a virtual environment? If such an experiment succeeded, it would be the most immoral in history, for we don’t know how to achieve that outcome without creating vast suffering along the way. Nor do we know how to prevent the evolution of a static culture.

Elementary introductions to computers explain them as TOM, the Totally Obedient Moron—an inspired acronym that captures the essence of all computer programs to date: They have no idea what they are doing or why. So it won’t help to give AIs more and more predetermined functionalities in the hope that these will eventually constitute Generality—the elusive G in AGI. We are aiming for the opposite, a DATA: a Disobedient Autonomous Thinking Application.
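The essay’s claim about a “universal set of elementary operations” can be made concrete. The following minimal Python sketch (not from the document; the function names are illustrative) shows the classic result that the single NAND operation is universal: NOT, AND, OR, and XOR, and hence any Boolean function, can be composed from NAND alone.

```python
# Illustrative sketch: NAND as a universal elementary operation.
# Every other Boolean operation below is built only from nand().

def nand(a, b):
    """NAND of two bits (0 or 1)."""
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    # XOR composed entirely from four NAND gates.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Verify against Python's native bitwise operators on all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == (1 - a)
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```

The same idea underlies the stronger claim in the text: a machine with a universal repertoire can, given enough time and memory, emulate the step-by-step computations of any other machine.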
