Case File
d-37626 | House Oversight | Other

Speculative Discussion on AGI Societies and Global Brain Coordination


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #013149
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The text is an academic-style analysis of AI governance concepts without any mention of specific individuals, institutions, financial transactions, or actionable allegations. It offers no concrete leads. Discusses collective human value definition versus machine optimization. Mentions Stephen Omohundro's ideas on AGI populations mitigating risk. Explores theoretical benefits and risks of AGI societies.

Tags

agi-governance, global-brain, ai-safety, house-oversight, theoretical-frameworks



Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
...we wish that extrapolated, interpreted as we wish that interpreted. While a moving humanistic vision, this seems to us rather difficult to implement in a computer algorithm in a compellingly "right" way. It seems that there would be many different ways of implementing it, and the choice between them would involve multiple, highly subtle and non-rigorous human judgment calls.†

However, if a deep collective process of interactive scenario analysis and sharing is carried out, in order to arrive at some sort of Coherent Blended Volition, this process may well involve many of the same kinds of extrapolation that are conceived to be part of Coherent Extrapolated Volition. The core difference between the two approaches is that in the CEV vision, the extrapolation and coherentization are to be done by a highly intelligent, highly specialized software program, whereas in the approach suggested here, these are to be carried out by the collective activity of humans as mediated by Global Brain technologies.

Our perspective is that the definition of collective human values is probably better carried out via a process of human collaboration, rather than delegated to a machine optimization process; and also that the creation of deep-sharing-oriented Internet technologies, while a difficult task, is significantly easier and more likely to be done in the near future than the creation of narrow AI technology capable of effectively performing CEV-style extrapolations.

12.8 Possible Benefits of Creating Societies of AGIs

One potentially interesting quality of the emerging Global Brain is the possible presence within it of multiple interacting AGI systems. Stephen Omohundro [Omo09] has argued that this is an important aspect, and that game-theoretic dynamics related to populations of roughly equally powerful agents may play a valuable role in mitigating the risks associated with advanced AGI systems.

Roughly speaking, if one has a society of AGIs rather than a single AGI, and all the members of the society share roughly similar ethics, then if one AGI starts to go "off the rails", its compatriots will be in a position to correct its behavior. One may argue that this is actually a hypothesis about which AGI designs are safest, because a "community of AGIs" may be considered a single AGI with an internally community-like design. But the matter is a little subtler than that, if one considers AGI systems embedded in the Global Brain and human society. Then there is some substance to the notion of a population of AGIs systematically presenting themselves to humans and non-AGI software processes as separate entities.

Of course, a society of AGIs is no protection against a single member undergoing a "hard takeoff" and drastically accelerating its intelligence simultaneously with shifting its ethical principles. In this sort of scenario, one could have a single AGI rapidly become much more powerful and very differently oriented than the others, who would be left impotent to act so as to preserve their values. But this merely defers the issue to the point to be considered below, regarding "takeoff speed."

The operation of an AGI society may depend somewhat sensitively on the architectures of the AGI systems in question. Things will work better if the AGIs have a relatively easy way to inspect and comprehend much of the contents of each others' minds. This introduces a bias toward AGIs that rely more heavily on explicit forms of knowledge representation.

† The reader is encouraged to look at the original CEV essay online (http://singinst.org/upload/CEV.html) and make their own assessment.
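The peer-correction dynamic described in the excerpt can be illustrated with a toy numerical sketch (not from the original document; the agent model, parameters, and function names here are invented for illustration). A population of agents shares a common "ethics" value; one agent drifts unilaterally while its peers pull it back toward the group consensus. When the drift rate is small relative to the correction rate, the outlier stays near the shared value; when drift outpaces correction, a crude stand-in for "hard takeoff", the outlier ends up far outside the group:

```python
# Toy sketch of peer correction in a society of agents.
# Each agent's "ethics" is reduced to a single number; this is a
# hypothetical illustration, not a model proposed by the source.

def simulate(drift_rate, correction_rate, steps=100):
    peers = [0.0] * 9   # nine well-behaved agents share the value 0.0
    outlier = 0.0       # one agent begins to drift away
    for _ in range(steps):
        outlier += drift_rate                       # unilateral value drift
        ranked = sorted(peers + [outlier])
        consensus = ranked[len(ranked) // 2 - 1]    # median of the society
        outlier -= correction_rate * (outlier - consensus)  # peer pull
    return outlier

# Slow drift: peers hold the outlier near the shared value.
print(simulate(drift_rate=0.01, correction_rate=0.5))
# Fast drift ("hard takeoff"): the outlier settles far outside the group.
print(simulate(drift_rate=1.0, correction_rate=0.5))
```

In this sketch the outlier's equilibrium offset grows with its drift rate, which loosely mirrors the excerpt's caveat: peer correction bounds slow deviation but cannot restrain a member whose change outruns the society's response.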

Technical Artifacts (1)


Email addresses, URLs, phone numbers, and other technical indicators extracted from this document.

Domains
singinst.org
