Case File

Speculative Framework for Biasing AGI Toward Friendliness and Global Brain Scenarios


Date
Unknown
Source
House Oversight
Reference
kaggle-ho-013155
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The passage discusses abstract concepts about AGI development and global brain dynamics without naming any specific individuals, organizations, financial transactions, or concrete actions. It offers no actionable leads for investigations, merely theoretical ideas, resulting in low investigative usefulness and novelty.

Key insights:
- Describes two phases of a "global brain" mediated by AGI systems.
- Speculates on future neurocomputer interfaces enabling direct thought exchange.
- Highlights ethical concerns about subtle control and loss of individual autonomy.

Tags

kaggle, house-oversight, ai-ethics, agi, global-brain, neurotechnology, speculative-governance


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
To make these ideas more concrete, we may speculatively reformulate the first two "global brain phases" mentioned above as follows:

• Phase 1 global brain proto-mindplex: AI/AGI systems enhancing online databases, guiding Google results, forwarding e-mails, suggesting mailing lists, etc. - generally using intelligence to mediate and guide human communications toward goals that are its own, but that are themselves guided by human goals, statements and actions

• Phase 2 global brain mindplex: AGI systems composing documents, editing human-written documents, sending and receiving e-mails, assembling mailing lists and posting to them, creating new databases and instructing humans in their use, etc.

In Phase 2, the conscious theater of the global-brain-mediating AGI system is composed of ideas built by numerous individual humans - or ideas emergent from ideas built by numerous individual humans - and it conceives ideas that guide the actions and thoughts of individual humans, in a way that is motivated by its own goals. It does not force the individual humans to do anything - but if a given human wishes to communicate and interact using the same databases, mailing lists and evolving vocabularies as other humans, they are going to have to use the products of the global-brain-mediating AGI, which means they are going to have to participate in its patterns and its activities.

Of course, the advent of advanced neurocomputer interfaces makes the picture potentially more complex. At some point, it will likely be possible for humans to project thoughts and images directly into computers without going through mouse or keyboard - and to "read in" thoughts and images similarly.
When this occurs, interaction between humans may in some contexts become more like interactions between computers, and the role of global-brain-mediating AI servers may become one of mediating direct thought-to-thought exchanges between people.

The ethical issues associated with global brain scenarios are in some ways even subtler than in the other scenarios we mentioned above. One has issues pertaining to the desirability of seeing the human race become something fundamentally different - something more social and networked, less individual and autonomous. One has the risk of AGI systems exerting a subtle but strong control over people, vaguely like the control that the human brain's executive system exerts over the neurons involved with other brain subsystems. On the other hand, one also has more human empowerment than in some of the other scenarios - because the systems that are changing and deciding things are not separate from humans, but are, rather, composite systems essentially involving humans. So, in the global brain scenarios, one has more "human" empowerment than in some other cases - but the "humans" involved aren't legacy humans like us, but heavily networked humans that are largely characterized by the emergent dynamics and structures implicit in their interconnected activity!

12.10 Conclusion: Eight Ways to Bias AGI Toward Friendliness

It would be nice if we had a simple, crisp, comforting conclusion to this chapter on AGI ethics, but that is not the case. There is a certain irreducible uncertainty involved in creating advanced artificial minds. There is also a large irreducible uncertainty involved in the future of the human race in the case that we don't create advanced artificial minds: in accordance with the ancient Chinese curse, we live in interesting times!
