Case File
d-32809 · House Oversight · Other

Philosophical essay on AI risk and evolution with no concrete allegations

The text is a speculative discussion of AI risk and evolution, lacking any specific names, transactions, dates, or actionable leads involving powerful actors. It offers no novel investigative angle: it discusses the AI control problem in abstract terms, references historical figures (Turing, Wiener, Good) without new claims, and mentions no individuals, institutions, or financial flows.

Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016875
Pages
1
Persons
0
Integrity
No Hash Available

Tags

evolution, house-oversight, ai-risk, philosophy


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us.”) Apparently, the original dissidents promulgating the AI-risk message were the AI pioneers themselves!

Evolution’s Fatal Mistake

There have been many arguments, some sophisticated and some less so, for why the Control Problem is real and not some science-fiction fantasy. Allow me to offer one that illustrates the magnitude of the problem: For the last hundred thousand years, the world (meaning the Earth, but the argument extends to the solar system and possibly even to the entire universe) has been in the human-brain regime. In this regime, the brains of Homo sapiens have been the most sophisticated future-shaping mechanisms (indeed, some have called them the most complicated objects in the universe). Initially, we didn’t use them for much beyond survival and tribal politics in a band of foragers, but now their effects are surpassing those of natural evolution. The planet has gone from producing forests to producing cities.

As predicted by Turing, once we have superhuman AI (“the machine thinking method”), the human-brain regime will end. Look around you—you’re witnessing the final decades of a hundred-thousand-year regime. This thought alone should give people some pause before they dismiss AI as just another tool. One of the world’s leading AI researchers recently confessed to me that he would be greatly relieved to learn that human-level AI was impossible for us to create.

Of course, it might still take us a long time to develop human-level AI. But we have reason to suspect that this is not the case. After all, it didn’t take long, in relative terms, for evolution—the blind and clumsy optimization process—to create human-level intelligence once it had animals to work with. Or multicellular life, for that matter: Getting cells to stick together seems to have been much harder for evolution to accomplish than creating humans once there were multicellular organisms. Not to mention that our level of intelligence was limited by such grotesque factors as the width of the birth canal. Imagine an AI developer being stopped in his tracks because he couldn’t manage to adjust the font size on his computer!

There’s an interesting symmetry here: In fashioning humans, evolution created a system that is, at least in many important dimensions, a more powerful planner and optimizer than evolution itself is. We are the first species to understand that we’re the product of evolution. Moreover, we’ve created many artifacts (radios, firearms, spaceships) that evolution would have little hope of creating. Our future, therefore, will be determined by our own decisions and no longer by biological evolution. In that sense, evolution has fallen victim to its own Control Problem. We can only hope that we’re smarter than evolution in that sense. We are smarter, of course, but will that be enough? We’re about to find out.

The Present Situation

So here we are, more than half a century after the original warnings by Turing, Wiener, and Good, and a decade after people like me started paying attention to the AI-risk message. I’m glad to see that we’ve made a lot of progress in confronting this issue, but we’re definitely not there yet. AI risk, although no longer a taboo topic, is not yet fully
