Case File
d-15568 · House Oversight · Other

Essay on AGI risks and steering without specific actionable leads


Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016869
Pages
1
Persons
0
Integrity
No Hash Available

Summary

The passage is a generic commentary on artificial general intelligence, lacking concrete names, transactions, dates, or specific allegations involving powerful actors. It offers no verifiable leads. It discusses psychological and institutional biases against acknowledging AI risks, references historical analogies (nuclear weapons) to illustrate underestimation of emerging threats, and calls for adoption of the Asilomar AI principles.

Tags

risk-assessment, agi, technology-policy, ai-safety, house-oversight, autonomous-weapons


Extracted Text (OCR)

Text extracted via OCR from the original document. May contain errors from the scanning process.
[…] more efficiently by machines. The successful creation of AGI would be the biggest event in human history, so why is there so little serious discussion of what it might lead to? Here again, the answer involves multiple reasons. First, as Upton Sinclair famously quipped, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”[7] For example, spokesmen for tech companies or university research groups often claim there are no risks attached to their activities even if they privately think otherwise. Sinclair’s observation may help explain not only reactions to risks from smoking and climate change but also why some treat technology as a new religion whose central articles of faith are that more technology is always better and whose heretics are clueless scaremongering Luddites.

Second, humans have a long track record of wishful thinking, flawed extrapolation of the past, and underestimation of emerging technologies. Darwinian evolution endowed us with powerful fear of concrete threats, not of abstract threats from future technologies that are hard to visualize or even imagine. Consider trying to warn people in 1930 of a future nuclear arms race, when you couldn’t show them a single nuclear explosion video and nobody even knew how to build such weapons. Even top scientists can underestimate uncertainty, making forecasts that are either too optimistic—where are those fusion reactors and flying cars?—or too pessimistic. Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933—less than twenty-four hours before Leo Szilard conceived of the nuclear chain reaction—that nuclear energy was “moonshine.” Essentially nobody at that time saw the nuclear arms race coming.

Third, psychologists have discovered that we tend to avoid thinking of disturbing threats when we believe there’s nothing we can do about them anyway. In this case, however, there are many constructive things we can do, if we can get ourselves to start thinking about the issue.

What can we do? I’m advocating a strategy change from “Let’s rush to build technology that makes us obsolete—what could possibly go wrong?” to “Let’s envision an inspiring future and steer toward it.” To motivate the effort required for steering, this strategy begins by envisioning an enticing destination. Although Hollywood’s futures tend to be dystopian, the fact is that AGI can help life flourish as never before. Everything I love about civilization is the product of intelligence, so if we can amplify our own intelligence with AGI, we have the potential to solve today’s and tomorrow’s thorniest problems, including disease, climate change, and poverty. The more detailed we can make our shared positive visions for the future, the more motivated we will be to work together to realize them.

What should we do in terms of steering? The twenty-three Asilomar principles adopted in 2017 offer plenty of guidance, including these short-term goals: (1) an arms race in lethal autonomous weapons should be avoided; (2) the economic prosperity created by AI should be shared broadly, to benefit all of humanity.

[7] Upton Sinclair, I, Candidate for Governor: And How I Got Licked (Berkeley, CA: University of California Press, 1994), p. 109.
