Case File
d-27681 · House Oversight · Other

Philosophical discussion on AI ethics, trolley problems, and bio‑technology without concrete allegations

The passage is a speculative essay on ethics, AI decision-making, and biotechnology. It contains no specific names, dates, transactions, or actionable leads linking powerful actors to misconduct. It mentions AI trolley-problem scenarios involving a "saintly POTUS" but provides no evidence or context, and references evolutionary biologist Richard Dawkins and general debates on bio-ethics (e.g., IVF).

Date
November 11, 2025
Source
House Oversight
Reference
House Oversight #016387
Pages
1
Persons
1
Integrity
No Hash Available


Tags

trolley-problem, ai-ethics, machine-learning, genetic-engineering, house-oversight, bioethics



Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
division has been critiqued by evolutionary biologist Richard Dawkins, myself, and others. We can discuss “should” if framed as “we should do X in order to achieve Y.” Which Y should be a high priority is not necessarily settled by democratic vote but might be settled by Darwinian vote. Value systems and religions wax and wane, diversify, diverge, and merge just as living species do: subject to selection. The ultimate “value” (the “should”) is survival of genes and memes. Few religions say that there is no connection between our physical being and the spiritual world. Miracles are documented. Conflicts between Church doctrine and Galileo and Darwin are eventually resolved. Faith and ethics are widespread in our species and can be studied using scientific methods, including but not limited to fMRI, psychoactive drugs, and questionnaires.

Very practically, we have to address the ethical rules that should be built in, learned, or probabilistically chosen for increasingly intelligent and diverse machines. We have a whole series of trolley problems. At what number of people in line for death should the computer decide to shift a moving trolley to one person? Ultimately this might be a deep-learning problem—one in which huge databases of facts and contingencies can be taken into account, some seemingly far from the ethics at hand. For example, the computer might infer that the person who would escape death if the trolley is left alone is a convicted terrorist recidivist loaded up with doomsday pathogens, or a saintly POTUS—or part of a much more elaborate chain of events in detailed alternative realities. If one of these problem descriptions seems paradoxical or illogical, it may be that the authors of the trolley problem have adjusted the weights on each side of the balance such that hesitant indecision is inevitable. Alternatively, one can use misdirection to rig the system, such that the error modes are not at the level of attention.
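The weighted-balance framing above can be made concrete with a toy expected-utility sketch. This is purely illustrative and not from the source text: every name, probability, and utility value below is a hypothetical assumption, and the tie-handling `margin` parameter is an invented way to model the essay's point that near-balanced weights produce hesitant indecision.

```python
def expected_harm(people):
    """Expected harm on one track: probability-weighted value of each life lost.

    Each person is a (probability, value) pair, where probability reflects
    the machine's inferred identity for that person (a hypothetical model).
    """
    return sum(p_identity * value for p_identity, value in people)

def choose_track(stay, divert, margin=0.0):
    """Divert only if doing so clearly reduces expected harm.

    Differences within `margin` default to inaction, modeling the
    "hesitant indecision" the essay describes for finely balanced weights.
    """
    if expected_harm(stay) - expected_harm(divert) > margin:
        return "divert"
    return "stay"

# Five ordinary people on the main track (certain identity, equal value)...
main_track = [(1.0, 1.0)] * 5

# ...versus one person on the side track whom the system infers, with small
# probability, to be a head of state of unusually high weighted value.
# The numbers are arbitrary illustrations.
side_track = [(0.9, 1.0), (0.1, 60.0)]

print(choose_track(main_track, side_track, margin=0.5))  # -> stay
```

With a plain bystander on the side track instead, `choose_track(main_track, [(1.0, 1.0)])` returns "divert"; the inferred-identity weighting alone flips the decision, which is the essay's point about rigging the balance.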
For example, in the Trolley Problem, the real ethical decision was made years earlier, when pedestrians were given access to the rails—or even before that, when we voted to spend more on entertainment than on public safety. Questions that at first seem alien and troubling, like “Who owns the new minds, and who pays for their mistakes?” are similar to well-established laws about who owns and pays for the sins of a corporation.

The Slippery Slopes

We can (over)simplify ethics by claiming that certain scenarios won’t happen. The technical challenges or the bright red lines that cannot be crossed are reassuring, but the reality is that once the benefits seem to outweigh the risks (even briefly and barely), the red lines shift. Just before Louise Brown’s birth in 1978, many people were worried that she “would turn out to be a little monster, in some way, shape or form, deformed, something wrong with her.”[45] Few would hold this view of in-vitro fertilization today.

What technologies are lubricating the slope toward multiplex sentience? It is not merely deep machine-learning algorithms with Big Iron. We have engineered rodents to be significantly better at a variety of cognitive tasks as well as to exhibit other relevant traits, such as persistence and low anxiety. Will this be applicable to animals that are already at the door of humanlike intelligence? Several show self-recognition in a mirror test—chimpanzees, bonobos, orangutans, some dolphins and whales, and magpies.

45. “Then, Doctors ‘All Anxious’ About Test-tube Baby,” http://edition.cnn.com/2003/HEALTH/parenting/07/25/cnna.copperman/

Technical Artifacts (1)


Email addresses, URLs, phone numbers, and other technical indicators extracted from this document.

URL: http://edition.cnn.com/2003/HEALTH/parenting/07/25/cnna.copperman
