Case File
d-24960 · House Oversight · Other

Call for Open Algorithms and Data Transparency in Government AI Systems


Date: November 11, 2025
Source: House Oversight
Reference: House Oversight #016944
Pages: 1
Persons: 0
Integrity: No Hash Available

Summary

The passage advocates for algorithmic transparency and data access in governmental AI decision‑making, but it provides no concrete names, transactions, dates, or specific allegations involving high‑profile individuals. It advocates "open algorithms," in which the inputs and outputs of AI systems are publicly visible, and suggests that a lack of data hampers accountability in both AI and traditional government functions. It also proposes a science‑based approach to next‑generation AI that builds domain knowledge, such as the laws of physics, into machine‑learning basis functions.

Tags

government-oversight, algorithmic-accountability, policy-proposal, ai-transparency, policy-recommendation, machine-learning, house-oversight


Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
and have the results analyzed by the various stakeholders—rather like elected legislatures were originally intended to do. If we have the data that go into and out of each decision, we can easily ask, Is this a fair algorithm? Is this AI doing things that we as humans believe are ethical? This human-in-the-loop approach is called “open algorithms”: you get to see what the AIs take as input and what they decide using that input. If you see those two things, you’ll know whether they’re doing the right thing or the wrong thing. It turns out that’s not hard to do. If you control the data, then you control the AI.

One thing people often fail to mention is that all the worries about AI are the same as the worries about today’s government. For most parts of the government—the justice system, et cetera—there’s no reliable data about what they’re doing and in what situation. How can you know whether the courts are fair or not if you don’t know the inputs and the outputs? The same problem arises with AI systems and is addressable in the same way. We need trusted data to hold current government to account in terms of what they take in and what they put out, and AI should be no different.

Next-Generation AI

Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force, so they need hundreds of millions of samples. They work because you can approximate anything with lots of little simple pieces. That’s a key insight of current AI research—that if you use reinforcement learning for credit-assignment feedback, you can get those little pieces to approximate whatever arbitrary function you want. But using the wrong functions to make decisions means the AI’s ability to make good decisions won’t generalize. If we give the AI new, different inputs, it may make completely unreasonable decisions. Or if the situation changes, then you need to retrain it. There are amusing techniques to find the “null space” in these AI systems.
These are inputs that the AI thinks are valid examples of what it was trained to recognize (e.g., faces, cats, etc.), but to a human they’re crazy examples. Current AI is doing descriptive statistics in a way that’s not science and would be almost impossible to make into science. To build robust systems, we need to know the science behind the data.

The systems I view as next-generation AIs result from this science-based approach: If you’re going to create an AI to deal with something physical, then you should build the laws of physics into it as your descriptive functions, in place of those stupid little neurons. For instance, we know that physics uses functions like polynomials, sine waves, and exponentials, so those should be your basis functions and not little linear neurons. By using those more appropriate basis functions, you need a lot less data, you can deal with a lot more noise, and you get much better results.

As in the physics example, if we want to build an AI to work with human behavior, then we need to build the statistical properties of human networks into machine-learning algorithms. When you replace the stupid neurons with ones that capture the basics of human behavior, then you can identify trends with very little data, and you can deal with huge levels of noise. The fact that humans have a “commonsense” understanding that they bring to most problems suggests what I call the human strategy: Human society is a network just like the neural nets trained for deep learning, but the “neurons” in human society are a lot …
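The “open algorithms” idea above, publishing what an AI takes as input and what it decides using that input, can be sketched as a thin audit-logging wrapper around any decision function. This is a minimal illustration, not anything from the document itself; the `toy_model` and its income threshold are entirely hypothetical:

```python
import io
import json
import datetime

def decide_and_log(model, applicant, log_file):
    """Record the exact input and output of each automated decision,
    so outside stakeholders can audit the system for fairness."""
    decision = model(applicant)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input": applicant,    # what the AI took as input
        "output": decision,    # what it decided using that input
    }
    log_file.write(json.dumps(record) + "\n")
    return decision

# Hypothetical toy model: approve if stated income exceeds a threshold.
def toy_model(applicant):
    return "approve" if applicant["income"] > 40_000 else "deny"

log = io.StringIO()
decide_and_log(toy_model, {"income": 55_000}, log)
audit_trail = [json.loads(line) for line in log.getvalue().splitlines()]
```

Each logged line is a complete input/output pair, which is exactly the material an outside auditor would need in order to ask whether the decisions are fair.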
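The “null space” remark can be shown at toy scale: train a tiny classifier, then push a noise input along the gradient of its score until the model is highly confident, even though the input resembles nothing it was trained on. The four-feature “recognizer” below is an invented stand-in for a real image classifier, not a technique described in the document:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recognizer": logistic regression trained to score how face-like
# a 4-feature input is (a stand-in for a real image classifier).
X = np.vstack([rng.normal(2.0, 0.5, (50, 4)),    # "faces"
               rng.normal(-2.0, 0.5, (50, 4))])  # "non-faces"
y = np.array([1] * 50 + [0] * 50)

w = np.zeros(4)
for _ in range(500):                     # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def confidence(x):
    return 1 / (1 + np.exp(-x @ w))

# Start from pure noise and climb the model's score along its gradient
# direction (proportional to w). The result ends up far outside the
# training distribution, yet the model scores it as a near-certain "face".
x = rng.normal(0, 1, 4)
for _ in range(200):
    x += 0.5 * w
```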
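The basis-function argument admits a small worked example: fitting a noisy physical signal with a basis matched to the physics (constant, sine, cosine) recovers it from very few samples. This sketch uses ordinary least squares and is illustrative only, not drawn from the document:

```python
import numpy as np

rng = np.random.default_rng(1)

# Physical signal: a sine wave, observed at only 12 noisy points.
t = np.sort(rng.uniform(0, 2 * np.pi, 12))
y = np.sin(t) + rng.normal(0, 0.3, t.size)

def physics_basis(t):
    # Basis functions matched to the physics: constant, sin, cos.
    return np.column_stack([np.ones_like(t), np.sin(t), np.cos(t)])

coef, *_ = np.linalg.lstsq(physics_basis(t), y, rcond=None)

# Evaluate the fit on a dense grid and compare it to the true signal.
grid = np.linspace(0, 2 * np.pi, 200)
fit = physics_basis(grid) @ coef
rmse = np.sqrt(np.mean((fit - np.sin(grid)) ** 2))
```

With only three well-chosen basis functions, 12 noisy samples suffice; a generic approximator built from many small linear pieces would need far more data to reach comparable accuracy on the same signal.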
