Case File
d-15691 · House Oversight · Other

Speculative discussion on human cognition and AI learning mechanisms


Date: November 11, 2025
Source: House Oversight
Reference: House Oversight #026410
Pages: 1
Persons: 0
Integrity: No Hash Available

Summary

The text contains no concrete allegations, names, transactions, or links to powerful individuals or institutions. It is a general, speculative commentary on neuroscience and machine learning. It compares gorilla and human developmental timelines, hypothesizes that language emerges from extended training periods, layered cognition, and internal rewards, and mentions the Google image-tagging controversy and current AI research.

Tags

neuroscience, cognitive-science, ai, machine-learning, house-oversight



Extracted Text (OCR)

EFTA Disclosure
Text extracted via OCR from the original document. May contain errors from the scanning process.
Gorillas can crawl after 2 months and build their own nests after 2.5 years. They leave their mothers at 3-4 years. Human children are pretty much useless during the first 10-12 years, but during each phase, their brains have the opportunity to encounter many times as much training data as a gorilla brain. Humans are literally smarter on every level, and because the abilities of the higher levels depend on those of the lower levels, they can perform abstractions that mature gorillas will never learn, no matter how much we try to train them.

The second set of mechanisms is in the motivational system. Motivation tells the brain what to pay attention to, by giving reward and punishment. If a brain does not get much reward for solving puzzles, the individual will find mathematics very boring and won't learn much of it. If a brain gets lots of rewards for discovering other people's intentions, it will learn a lot of social cognition.

Language might be the result of three things that are different in humans:
- extended training periods per layer (after the respective layer is done, it is difficult to learn a new set of phonemes or a first language)
- more layers
- different internal rewards.

Perhaps the reward for learning grammatical structure is the same one that makes us like music. Our brains may enjoy learning compositional regular structure, and they enjoy making themselves understood, and everything else is something the universal cortical learning mechanism figures out on its own. This is a hypothesis that is shared by a growing number of people these days. In humans, it is reflected for instance by the fact that races with faster motor development have lower IQ. (In individuals of the same group, slower development often indicates defects, of course.)

Another support comes from machine learning: we find that the same learning functions can learn visual and auditory pattern recognition, and even end-to-end learning. Google has built automatic image recognition into their current photo app: http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/

The state of the art in research can do better than that: it can begin to "imagine" things. That is, when the experimenter asks the system to "dream" what a certain object looks like, the system can produce a somewhat compelling image, which indicates that it is indeed learning visual structure; a minimal sketch of the idea follows below. This is something nobody could do a few months ago: http://www.creativeai.net/posts/Mv4WG6rdzAerZF7ch/synthesizing-preferred-inputs-via-deep-generator-networks

A machine learning program that can learn how to play an Atari game without any human supervision or hand-crafted engineering (the feat that got DeepMind its 500M from Google) now takes just about 130 lines of Python code; a stripped-down version of such a loop is also sketched below. These models do not have interesting motivational systems, and they have a relatively simple architecture. They currently seem to mimic some of the stuff that goes on in the first few layers of the cortex. They learn object features, visual styles, lighting and rotation in 3D, and simple action policies. Almost everything else is missing. But there is a lot of enthusiasm that the field might be on the right track, and that we can learn motor simulations and intuitive physics soon. (The majority of the people in AI do not work on this, however; they try to improve performance on the current benchmarks.)
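First sketch: the linked post describes synthesizing preferred inputs through a deep generator network. The following is a simpler pixel-space activation-maximization sketch of the same underlying idea, assuming PyTorch and torchvision are available; the model choice, class index, step count, and step size are illustrative assumptions, not taken from the document or the linked paper.

# A minimal "dreaming" sketch: gradient ascent on an input image to
# maximize one class score of a pretrained classifier. The linked work
# optimizes in a generator's latent space instead; this is the plain
# pixel-space variant for illustration.
import torch
import torchvision.models as models

model = models.squeezenet1_1(weights="DEFAULT").eval()  # any pretrained classifier works
for p in model.parameters():
    p.requires_grad_(False)                              # freeze the network

target_class = 207                                       # hypothetical ImageNet class index
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise

optimizer = torch.optim.Adam([img], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, target_class]                  # logit of the target class
    loss = -score + 1e-4 * img.norm()                    # maximize score, lightly regularize
    loss.backward()
    optimizer.step()

# `img` now holds an input the network "imagines" for the chosen class,
# evidence that it has learned some visual structure for that category.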
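Second sketch: the "about 130 lines of Python" claim refers to compact deep-Q-learning implementations. Below is a hedged, stripped-down version of such a training loop, assuming the gymnasium package; CartPole stands in for an Atari game, and there is no target network or frame preprocessing, so this is a simplification rather than the program the text refers to.

# A minimal Q-learning loop: the only "supervision" is the environment's
# reward signal. Hyperparameters are illustrative.
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                  # experience replay buffer
gamma, epsilon = 0.99, 0.1

for episode in range(200):
    obs, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax().item()
        next_obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        replay.append((obs, action, reward, next_obs, done))
        obs = next_obs

        if len(replay) >= 64:
            batch = random.sample(replay, 64)
            o, a, r, o2, d = (torch.as_tensor(np.array(x), dtype=torch.float32)
                              for x in zip(*batch))
            q = q_net(o).gather(1, a.long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():              # bootstrap target (no target net here)
                target = r + gamma * q_net(o2).max(1).values * (1 - d)
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()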
Noam's criticism of machine translation mostly applies to the Latent Semantic Analysis models that Google and others have been using for many years. These models map linguistic symbols to concepts, and relate concepts to each other, but they do not relate the concepts to "proper" mental representations of what
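For reference, a toy sketch of the Latent Semantic Analysis approach the extract describes: words are mapped into a low-dimensional "concept" space by factorizing a term-document matrix, so related texts land near each other without any grounding in perception. This assumes scikit-learn; the corpus and component count are illustrative.

# Minimal LSA: TF-IDF term-document matrix, then truncated SVD into a
# small concept space. Nothing here connects concepts to the world,
# which is the limitation the text points out.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "stocks fell on the trading floor",
    "investors sold shares as markets fell",
]

X = TfidfVectorizer().fit_transform(docs)   # term-document weights
svd = TruncatedSVD(n_components=2)          # project into a 2-D concept space
doc_vecs = svd.fit_transform(X)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(doc_vecs[0], doc_vecs[1]))     # animal docs: relatively similar
print(cosine(doc_vecs[0], doc_vecs[2]))     # animal vs. finance: less similar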

Technical Artifacts (3)


Email addresses, URLs, phone numbers, and other technical indicators extracted from this document.

URL: http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/
URL: http://www.creativeai.net/posts/Mv4WG6rdzAerZF7ch/synthesizing-preferred-inputs-via-deep-generator-networks
Wire Ref: reflected


