Can an emerging field called ‘neural systems understanding’ explain the brain?
George Musser

The Transmitter, Journal Year: 2024, Volume and Issue: unknown

Published: Jan. 1, 2024

Language: English

Contrastive learning explains the emergence and function of visual category-selective regions
Jacob S. Prince, George A. Alvarez, Talia Konkle et al.

Science Advances, Journal Year: 2024, Volume and Issue: 10(39)

Published: Sept. 25, 2024

Modular and distributed coding theories of category selectivity along the human ventral visual stream have long existed in tension. Here, we present a reconciling framework—contrastive coding—based on a series of analyses relating category selectivity within biological and artificial neural networks. We discover that, in models trained with contrastive self-supervised objectives over a rich natural image diet, category-selective tuning naturally emerges for faces, bodies, scenes, and words. Further, lesions of these model units lead to selective, dissociable recognition deficits, highlighting their distinct functional roles in information processing. Finally, these pre-identified units can predict responses in all of the corresponding face-, scene-, body-, and word-selective regions of cortex, under a highly constrained sparse positive encoding procedure. The success of this single model indicates that brain-like functional specialization can emerge without category-specific learning pressures, as the system learns to untangle rich image content. Contrastive coding, therefore, provides a unifying account of object category emergence and representation in the brain.

Language: English

Citations

10

Behavioral signatures of face perception emerge in deep neural networks optimized for face recognition
Katharina Dobs, Joanne Yuan, Julio Martinez et al.

Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(32)

Published: July 31, 2023

Human face recognition is highly accurate and exhibits a number of distinctive and well-documented behavioral "signatures," such as the use of a characteristic representational space, a disproportionate performance cost when stimuli are presented upside down, and a drop in accuracy for faces of races the participant is less familiar with. These and other phenomena have long been taken as evidence that face recognition is "special". But why does human face perception exhibit these properties in the first place? Here, we use deep convolutional neural networks (CNNs) to test the hypothesis that all of these signatures result from optimization for the task of face recognition. Indeed, as predicted by this hypothesis, we found these signatures in CNNs trained on face recognition, but not in CNNs trained on object recognition, even when the latter were additionally trained to detect faces while matching the amount of face experience. To test whether this principle is specific to faces, we optimized a CNN for car discrimination and tested it on upright and inverted car images. As in face perception, the car-trained network showed a performance cost for inverted vs. upright cars. Similarly, CNNs trained on inverted faces produced an inversion effect. These findings show that the behavioral signatures of face perception reflect, and are well explained by, the nature of the computations underlying face recognition, and that faces may not be so special after all.

Language: English

Citations

14

Animal models of the human brain: Successes, limitations, and alternatives
Nancy Kanwisher

Current Opinion in Neurobiology, Journal Year: 2025, Volume and Issue: 90, P. 102969 - 102969

Published: Feb. 1, 2025

Language: English

Citations

0

Unraveling other-race face perception with GAN-based image reconstruction
Moaz Shoura, Dirk Walther, Adrian Nestor et al.

Behavior Research Methods, Journal Year: 2025, Volume and Issue: 57(4)

Published: March 14, 2025

Language: English

Citations

0

Digital Twin Studies for Reverse Engineering the Origins of Visual Intelligence
Justin N. Wood, Lalit M. Pandey, Samantha M. W. Wood et al.

Annual Review of Vision Science, Journal Year: 2024, Volume and Issue: 10(1), P. 145 - 170

Published: Sept. 15, 2024

What are the core learning algorithms in brains? Nativists propose that intelligence emerges from innate, domain-specific knowledge systems, whereas empiricists propose that it emerges from domain-general systems that learn from experience. We address this debate by reviewing digital twin studies designed to reverse engineer the learning algorithms in newborn brains. In digital twin studies, newborn animals and artificial agents are raised in the same environments and tested with the same tasks, permitting direct comparison of their learning abilities. Supporting empiricism, these studies show that machines develop animal-like object perception when trained on the first-person visual experiences of newborn animals. Supporting nativism, machines learn faster when initialized with prenatal experiences (retinal waves). We argue that learning across humans, animals, and machines can be explained by a universal principle, which we call space-time fitting. Space-time fitting explains both empiricist and nativist phenomena, providing a unified framework for understanding the origins of intelligence.

Language: English

Citations

1

A contrastive coding account of category selectivity in the ventral visual stream
Jacob S. Prince, George A. Alvarez, Talia Konkle et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Aug. 7, 2023

Modular and distributed coding theories of category selectivity along the human ventral visual stream have long existed in tension. Here, we present a reconciling framework – contrastive coding – based on a series of analyses relating category selectivity within biological and artificial neural networks. We discover that, in models trained with contrastive self-supervised objectives over a rich natural image diet, category-selective tuning naturally emerges for faces, bodies, scenes, and words. Further, lesions of these model units lead to selective, dissociable recognition deficits, highlighting their distinct functional roles in information processing. Finally, these pre-identified units can predict responses in all of the corresponding face-, scene-, body-, and word-selective regions of cortex, under a highly constrained sparse-positive encoding procedure. The success of this single model indicates that brain-like functional specialization can emerge without category-specific learning pressures, as the system learns to untangle rich image content. Contrastive coding, therefore, provides a unifying account of object category emergence and representation in the brain.

Language: English

Citations

3

The State of Modeling Face Processing in Humans with Deep Learning
David White, P. Jonathon Phillips

Published: May 10, 2024

Recent studies show that ‘deep learning’ facial recognition systems can potentially model face processing in humans. In this review, we summarise insights from this research and compare existing deep learning architectures to psychological models of the human face processing system. Psychological models consist of two components: (i) a core component that extracts ‘face codes’ from images for separate processing of identity and dynamic information; and (ii) an extended component that links these codes to broader person information, multimodal cues, and social context. Initial work has examined face codes produced by deep convolutional neural networks (DCNNs) that are engineered for identity recognition only. Given this singular engineering goal, the observation that DCNN face codes include a multitude of other attributes – for example, illumination, pose, and expression judgments – challenges psychological models. However, we highlight the limited extent to which DCNNs capture human face processing. Deep learning is not currently capable of modeling individual recognition of familiar faces, nor does it address how face codes are linked to broader person information. Closer alignment between the two approaches is necessary for understanding how face information is encoded and transmitted in the brain.

Language: English

Citations

0

Concurrent emergence of view invariance, sensitivity to critical features, and identity face classification through visual experience: Insights from deep learning algorithms
Mandy Rosemblaum, Nitzan Guy, Idan Grosbard et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: June 8, 2024

Visual experience is known to play a critical role in face recognition. This experience is believed to enable the formation of a view-invariant representation, by learning which features are critical for identification across views. Discovering these features, and the type of experience that is needed to uncover them, is challenging. We have recently revealed a subset of facial features that are critical for human face identification, and further showed that deep convolutional neural networks (DCNNs) trained on face classification, but not on object categorization, are sensitive to these critical features, highlighting the importance of experience with faces for the system to reveal these features. These findings enable us now to ask what type of experience is required for a network to become sensitive to these human-like critical features, and whether this sensitivity is associated with a view-invariant representation and classification performance. To that end, we systematically manipulated the number of within-identity and between-identity images in the training diet and examined its effect on classification performance and sensitivity to critical features. Results show that increasing the number of images per identity, as well as the number of identities, were both associated with the simultaneous development of view invariance, sensitivity to critical features, and successful identity classification. The concurrent emergence of view invariance and sensitivity to critical features through experience implies that they depend on similar training statistics. Overall, we show how systematic manipulation of the training diet of DCNNs can shed light on the generation of human-like face representations.

Language: English

Citations

0

The face inversion effect through the lens of deep neural networks
Ehsan Tousi, Marieke Mur

Proceedings of the Royal Society B: Biological Sciences, Journal Year: 2024, Volume and Issue: 291(2028)

Published: Aug. 1, 2024

Language: English

Citations

0

Deep convolutional neural networks are sensitive to face configuration
Virginia E. Strehle, Natalie Bendiksen, Alice J. O’Toole et al.

Journal of Vision, Journal Year: 2024, Volume and Issue: 24(12), P. 6 - 6

Published: Nov. 5, 2024

Deep convolutional neural networks (DCNNs) are remarkably accurate models of human face recognition. However, less is known about whether these models generate face representations similar to those used by humans. Sensitivity to facial configuration has long been considered a marker of perceptual expertise for faces. We tested whether DCNNs trained for face identification "perceive" alterations to facial features and their configuration. We also compared the extent to which representations changed as a function of alteration type. Facial configuration was altered by changing the distance between the eyes or between the nose and mouth. Facial features were altered by replacing the eyes or mouth with those of another face. Altered faces were processed by DCNNs (Ranjan et al., 2018; Szegedy et al., 2017) and the similarity of the representations they generated was compared. Both DCNNs were sensitive to configural and feature changes, with configural changes altering DCNN representations more than feature changes. To determine whether the DCNNs' greater sensitivity to configural changes was due to a priori differences in the images or to characteristics of DCNN processing, we compared low-level, pixel-based representations with DCNN-generated representations. Sensitivity to configural changes increased from the pixel-level image encoding to the DCNN, whereas sensitivity to feature changes did not change. The enhancement of configural information may be due to its utility for discriminating among faces, combined with the within-category nature of face identification training.

Language: English

Citations

0