High-performing neural network models of visual cortex benefit from high latent dimensionality
Eric Elmoznino, Michael Bonner

PLoS Computational Biology, Journal Year: 2024, Volume and Issue: 20(1), P. e1011792 - e1011792

Published: Jan. 10, 2024

Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower-dimensional geometries. Surprisingly, we found a strong trend in the opposite direction: neural networks with high-dimensional geometries tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher-dimensional representations are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high dimensionality confers computational benefits in models of visual cortex.
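
The key quantity here, latent dimensionality, can be estimated from a model's activations to a stimulus set. As a hedged illustration (not necessarily the paper's exact estimator), the sketch below computes the participation ratio of the covariance eigenspectrum, a common measure of effective dimensionality; the function name and toy data are assumptions for illustration only.

```python
import numpy as np

def effective_dimensionality(features: np.ndarray) -> float:
    """Participation ratio of the eigenspectrum of the feature covariance.

    features: (n_stimuli, n_units) matrix of model activations to natural images.
    Returns a value between 1 and n_units; higher values mean variance is spread
    across more latent dimensions.
    """
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (centered.shape[0] - 1)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)  # guard against tiny negative values
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())

# Toy usage: 1,000 "images" passed through a hypothetical 512-unit layer
rng = np.random.default_rng(0)
activations = rng.standard_normal((1000, 512))
print(f"effective dimensionality ~ {effective_dimensionality(activations):.1f}")
```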

Language: English

The neuroconnectionist research programme
Adrien Doerig, Rowan P. Sommers, Katja Seeliger

et al.

Nature reviews. Neuroscience, Journal Year: 2023, Volume and Issue: 24(7), P. 431 - 450

Published: May 30, 2023

Language: English

Citations

133

Using artificial neural networks to ask ‘why’ questions of minds and brains
Nancy Kanwisher, Meenakshi Khosla, Katharina Dobs

et al.

Trends in Neurosciences, Journal Year: 2023, Volume and Issue: 46(3), P. 240 - 254

Published: Jan. 17, 2023

Neuroscientists have long characterized the properties and functions of the nervous system and are increasingly succeeding in answering how brains perform the tasks they do. But the question of 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these questions by asking when ANNs optimized for a given task mirror the behavioral characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.

Language: English

Citations

112

An ecologically motivated image dataset for deep learning yields better models of human vision
Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones

et al.

Proceedings of the National Academy of Sciences, Journal Year: 2021, Volume and Issue: 118(8)

Published: Feb. 15, 2021

Significance: Inspired by core principles of information processing in the brain, deep neural networks (DNNs) have demonstrated remarkable success in computer vision applications. At the same time, networks trained on the task of object classification exhibit similarities to representations found in the primate visual system. This result is surprising because the datasets commonly used for training are designed to be engineering challenges. Here, we use linguistic corpus statistics and human concreteness ratings as guiding principles to design a resource that more closely mirrors categories that are relevant to humans. The result is ecoset, a collection of 1.5 million images from 565 basic-level categories. We show that ecoset-trained DNNs yield better models of higher-level visual cortex and human behavior.
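
To make the dataset-design criteria concrete, here is a minimal, hypothetical sketch of filtering candidate basic-level nouns by corpus frequency and human concreteness ratings; the thresholds and example values are invented for illustration and are not ecoset's actual selection rules.

```python
# Hypothetical candidate nouns: (noun, corpus frequency per million words,
# mean human concreteness rating on a 1-5 scale). Values are illustrative only.
candidates = [
    ("dog", 120.0, 4.9),
    ("justice", 45.0, 1.5),
    ("cup", 80.0, 5.0),
    ("idea", 200.0, 1.6),
]

MIN_FREQUENCY = 50.0     # keep nouns that are common in everyday language
MIN_CONCRETENESS = 4.0   # keep nouns that refer to concrete, picturable things

selected = [noun for noun, freq, concreteness in candidates
            if freq >= MIN_FREQUENCY and concreteness >= MIN_CONCRETENESS]
print(selected)  # ['dog', 'cup']
```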

Language: English

Citations

111

Deep problems with neural network models of human vision
Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović

et al.

Behavioral and Brain Sciences, Journal Year: 2022, Volume and Issue: 46

Published: Dec. 1, 2022

Abstract: Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model at classifying images taken from various datasets, (2) DNNs do the best job of predicting the pattern of human errors on behavioral datasets, and (3) DNNs do the best job of predicting brain signals in response to images from various brain datasets (e.g., single-cell responses or fMRI data). However, these benchmarks do not test hypotheses regarding what features are contributing to good predictions, and we show that the good predictions may be mediated by features that share little overlap with biological vision. More problematically, DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses, rather than compete at making the best predictions. We conclude by briefly summarizing promising modeling approaches that focus on psychological data.

Language: English

Citations

107

Brain-like functional specialization emerges spontaneously in deep neural networks
Katharina Dobs, Julio Martinez, Alexander J.E. Kell

et al.

Science Advances, Journal Year: 2022, Volume and Issue: 8(11)

Published: March 16, 2022

The human brain contains multiple regions with distinct, often highly specialized functions, from recognizing faces to understanding language to thinking about what others are thinking. However, it remains unclear why the cortex exhibits this high degree of functional specialization in the first place. Here, we consider the case of face perception, using artificial neural networks to test the hypothesis that functional segregation of face recognition reflects a computational optimization for the broader problem of visual recognition of faces and other categories. We find that networks trained on object recognition perform poorly at face recognition and vice versa, and that networks optimized for both tasks spontaneously segregate themselves into separate systems for faces and objects. We then show the same spontaneous segregation, to varying degrees, for other visual categories, revealing a widespread tendency for task optimization (without built-in task-specific inductive biases) to lead to functional specialization in machines and, we conjecture, also in brains.

Language: English

Citations

97

A self-supervised domain-general learning framework for human ventral stream representation
Talia Konkle, George A. Alvarez

Nature Communications, Journal Year: 2022, Volume and Issue: 13(1)

Published: Jan. 25, 2022

Abstract: Anterior regions of the ventral visual stream encode substantial information about object categories. Are top-down category-level forces critical for arriving at this representation, or can it be formed purely through domain-general learning of natural image structure? Here we present a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the local similarity structure of this feature space. Further, these models learn hierarchical features that capture brain responses across the human ventral stream, on par with category-supervised models. These results provide computational support for a domain-general learning framework guiding the formation of visual representation, where the proximate goal is not to explicitly encode category information, but instead to learn unique, compressed descriptions of the visual world.
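
The learning objective described here is instance-level rather than category-level. As a rough sketch of that idea (a SimCLR-style contrastive loss standing in for the paper's own instance-level objective, so the details differ), two augmented views of the same image are pulled together while other images in the batch serve as negatives:

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Pull two views of the same image together; push other images apart.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2B, dim)
    sim = z @ z.T / temperature                 # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))           # exclude self-similarity
    b = z1.shape[0]
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with random embeddings
loss = instance_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
```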

Language: English

Citations

83

Brain-inspired learning in artificial neural networks: A review
Samuel Schmidgall, Rojin Ziaei, Jascha Achterberg

et al.

APL Machine Learning, Journal Year: 2024, Volume and Issue: 2(2)

Published: May 9, 2024

Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs' operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to improve these networks' capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. In this review, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.

Language: English

Citations

38

Qualitative similarities and differences in visual object representations between brains and deep networks
Georgin Jacob, R. T. Pramod, Harish Katti

et al.

Nature Communications, Journal Year: 2021, Volume and Issue: 12(1)

Published: March 25, 2021

Abstract: Deep neural networks have revolutionized computer vision, and their object representations across layers match coarsely with visual cortical areas in the brain. However, whether these representations exhibit qualitative patterns seen in human perception or the brain remains unresolved. Here, we recast well-known perceptual phenomena in terms of distance comparisons and ask whether they are present in feedforward deep networks trained for object recognition. Some phenomena were present even in randomly initialized networks, such as the global advantage effect, sparseness, and relative size. Many others emerged only after recognition training, such as the Thatcher effect, mirror confusion, Weber's law, relative size, multiple object normalization, and correlated sparseness. Yet others were absent even in trained networks, such as 3D shape processing, surface invariance, occlusion, natural parts, and the global advantage. These findings indicate sufficient conditions for the emergence of these phenomena in brains and deep networks, and offer clues to properties that could be incorporated to improve deep networks.
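
The distance-comparison framing is straightforward to operationalize on model features. Below is a small hedged sketch for one phenomenon (mirror confusion); the index, its name, and the exact normalization are assumptions for illustration and may differ from the paper's definitions.

```python
import numpy as np

def mirror_confusion_index(feat_orig: np.ndarray,
                           feat_lr: np.ndarray,
                           feat_ud: np.ndarray) -> float:
    """Distance-comparison test for mirror confusion.

    feat_*: (n_images, n_units) model features for original images, their
    left-right mirrors, and their up-down flips. Mirror confusion predicts
    that originals lie closer to left-right mirrors than to up-down flips.
    """
    d_lr = np.linalg.norm(feat_orig - feat_lr, axis=1)  # distance to left-right mirror
    d_ud = np.linalg.norm(feat_orig - feat_ud, axis=1)  # distance to up-down flip
    return float(np.mean((d_ud - d_lr) / (d_ud + d_lr)))  # > 0 indicates the effect
```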

Language: English

Citations

102

SAYCam: A Large, Longitudinal Audiovisual Dataset Recorded From the Infant’s Perspective
Jessica Sullivan, Michelle Mei, Andrew Perfors

et al.

Open Mind, Journal Year: 2021, Volume and Issue: 5, P. 20 - 29

Published: Jan. 1, 2021

We introduce a new resource: the SAYCam corpus. Infants aged 6-32 months wore a head-mounted camera for approximately 2 hr per week, over the course of two-and-a-half years. The result is a large, naturalistic, longitudinal dataset of infant- and child-perspective videos. Over 200,000 words of naturalistic speech have already been transcribed. Similarly, the dataset is searchable using a number of criteria (e.g., age of participant, location, setting, objects present). The resulting resource will be of broad use to psychologists, linguists, and computer scientists.

Language: English

Citations

93

Computational models of category-selective brain regions enable high-throughput tests of selectivity
N. Apurva Ratan Murty, Pouya Bashivan, Alex Abate

et al.

Nature Communications, Journal Year: 2021, Volume and Issue: 12(1)

Published: Sept. 20, 2021

Abstract: Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that these high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable models of each category-selective region, strengthen the evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
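
The general recipe behind such encoding models (a linear readout from DNN features, fitted to measured responses and evaluated on held-out images) can be sketched as follows; the ridge regression, the train/test split, and the correlation metric below are simplifying assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def fit_encoding_model(features: np.ndarray, responses: np.ndarray, alpha: float = 1.0):
    """Fit a linear encoding model from DNN features to a region's responses.

    features: (n_images, n_features) activations from a pretrained network.
    responses: (n_images,) measured response of a region (e.g., FFA) per image.
    Returns the fitted model and its prediction accuracy (Pearson r) on held-out images.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        features, responses, test_size=0.2, random_state=0)
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
    return model, float(r)
```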

Language: English

Citations

83