Gain, not concomitant changes in spatial receptive field properties, improves task performance in a neural network attention model
Kai J Fox, Daniel Birman, Justin L. Gardner, et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: May 15, 2023

Attention allows us to focus sensory processing on behaviorally relevant aspects of the visual world. One potential mechanism of attention is a change in the gain of sensory responses. However, changing gain at early stages of processing could have multiple downstream consequences for visual processing. Which, if any, of these effects can account for the benefits of attention in detection and discrimination tasks? Using a model of primate visual cortex, we document how Gaussian-shaped gain modulation results in changes to spatial tuning properties. Forcing the model to use only these changes failed to produce any benefit in task performance. Instead, we found that gain alone was both necessary and sufficient to explain improved category discrimination during attention. Our results show how gain modulation can give rise to changes in receptive fields which are not themselves responsible for enhancing performance.
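As an illustration of the kind of manipulation described above, the short Python sketch below applies a Gaussian-shaped multiplicative gain to a convolutional feature map at an attended location. The feature-map size, gain amplitude, and width are assumed values chosen for illustration; this is not the authors' model code.

```python
# Minimal sketch (illustrative assumptions, not the published model):
# apply a Gaussian-shaped multiplicative gain to a CNN feature map.
import numpy as np

def gaussian_gain(height, width, center, sigma, amplitude):
    """Multiplicative gain field: 1 + amplitude * Gaussian(center, sigma)."""
    ys, xs = np.mgrid[0:height, 0:width]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return 1.0 + amplitude * np.exp(-d2 / (2.0 * sigma ** 2))

# Feature maps from an early model layer: (channels, height, width)
features = np.random.rand(64, 56, 56)

# Attend to a location in the upper-left of the visual field
gain = gaussian_gain(56, 56, center=(14, 14), sigma=6.0, amplitude=0.5)
attended = features * gain  # gain field broadcasts across channels

# Downstream layers that pool over space inherit changed spatial tuning
# from this modulation even though only response gain was altered.
print(attended.shape)  # (64, 56, 56)
```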

Language: English

The neuroconnectionist research programme
Adrien Doerig, Rowan P. Sommers, Katja Seeliger, et al.

Nature Reviews Neuroscience, Journal Year: 2023, Volume and Issue: 24(7), P. 431 - 450

Published: May 30, 2023

Language: English

Citations: 134

An ecologically motivated image dataset for deep learning yields better models of human vision
Johannes Mehrer, Courtney J. Spoerer, Emer C. Jones, et al.

Proceedings of the National Academy of Sciences, Journal Year: 2021, Volume and Issue: 118(8)

Published: Feb. 15, 2021

Significance Inspired by core principles of information processing in the brain, deep neural networks (DNNs) have demonstrated remarkable success in computer vision applications. At the same time, DNNs trained on the task of object classification exhibit similarities to representations found in the primate visual system. This result is surprising because the datasets commonly used for training were designed to be engineering challenges. Here, we use linguistic corpus statistics and human concreteness ratings as guiding principles to design a resource that more closely mirrors categories that are relevant to humans. The result is ecoset, a collection of 1.5 million images from 565 basic-level categories. We show that ecoset-trained DNNs yield better models of higher-level visual cortex and human behavior.
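The selection principle described above, keeping categories that are both common in everyday language and concrete enough to picture, can be illustrated with a small hypothetical filter. The word lists, ratings, and thresholds below are toy assumptions, not the actual ecoset construction pipeline.

```python
# Toy illustration (assumed data and thresholds, not the ecoset pipeline):
# keep candidate categories that are corpus-frequent and rated as concrete.
corpus_frequency = {"dog": 120000, "justice": 95000, "hammer": 30000, "entropy": 8000}
concreteness = {"dog": 4.9, "justice": 1.5, "hammer": 4.8, "entropy": 1.9}  # 1-5 ratings

MIN_FREQUENCY = 10000    # category must be common in everyday language
MIN_CONCRETENESS = 4.0   # and refer to a concrete, picturable thing

selected = [
    word for word in corpus_frequency
    if corpus_frequency[word] >= MIN_FREQUENCY
    and concreteness.get(word, 0.0) >= MIN_CONCRETENESS
]
print(selected)  # ['dog', 'hammer']
```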

Language: English

Citations: 112

Deep problems with neural network models of human vision
Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, et al.

Behavioral and Brain Sciences, Journal Year: 2022, Volume and Issue: 46

Published: Dec. 1, 2022

Abstract Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from behavioral datasets, and (3) DNNs do the best job in predicting brain signals in response to images taken from brain datasets (e.g., single-cell responses or fMRI data). However, these behavioral and brain datasets do not test hypotheses regarding what features are contributing to good predictions, and we show that the predictions may be mediated by DNNs that share little overlap with biological vision. More problematically, DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses, rather than compete on making the best predictions. We conclude by briefly summarizing promising modeling approaches that focus on psychological data.

Language: English

Citations: 108

A self-supervised domain-general learning framework for human ventral stream representation
Talia Konkle, George A. Alvarez

Nature Communications, Journal Year: 2022, Volume and Issue: 13(1)

Published: Jan. 25, 2022

Abstract Anterior regions of the ventral visual stream encode substantial information about object categories. Are top-down category-level forces critical for arriving at this representation, or can this representation be formed purely through domain-general learning of natural image structure? Here we present a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the local similarity structure of this feature space. Further, these models learn hierarchical features which capture the structure of brain responses across the human ventral visual stream, on par with category-supervised models. These results provide computational support for a domain-general learning framework guiding the formation of visual representation, where the proximate goal is not to explicitly encode category information, but instead to learn unique, compressed descriptions of the visual world.
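The instance-level objective described above can be sketched as a contrastive (InfoNCE-style) loss in which two views of the same image are pulled together in a low-dimensional embedding space and pushed apart from recently encountered images held in a memory bank. Embedding size, temperature, and bank size are assumed values; this is a simplified illustration, not the published model.

```python
# Simplified sketch (assumed dimensions, not the published model): an
# instance-level contrastive loss over two views of one image plus a
# memory bank of recently encountered images acting as negatives.
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def instance_contrastive_loss(view_a, view_b, memory_bank, temperature=0.07):
    """InfoNCE-style loss: pull the two views together, push negatives away."""
    a, b = normalize(view_a), normalize(view_b)
    negatives = normalize(memory_bank)            # (n_negatives, dim)
    pos = np.dot(a, b) / temperature              # similarity to the other view
    neg = negatives @ a / temperature             # similarities to stored images
    logits = np.concatenate([[pos], neg])
    return -pos + np.log(np.sum(np.exp(logits)))  # -log softmax of the positive

dim = 128
view_a = np.random.randn(dim)              # embedding of augmented view 1
view_b = np.random.randn(dim)              # embedding of augmented view 2
memory_bank = np.random.randn(4096, dim)   # recently encountered images
print(instance_contrastive_loss(view_a, view_b, memory_bank))
```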

Language: English

Citations: 83

Artificial Neural Network Language Models Predict Human Brain Responses to Language Even After a Developmentally Realistic Amount of Training
Eghbal A. Hosseini, Martin Schrimpf, Yian Zhang, et al.

Neurobiology of Language, Journal Year: 2024, Volume and Issue: 5(1), P. 43 - 63

Published: Jan. 1, 2024

Abstract Artificial neural networks have emerged as computationally plausible models of human language processing. A major criticism of these models is that the amount of training data they receive far exceeds that of humans during language learning. Here, we use two complementary approaches to ask how the models' ability to capture fMRI responses to sentences is affected by the amount of training data. First, we evaluate GPT-2 models trained on 1 million, 10 million, 100 million, or 1 billion words against an fMRI benchmark. We consider the 100-million-word model to be developmentally realistic in terms of training data, given that this amount is similar to what children are estimated to be exposed to during the first years of life. Second, we test the performance of a GPT-2 model trained on a 9-billion-token dataset to reach state-of-the-art next-word prediction performance against the same benchmark at different stages of training. Across both approaches, we find that (i) the models trained on a developmentally realistic amount of data already achieve near-maximal performance in capturing fMRI responses to sentences. Further, (ii) lower perplexity, a measure of next-word prediction performance, is associated with stronger alignment with human data, suggesting that models that have received enough training to achieve sufficiently high next-word prediction performance also acquire representations that are predictive of human fMRI responses. In tandem, these findings establish that although some training is necessary for the models' predictive ability, a developmentally realistic amount of training (∼100 million words) may suffice.
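The benchmark logic, regressing fMRI responses on model activations and scoring held-out predictions, can be sketched as follows. The arrays are synthetic and the ridge-regression setup is an assumption chosen for illustration, not the authors' evaluation pipeline.

```python
# Hedged sketch (synthetic data, assumed regression setup): cross-validated
# ridge regression from language-model activations to fMRI responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

n_sentences, n_features, n_voxels = 200, 768, 50
activations = np.random.randn(n_sentences, n_features)  # e.g., one GPT-2 layer
fmri = np.random.randn(n_sentences, n_voxels)           # voxel responses

scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(activations):
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(activations[train], fmri[train])
    pred = model.predict(activations[test])
    # Pearson correlation per voxel between predicted and observed responses
    r = [np.corrcoef(pred[:, v], fmri[test][:, v])[0, 1] for v in range(n_voxels)]
    scores.append(np.mean(r))

print("mean cross-validated correlation:", round(float(np.mean(scores)), 3))
```

Repeating such an evaluation for checkpoints trained on different amounts of data would trace out how alignment with the fMRI benchmark changes over training.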

Language: English

Citations: 17

Performance vs. competence in human–machine comparisons
Chaz Firestone

Proceedings of the National Academy of Sciences, Journal Year: 2020, Volume and Issue: 117(43), P. 26562 - 26571

Published: Oct. 13, 2020

Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity, raising the exciting possibility that such systems represent the world as we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. when such failures are only superficial or peripheral? This article draws on a foundational insight from cognitive science, the distinction between performance and competence, to encourage “species-fair” comparisons of humans and machines. The performance/competence distinction urges us to consider whether the failure of a system to behave as ideally hypothesized, or the failure of one creature to behave like another, arises not because the system lacks the relevant knowledge or internal capacities (“competence”), but instead because of constraints on demonstrating that knowledge (“performance”). I argue that this distinction has been neglected by research comparing human and machine behavior, and that it should be essential to any such comparison. Focusing on the domain of image classification, I identify three factors contributing to the species-fairness of human–machine comparisons, extracted from recent work that equates such constraints. Species-fair comparisons level the playing field between natural and artificial intelligence, so that we can separate the more superficial differences from those that may be enduring.

Language: English

Citations: 125

Individual differences among deep neural network models
Johannes Mehrer, Courtney J. Spoerer, Nikolaus Kriegeskorte, et al.

Nature Communications, Journal Year: 2020, Volume and Issue: 11(1)

Published: Nov. 12, 2020

Abstract Deep neural networks (DNNs) excel at visual recognition tasks and are increasingly used as a modeling framework for neural computations in the primate brain. Just like individual brains, each DNN has a unique connectivity and representational profile. Here, we investigate individual differences among DNN instances that arise from varying only the random initialization of the network weights. Using tools typically employed in systems neuroscience, we show that this minimal change in initial conditions prior to training leads to substantial differences in intermediate and higher-level network representations, despite similar network-level classification performance. We locate the origins of these effects in an under-constrained alignment of category exemplars, rather than in misaligned category centroids. These results call into question the common practice of using single networks to derive insights into neural information processing, and suggest that computational neuroscientists working with DNNs may need to base their inferences on groups of multiple network instances.
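A minimal way to quantify such individual differences is to compare representational dissimilarity matrices (RDMs) from two instances that differ only in their random seed. The sketch below uses random stand-ins for layer activations; it illustrates the comparison, not the paper's analysis code.

```python
# Minimal sketch (random stand-ins, not the paper's code): compare the
# representations of two seed-varied network instances via RSA.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_images, n_units = 100, 512
rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(1)

# Stand-ins for one layer's activations in two instances trained from different seeds
activations_a = rng_a.standard_normal((n_images, n_units))
activations_b = rng_b.standard_normal((n_images, n_units))

# Representational dissimilarity matrices (condensed form) for each instance
rdm_a = pdist(activations_a, metric="correlation")
rdm_b = pdist(activations_b, metric="correlation")

# Low agreement between RDMs indicates representational individual differences
# even when the two instances classify images equally well.
rho, _ = spearmanr(rdm_a, rdm_b)
print("RDM agreement (Spearman rho):", round(float(rho), 3))
```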

Language: English

Citations: 111

Reassessing hierarchical correspondences between brain and deep networks through direct interface
Nicholas J. Sexton, Bradley C. Love

Science Advances, Journal Year: 2022, Volume and Issue: 8(28)

Published: July 13, 2022

Functional correspondences between deep convolutional neural networks (DCNNs) and the mammalian visual system support a hierarchical account in which successive stages of processing contain ever higher-level information. However, these correspondences between brain and model activity involve shared, not task-relevant, variance. We propose a stricter test of correspondence: if a DCNN layer corresponds to a brain region, then replacing that layer's activity with brain activity should successfully drive the DCNN's object recognition decision. Using this approach on three datasets, we found that all regions along the ventral visual stream best corresponded with later model layers, indicating that all of these regions contained higher-level information about object category. Time course analyses suggest that long-range recurrent connections transmit object class information from late to early visual areas.
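The interfacing idea, learning a mapping from brain activity into a layer's activation space and letting the remaining stages produce the decision, can be sketched as follows. The linear readout standing in for the later network stages, and all of the data, are assumptions for illustration only.

```python
# Hedged sketch (synthetic stand-ins, not the published method): substitute
# mapped brain activity for a DCNN layer and read out a recognition decision.
import numpy as np
from sklearn.linear_model import Ridge

n_stimuli, n_voxels, layer_dim, n_classes = 300, 80, 256, 10
rng = np.random.default_rng(0)

brain_activity = rng.standard_normal((n_stimuli, n_voxels))    # fMRI patterns
layer_activity = rng.standard_normal((n_stimuli, layer_dim))   # DCNN layer activations
readout_weights = rng.standard_normal((layer_dim, n_classes))  # stand-in for later stages

# 1. Fit the brain-to-layer mapping on a training split
mapping = Ridge(alpha=1.0).fit(brain_activity[:200], layer_activity[:200])

# 2. On held-out stimuli, replace the layer's activity with mapped brain activity
mapped = mapping.predict(brain_activity[200:])

# 3. Drive the remaining stages (here, a linear readout) to a decision;
#    above-chance accuracy would indicate the region carries decision-relevant
#    information at that level of the hierarchy.
decisions = np.argmax(mapped @ readout_weights, axis=1)
print(decisions[:10])
```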

Language: English

Citations: 54

A large and rich EEG dataset for modeling human visual object recognition
Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, et al.

NeuroImage, Journal Year: 2022, Volume and Issue: 264, P. 119754 - 119754

Published: Nov. 15, 2022

Language: English

Citations: 39

Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions
Greta Tuckute, Jenelle Feather, Dana Boebinger, et al.

PLoS Biology, Journal Year: 2023, Volume and Issue: 21(12), P. e3002366 - e3002366

Published: Dec. 13, 2023

Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system, but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific tuning properties, with the overall best predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory brain responses in their entirety.
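The stage-to-region comparison can be illustrated by fitting, for each model stage, a cross-validated regression to responses in a cortical region and asking which stage predicts that region best. Stage features, regional responses, and the regression settings below are synthetic placeholders, not the study's pipeline.

```python
# Illustrative sketch (synthetic placeholders, not the study's pipeline):
# find the model stage that best predicts each auditory cortical region.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

n_sounds = 120
stages = {f"stage_{i}": np.random.randn(n_sounds, 200) for i in range(1, 6)}
regions = {"primary": np.random.randn(n_sounds), "non_primary": np.random.randn(n_sounds)}

for region_name, response in regions.items():
    best_stage, best_r = None, -np.inf
    for stage_name, features in stages.items():
        pred = cross_val_predict(Ridge(alpha=10.0), features, response, cv=5)
        r = np.corrcoef(pred, response)[0, 1]
        if r > best_r:
            best_stage, best_r = stage_name, r
    print(region_name, "best predicted by", best_stage)
```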

Language: English

Citations: 26