The spatiotemporal neural dynamics of object location representations in the human brain
Monika Graumann, Caterina Ciuffi, Kshitij Dwivedi et al.

Nature Human Behaviour, Journal Year: 2022, Volume and Issue: 6(6), P. 796 - 811

Published: Feb. 24, 2022

Abstract: To interact with objects in complex environments, we must know what they are and where they are, in spite of challenging viewing conditions. Here, we investigated where, how and when representations of object location and category emerge in the human brain when objects appear on cluttered natural scene images, using a combination of functional magnetic resonance imaging, electroencephalography and computational models. We found location representations to emerge gradually along the ventral visual stream towards the lateral occipital complex, mirrored by a gradual emergence in deep neural networks. Time-resolved analysis suggested that computing object location involves recurrent processing in high-level visual cortex. Object category representations also emerged gradually along the ventral stream, with evidence for recurrent computations. These results resolve the spatiotemporal dynamics that give rise to representations of what objects are present and where, under challenging viewing conditions.
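
For readers unfamiliar with how DNN representations are compared to fMRI or EEG patterns, the following minimal sketch illustrates one common approach, representational similarity analysis, on synthetic arrays. It is an illustration of the general technique, not the authors' pipeline; all variable names, shapes, and data are assumptions.

```python
# Hypothetical sketch of a model-brain comparison in the spirit of the abstract:
# correlate a representational dissimilarity matrix (RDM) built from DNN layer
# activations with an RDM built from brain response patterns. All data are random
# stand-ins; shapes and names are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 50

# Placeholder data: (images x units) DNN activations, (images x voxels) fMRI patterns.
dnn_layer_activations = rng.standard_normal((n_images, 512))
roi_voxel_patterns = rng.standard_normal((n_images, 200))

# Condition-by-condition dissimilarity vectors (1 - Pearson correlation).
model_rdm = pdist(dnn_layer_activations, metric="correlation")
brain_rdm = pdist(roi_voxel_patterns, metric="correlation")

# Rank-correlate the two RDMs: higher rho = layer representation more brain-like.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"Model-brain RDM correlation (Spearman rho): {rho:.3f}")
```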

Language: English

The neural architecture of language: Integrative modeling converges on predictive processing
Martin Schrimpf, Idan Blank, Greta Tuckute et al.

Proceedings of the National Academy of Sciences, Journal Year: 2021, Volume and Issue: 118(45)

Published: Nov. 4, 2021

Significance: Language is a quintessentially human ability. Research has long probed the functional architecture of language in mind and brain using diverse neuroimaging, behavioral, and computational modeling approaches. However, adequate neurally mechanistic accounts of how meaning might be extracted from language are sorely lacking. Here, we report a first step toward addressing this gap by connecting recent artificial neural networks from machine learning to human recordings during language processing. We find that the most powerful models predict neural and behavioral responses across different datasets up to noise levels. Models that perform better at predicting the next word in a sequence also better predict brain measurements, providing computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the brain.
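
As an illustration of the comparison described above, the sketch below computes, for a handful of hypothetical models, a cross-validated "brain predictivity" from model features and correlates it with a stand-in next-word-prediction accuracy. This is a schematic under assumed shapes and synthetic data, not the authors' benchmark or pipeline.

```python
# Minimal sketch (not the authors' pipeline): for each candidate language model,
# measure (a) how well its sentence features linearly predict neural responses and
# (b) its next-word-prediction accuracy, then correlate the two across models.
# All data here are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sentences, n_voxels = 200, 50
neural_responses = rng.standard_normal((n_sentences, n_voxels))

brain_scores, next_word_accuracies = [], []
for _ in range(5):  # five hypothetical language models
    embeddings = rng.standard_normal((n_sentences, 300))  # stand-in model features
    # Brain predictivity: mean cross-validated R^2 of a ridge mapping to neural data.
    score = cross_val_score(RidgeCV(alphas=[1.0, 10.0, 100.0]),
                            embeddings, neural_responses, cv=5, scoring="r2").mean()
    brain_scores.append(score)
    next_word_accuracies.append(rng.uniform(0.2, 0.6))  # stand-in benchmark accuracy

r, _ = pearsonr(next_word_accuracies, brain_scores)
print(f"Correlation between next-word accuracy and brain predictivity: r = {r:.2f}")
```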

Language: English

Citations: 363

Extensive sampling for complete models of individual brains
Thomas Naselaris, Emily J. Allen, Kendrick Kay et al.

Current Opinion in Behavioral Sciences, Journal Year: 2021, Volume and Issue: 40, P. 45 - 51

Published: Jan. 23, 2021

Language: English

Citations: 138

Deep problems with neural network models of human vision
Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović et al.

Behavioral and Brain Sciences, Journal Year: 2022, Volume and Issue: 46

Published: Dec. 1, 2022

Abstract: Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job of predicting the pattern of human errors in behavioral datasets, and (3) DNNs do the best job of predicting brain signals in response to image datasets (e.g., single-cell responses or fMRI data). However, these benchmarks do not test hypotheses regarding what features are contributing to the good predictions, and we show that the predictions may be mediated by features that share little overlap with those used by biological vision. More problematically, DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses, rather than compete on making the best predictions. We conclude by briefly summarizing promising modeling approaches that focus on psychological data.

Language: English

Citations: 107

Large-scale evidence for logarithmic effects of word predictability on reading time
Cory Shain, Clara Meister, Tiago Pimentel et al.

Proceedings of the National Academy of Sciences, Journal Year: 2024, Volume and Issue: 121(10)

Published: Feb. 29, 2024

During real-time language comprehension, our minds rapidly decode complex meanings from sequences of words. The difficulty of doing so is known to be related to words' contextual predictability, but what cognitive processes do these predictability effects reflect? In one view, predictability effects reflect facilitation due to anticipatory processing of words that are predictable from context. This view predicts a linear effect of predictability on processing demand. In another view, predictability effects reflect the costs of probabilistic inference over sentence interpretations. This view predicts either a logarithmic or a superlogarithmic effect of predictability on processing demand, depending on whether it assumes pressures toward a uniform distribution of information over time. The empirical record is currently mixed. Here, we revisit this question at scale: we analyze six reading datasets, estimate next-word probabilities with diverse statistical language models, and model reading times using recent advances in nonlinear regression. Results support a logarithmic effect of word predictability on processing difficulty, which favors probabilistic inference as a key component of human language processing.
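
The contrast between the two linking functions can be made concrete with a small simulation: reading time is generated from surprisal, -log p(word | context), and then fit with either a linear-in-probability or a logarithmic predictor. This is a toy illustration with synthetic data, not the paper's analysis; all quantities are simulated.

```python
# Illustrative sketch (synthetic data, not the paper's analysis) of the linking
# functions contrasted above: reading time modeled as a linear function of raw
# probability p(word | context) versus of surprisal, -log2 p(word | context).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
p = rng.uniform(0.001, 0.999, size=2000)   # stand-in next-word probabilities
surprisal = -np.log2(p)                    # log predictability, in bits

# Simulate reading times that truly follow the logarithmic linking function.
reading_time = 200 + 15 * surprisal + rng.normal(0, 20, size=p.size)

linear_fit = LinearRegression().fit(p.reshape(-1, 1), reading_time)
log_fit = LinearRegression().fit(surprisal.reshape(-1, 1), reading_time)

print(f"R^2, linear-in-probability predictor: {linear_fit.score(p.reshape(-1, 1), reading_time):.3f}")
print(f"R^2, logarithmic (surprisal) predictor: {log_fit.score(surprisal.reshape(-1, 1), reading_time):.3f}")
```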

Language: English

Citations: 26

Neuromorphic computing at scale
Dhireesha Kudithipudi, Catherine D. Schuman, Craig M. Vineyard et al.

Nature, Journal Year: 2025, Volume and Issue: 637(8047), P. 801 - 812

Published: Jan. 22, 2025

Language: English

Citations: 4

Next-generation deep learning based on simulators and synthetic data
Celso M. de Melo, Antonio Torralba, Leonidas Guibas et al.

Trends in Cognitive Sciences, Journal Year: 2021, Volume and Issue: 26(2), P. 174 - 187

Published: Dec. 23, 2021

Language: English

Citations: 103

Computational models of category-selective brain regions enable high-throughput tests of selectivity
N. Apurva Ratan Murty, Pouya Bashivan, Alex Abate et al.

Nature Communications, Journal Year: 2021, Volume and Issue: 12(1)

Published: Sept. 20, 2021

Abstract: Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that the high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable models of each category-selective region, strengthen the evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
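
A hedged sketch of the general encoding-model recipe described above follows: map (stand-in) network features to a region's response with ridge regression, then screen a candidate image pool for the highest predicted responses. Feature extraction and data are placeholders, and none of the names come from the paper.

```python
# Hedged sketch of an ANN-based encoding model of the kind described above:
# fit a ridge regression from image features to a region's mean fMRI response,
# then rank new images by their predicted response. Feature extraction is
# stubbed with random arrays; names and sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(3)
n_train_images, n_features = 300, 1024

# Stand-ins for pretrained-DNN features and measured ROI responses (e.g., a face area).
train_features = rng.standard_normal((n_train_images, n_features))
roi_response = rng.standard_normal(n_train_images)

encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(train_features, roi_response)

# "High-throughput test": screen a large pool of candidate images and keep the
# ones the encoding model predicts will drive the region most strongly.
candidate_features = rng.standard_normal((10_000, n_features))
predicted = encoder.predict(candidate_features)
top_candidates = np.argsort(predicted)[::-1][:10]
print("Indices of top predicted-response images:", top_candidates)
```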

Language: English

Citations: 83

The neural architecture of language: Integrative modeling converges on predictive processing
Martin Schrimpf, Idan Blank, Greta Tuckute et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2020, Volume and Issue: unknown

Published: June 27, 2020

Abstract: The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models. By revealing trends across models, this approach yields novel insights into cognitive and neural mechanisms in the target domain. We here present a first systematic study taking this approach to higher-level cognition: human language processing, our species' signature cognitive skill. We find that the most powerful 'transformer' models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities (fMRI, ECoG). Models' neural fits ('brain score') and fits to behavioral responses are both strongly correlated with model accuracy on the next-word prediction task (but not other tasks). Model architecture appears to substantially contribute to neural fit. These results provide computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the brain. Significance: Language is a quintessentially human ability. Research has long probed the functional architecture of language in mind and brain using diverse imaging, behavioral, and computational approaches. However, adequate neurally mechanistic accounts of how meaning might be extracted from language are sorely lacking. Here, we report an important step toward addressing this gap by connecting recent artificial neural networks from machine learning to human recordings during language processing. The most powerful models predict neural and behavioral responses across different datasets up to noise levels. Models that perform better at predicting the next word in a sequence also better predict brain measurements, providing computationally explicit evidence that predictive processing fundamentally shapes the language comprehension mechanisms in the brain.
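
To make the "explainable variance" notion concrete, the sketch below estimates a noise ceiling from split-half reliability of repeated (synthetic) measurements and normalizes a model's raw predictivity by it. This is one common ceiling estimate, assumed here purely for illustration; it is not necessarily the authors' exact procedure.

```python
# Minimal sketch, under assumptions, of the "explainable variance" idea above:
# normalize a model's raw predictivity by a noise ceiling estimated from the
# split-half reliability of repeated neural measurements. All data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_sentences = 150
true_signal = rng.standard_normal(n_sentences)

# Two measurement repetitions of the same sentences, each with independent noise.
rep1 = true_signal + rng.normal(0, 0.8, n_sentences)
rep2 = true_signal + rng.normal(0, 0.8, n_sentences)

# Hypothetical model predictions that track the signal imperfectly.
model_pred = true_signal + rng.normal(0, 0.5, n_sentences)

split_half_r, _ = pearsonr(rep1, rep2)
# Spearman-Brown-corrected reliability approximates the maximum correlation a
# perfect model could reach against the averaged data.
noise_ceiling = np.sqrt(2 * split_half_r / (1 + split_half_r))

raw_r, _ = pearsonr(model_pred, (rep1 + rep2) / 2)
normalized_score = raw_r / noise_ceiling
print(f"Ceiling-normalized predictivity ('brain score'-like): {normalized_score:.2f}")
```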

Language: English

Citations: 77

Measuring and modeling the motor system with machine learning
Sebastien B Hausmann, Alessandro Marin Vargas, Alexander Mathis et al.

Current Opinion in Neurobiology, Journal Year: 2021, Volume and Issue: 70, P. 11 - 23

Published: June 8, 2021

The utility of machine learning in understanding the motor system is promising a revolution in how to collect, measure, and analyze motor data. The field of movement science already elegantly incorporates theory and engineering principles to guide experimental work, and in this review we discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems. We also give our perspective on new avenues, where markerless motion capture combined with biomechanical modeling and neural networks could be a new platform for hypothesis-driven research.
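
As a small illustration of one step in this pipeline, the sketch below applies PCA to (synthetic) markerless pose-estimation output to extract low-dimensional movement components; the data, shapes, and names are stand-ins, not taken from the review.

```python
# Illustrative sketch (random stand-in data) of one analysis step mentioned in
# the review: dimensionality reduction of markerless pose-estimation output,
# here PCA on flattened keypoint trajectories across video frames.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_frames, n_keypoints = 1000, 15

# Stand-in for pose-estimation output: (frames x keypoints x 2) x/y coordinates,
# accumulated over time so they resemble smooth trajectories.
keypoints_xy = rng.standard_normal((n_frames, n_keypoints, 2)).cumsum(axis=0)

# Flatten to (frames x features) and project onto a few principal components
# that summarize the dominant modes of movement.
flat = keypoints_xy.reshape(n_frames, -1)
pca = PCA(n_components=3)
movement_components = pca.fit_transform(flat - flat.mean(axis=0))
print("Variance explained by 3 components:", pca.explained_variance_ratio_.round(2))
```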

Language: English

Citations: 58

Capturing the objects of vision with neural networks
Benjamin Peters, Nikolaus Kriegeskorte

Nature Human Behaviour, Journal Year: 2021, Volume and Issue: 5(9), P. 1127 - 1144

Published: Sept. 20, 2021

Language: English

Citations: 56