Deep language algorithms predict semantic comprehension from brain activity
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King, et al.

Scientific Reports, Journal Year: 2022, Volume and Issue: 12(1)

Published: Sept. 29, 2022

Abstract Deep language algorithms, like GPT-2, have demonstrated remarkable abilities to process text, and now constitute the backbone of automatic translation, summarization and dialogue. However, whether these models encode information that relates to human comprehension still remains controversial. Here, we show that the representations of GPT-2 not only map onto the brain responses to spoken stories, but also predict the extent to which subjects understand the corresponding narratives. To this end, we analyze 101 subjects recorded with functional Magnetic Resonance Imaging while listening to 70 min of short stories. We then fit a linear mapping model to predict brain activity from GPT-2's activations. Finally, we show that this mapping reliably correlates ($\mathcal{R}=0.50$, $p<10^{-15}$) with subjects' comprehension scores as assessed for each story. This effect peaks in the angular, medial temporal and supra-marginal gyri, and is best accounted for by the long-distance dependencies generated in the deep layers of GPT-2. Overall, this study shows how deep language models help clarify the brain computations underlying comprehension.
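The linear mapping step described in this abstract — regressing brain activity onto GPT-2 activations and scoring the fit by correlation — can be sketched on synthetic data. This is a minimal illustration, not the authors' pipeline: the closed-form ridge solver, the penalty `alpha`, the train/test split and all variable names are assumptions.

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def encoding_score(X_train, Y_train, X_test, Y_test, alpha=1.0):
    """Fit a linear map from model activations X to brain responses Y,
    then score each output dimension ("voxel") by Pearson correlation
    between predicted and held-out responses."""
    W = fit_ridge(X_train, Y_train, alpha)
    pred = X_test @ W
    pred_c = pred - pred.mean(axis=0)
    true_c = Y_test - Y_test.mean(axis=0)
    num = (pred_c * true_c).sum(axis=0)
    den = np.sqrt((pred_c ** 2).sum(axis=0) * (true_c ** 2).sum(axis=0))
    return num / den

# Synthetic demo: "activations" X linearly drive "voxels" Y plus noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
W_true = rng.standard_normal((8, 3))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))
r = encoding_score(X[:150], Y[:150], X[150:], Y[150:])
```

On this synthetic data the held-out correlations are close to 1 by construction; on real fMRI they are far lower (the study reports $\mathcal{R}=0.50$ at the level of comprehension scores).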

Language: English

Over-reliance on English hinders cognitive science
Damián E. Blasí, Joseph Henrich, Evangelia Adamou, et al.

Trends in Cognitive Sciences, Journal Year: 2022, Volume and Issue: 26(12), P. 1153 - 1170

Published: Oct. 14, 2022

English is the dominant language in the study of human cognition and behavior: the individuals studied by cognitive scientists, as well as most of the scientists themselves, are frequently English speakers. However, English differs from other languages in ways that have consequences for the whole of the cognitive sciences, reaching far beyond the study of language itself. Here, we review an emerging body of evidence that highlights how the particular characteristics of English and the linguistic habits of English speakers bias the field, both by warping research programs (e.g., overemphasizing features and mechanisms present in English over those of other languages) and by overgeneralizing observations from English speakers' behaviors and brains to our entire species. We propose mitigating strategies that could help avoid some of these pitfalls.

Language: English

Citations: 243

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen, et al.

Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32)

Published: Aug. 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and the representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
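The core quantity in this abstract — the contextual predictability of each upcoming word — can be illustrated with a far simpler language model than GPT-2. The sketch below computes per-word surprisal, $-\log_2 P(w_t \mid w_{t-1})$, under an add-one-smoothed bigram model; the toy corpus, smoothing scheme and function names are illustrative assumptions, standing in for the GPT-2 probabilities used in the study.

```python
import math
from collections import Counter, defaultdict

def bigram_surprisal(train_tokens, test_tokens, vocab_size):
    """Per-word surprisal -log2 P(w_t | w_{t-1}) under an add-one-smoothed
    bigram model -- a crude stand-in for a deep language model's
    contextual word probabilities."""
    unigram = Counter(train_tokens)
    bigram = defaultdict(Counter)
    for prev, cur in zip(train_tokens, train_tokens[1:]):
        bigram[prev][cur] += 1
    surprisals = []
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        p = (bigram[prev][cur] + 1) / (unigram[prev] + vocab_size)
        surprisals.append(-math.log2(p))
    return surprisals

# Demo: a continuation seen in training is less surprising than a novel one.
train = "the cat sat on the mat the cat sat on the mat".split()
vocab = len(set(train))
s_seen = bigram_surprisal(train, ["the", "cat"], vocab)[0]
s_novel = bigram_surprisal(train, ["the", "zebra"], vocab)[0]
```

In encoding analyses like those described here, such word-by-word surprisal values are regressed against brain responses to test whether less predictable words evoke larger signals.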

Language: English

Citations: 232

Brains and algorithms partially converge in natural language processing
Charlotte Caucheteux, Jean-Rémi King

Communications Biology, Journal Year: 2022, Volume and Issue: 5(1)

Published: Feb. 16, 2022

Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity remains currently unknown. Here, we systematically compare a variety of deep language models to identify the computational principles that lead them to generate brain-like representations of sentences. Specifically, we analyze the brain responses to 400 isolated sentences in a cohort of 102 subjects, each recorded for two hours with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We then test where and when each of these models maps onto the brain responses. Finally, we estimate how the architecture, training, and performance of these models independently account for the generation of brain-like representations. Our analyses reveal two main findings. First, the similarity between the models and the brain primarily depends on their ability to predict words from context. Second, this similarity reveals the rise and maintenance of perceptual, lexical, and compositional representations within each cortical region. Overall, this study shows that modern language models partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing.

Language: English

Citations: 213

Semantic reconstruction of continuous language from non-invasive brain recordings

Jerry Tang, Amanda LeBel, Shailee Jain, et al.

Nature Neuroscience, Journal Year: 2023, Volume and Issue: 26(5), P. 858 - 866

Published: May 1, 2023

Language: English

Citations: 205

An investigation across 45 languages and 12 language families reveals a universal language network
Saima Malik-Moraleda, Dima Ayyash, Jeanne Gallée, et al.

Nature Neuroscience, Journal Year: 2022, Volume and Issue: 25(8), P. 1014 - 1019

Published: July 18, 2022

Language: English

Citations: 193

Evidence of a predictive coding hierarchy in the human brain listening to speech
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King, et al.

Nature Human Behaviour, Journal Year: 2023, Volume and Issue: 7(3), P. 430 - 441

Published: March 2, 2023

Abstract Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these language models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation for this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these activations with predictions that span multiple timescales improves this brain mapping. Finally, we showed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive processing in language and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of cognition.
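The idea of "enhancing activations with predictions that span multiple timescales" can be mimicked in its simplest form by concatenating each word's representation with those of the next few words (a forecast window), and then feeding the widened features to the same linear brain-mapping step. This is a simplified sketch of that idea, not the authors' implementation; the zero-padding, the `horizon` parameter and all names are assumptions.

```python
import numpy as np

def add_forecast(acts, horizon):
    """Concatenate each word's activation vector with the activations of
    the next `horizon` words, zero-padding at the end of the sequence.
    acts: array of shape (n_words, dim).
    Returns an array of shape (n_words, dim * (horizon + 1))."""
    n, d = acts.shape
    padded = np.vstack([acts, np.zeros((horizon, d))])
    cols = [padded[k : k + n] for k in range(horizon + 1)]
    return np.concatenate(cols, axis=1)

# Demo: 4 "words" with 3-dimensional activations, forecast window of 2.
acts = np.arange(12, dtype=float).reshape(4, 3)
X = add_forecast(acts, horizon=2)
```

In the study, improvement of the brain-mapping score after such augmentation is the evidence for forecast representations; a deep model's own predicted future embeddings would replace the ground-truth future activations used in this toy version.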

Language: English

Citations: 160

Symbols and mental programs: a hypothesis about human singularity
Stanislas Dehaene, Fosca Al Roumi, Yair Lakretz, et al.

Trends in Cognitive Sciences, Journal Year: 2022, Volume and Issue: 26(9), P. 751 - 766

Published: Aug. 3, 2022

Language: English

Citations: 127

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, Idan Blank, et al.

Trends in Cognitive Sciences, Journal Year: 2024, Volume and Issue: 28(6), P. 517 - 540

Published: March 19, 2024

Language: English

Citations: 125

Using artificial neural networks to ask ‘why’ questions of minds and brains
Nancy Kanwisher, Meenakshi Khosla, Katharina Dobs, et al.

Trends in Neurosciences, Journal Year: 2023, Volume and Issue: 46(3), P. 240 - 254

Published: Jan. 17, 2023

Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question of 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.

Language: English

Citations: 113

Deep problems with neural network models of human vision
Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, et al.

Behavioral and Brain Sciences, Journal Year: 2022, Volume and Issue: 46

Published: Dec. 1, 2022

Abstract Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors on behavioral datasets, and (3) DNNs do the best job in predicting brain signals in response to images from various brain datasets (e.g., single cell responses or fMRI data). However, these datasets do not test hypotheses regarding what features are contributing to good predictions, and we show that the predictions may be mediated by DNNs that share little overlap with biological vision. More problematically, we show that DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses, rather than compete on making the best predictions. We conclude by briefly summarizing promising modeling approaches that focus on psychological data.

Language: English

Citations: 111