Deep language algorithms predict semantic comprehension from brain activity
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King

et al.

Scientific Reports, Journal year: 2022, Issue 12(1)

Published: Sep. 29, 2022

Abstract Deep language algorithms, like GPT-2, have demonstrated remarkable abilities to process text, and now constitute the backbone of automatic translation, summarization and dialogue. However, whether these models encode information that relates to human comprehension still remains controversial. Here, we show that the representations of GPT-2 not only map onto the brain responses to spoken stories, but also predict the extent to which subjects understand the corresponding narratives. To this end, we analyze 101 subjects recorded with functional Magnetic Resonance Imaging while listening to 70 min of short stories. We then fit a linear mapping model to predict brain activity from GPT-2's activations. Finally, we show that this mapping reliably correlates ( $$\mathcal{R}=0.50,\ p<10^{-15}$$ ) with subjects' comprehension scores as assessed for each story. This effect peaks in the angular, medial temporal and supra-marginal gyri, and is best accounted for by the long-distance dependencies generated in the deep layers of GPT-2. Overall, this study shows how deep language models help clarify the brain computations underlying comprehension.
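The analysis described above is, in essence, a linear encoding model followed by a behavioural correlation. A minimal sketch of that idea, assuming sklearn-style, pre-aligned arrays (gpt2_activations, bold and comprehension_scores are illustrative names, not the authors' code):

```python
# Illustrative encoding analysis (not the authors' code); the input arrays
# are assumed to be pre-aligned in time.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def brain_score(gpt2_activations, bold):
    """Cross-validated correlation between predicted and measured fMRI signals.

    gpt2_activations: (n_timepoints, n_features) model activations aligned to the audio
    bold:             (n_timepoints, n_voxels) BOLD responses of one subject
    """
    model = RidgeCV(alphas=np.logspace(-1, 6, 8))
    pred = cross_val_predict(model, gpt2_activations, bold, cv=5)
    # one correlation per voxel, averaged over voxels
    return np.mean([pearsonr(pred[:, v], bold[:, v])[0] for v in range(bold.shape[1])])

# Relate per-subject encoding performance to behaviour (illustrative):
# scores = [brain_score(X, bold_s) for bold_s in all_subject_bold]
# r, p = pearsonr(scores, comprehension_scores)
```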

Language: English

Over-reliance on English hinders cognitive science
Damián E. Blasí, Joseph Henrich, Evangelia Adamou

et al.

Trends in Cognitive Sciences, Journal year: 2022, Issue 26(12), pp. 1153 - 1170

Published: Oct. 14, 2022

English is the dominant language in the study of human cognition and behavior: the individuals studied by cognitive scientists, as well as most of the scientists themselves, are frequently English speakers. However, English differs from other languages in ways that have consequences for the whole of the cognitive sciences, reaching far beyond the study of language itself. Here, we review an emerging body of evidence that highlights how the particular characteristics of English and the linguistic habits of English speakers bias the field, both warping research programs (e.g., overemphasizing features and mechanisms present in English over those of other languages) and overgeneralizing observations from English speakers' behaviors, brains, and cognition to our entire species. We propose mitigating strategies that could help avoid some of these pitfalls.

Language: English

Cited by

243

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen

et al.

Proceedings of the National Academy of Sciences, Journal year: 2022, Issue 119(32)

Published: Aug. 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
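A hedged sketch of how contextual predictions can be quantified with GPT-2 as word-by-word surprisal, the kind of regressor such studies relate to brain responses; it assumes the Hugging Face transformers and torch packages and is not the paper's own pipeline:

```python
# Token-level surprisal from GPT-2 (illustrative; assumes `pip install torch transformers`).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisal(text: str) -> torch.Tensor:
    """Return per-token surprisal (-log2 p of each token given its left context)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)   # predictions for tokens 1..n-1
    targets = ids[0, 1:]
    nll = -logprobs[torch.arange(targets.numel()), targets]
    return nll / torch.log(torch.tensor(2.0))               # nats -> bits

print(token_surprisal("The brain predicts upcoming words at multiple levels."))
```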

Language: English

Cited by

232

Brains and algorithms partially converge in natural language processing
Charlotte Caucheteux, Jean-Rémi King

Communications Biology, Journal year: 2022, Issue 5(1)

Published: Feb. 16, 2022

Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity remains currently unknown. Here, we systematically compare a variety of deep language models to identify the computational principles that lead them to generate brain-like representations of sentences. Specifically, we analyze the brain responses to 400 isolated sentences in a cohort of 102 subjects, each recorded for two hours with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We then test where and when each of these models maps onto the brain responses. Finally, we estimate how the architecture, training, and performance of these models independently account for the generation of brain-like representations. Our analyses reveal two main findings. First, the similarity between the models and the brain primarily depends on their ability to predict words from context. Second, this similarity reveals the rise and maintenance of perceptual, lexical, and compositional representations within each cortical region. Overall, this study shows that modern language models partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing.
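A minimal sketch, assuming the Hugging Face transformers package, of how layer-wise activations can be extracted from a language model so that each layer can be compared with fMRI/MEG responses through a linear mapping (illustrative, not the authors' code):

```python
# Layer-wise GPT-2 activations to be used as features of a linear brain mapping
# (illustrative; assumes `pip install torch transformers`).
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True).eval()

def layer_activations(sentence: str) -> dict:
    """Return {layer_index: (n_tokens, hidden_size) tensor} for one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states   # tuple: embedding layer + 12 blocks
    return {i: h[0] for i, h in enumerate(hidden)}

acts = layer_activations("Brains and algorithms partially converge.")
print({i: tuple(a.shape) for i, a in acts.items()})
```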

Language: English

Cited by

213

Semantic reconstruction of continuous language from non-invasive brain recordings
Jerry Tang, Amanda LeBel, Shailee Jain

et al.

Nature Neuroscience, Journal year: 2023, Issue 26(5), pp. 858 - 866

Published: May 1, 2023

Language: English

Cited by

205

An investigation across 45 languages and 12 language families reveals a universal language network
Saima Malik-Moraleda, Dima Ayyash, Jeanne Gallée

et al.

Nature Neuroscience, Journal year: 2022, Issue 25(8), pp. 1014 - 1019

Published: July 18, 2022

Language: English

Cited by

193

Evidence of a predictive coding hierarchy in the human brain listening to speech
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King

et al.

Nature Human Behaviour, Journal year: 2023, Issue 7(3), pp. 430 - 441

Published: March 2, 2023

Abstract Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these language models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation for this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these models with predictions that span multiple timescales improves this brain mapping. Finally, we showed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive coding in language processing and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of human cognition.
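A minimal sketch of the "predictions spanning multiple timescales" idea, as an assumption about the general approach rather than the published implementation: augment each word's activation with activations several words ahead before fitting the brain mapping.

```python
# Augment current-word activations with "forecast" activations several words ahead
# (an illustrative reading of the multi-timescale idea, not the published code).
import numpy as np

def add_forecast(activations: np.ndarray, distances=(1, 3, 7)) -> np.ndarray:
    """Concatenate each word's activation with the activations d words ahead.

    activations: (n_words, n_features) language-model activations
    """
    n_words, _ = activations.shape
    parts = [activations]
    for d in distances:
        future = np.zeros_like(activations)
        future[: n_words - d] = activations[d:]   # end of the story padded with zeros
        parts.append(future)
    return np.concatenate(parts, axis=1)

X = np.random.randn(100, 768)       # e.g. GPT-2-sized activations for 100 words
print(add_forecast(X).shape)        # (100, 768 * 4)
```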

Language: English

Cited by

160

Symbols and mental programs: a hypothesis about human singularity
Stanislas Dehaene, Fosca Al Roumi, Yair Lakretz

et al.

Trends in Cognitive Sciences, Journal year: 2022, Issue 26(9), pp. 751 - 766

Published: Aug. 3, 2022

Language: English

Cited by

127

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, Idan Blank

et al.

Trends in Cognitive Sciences, Journal year: 2024, Issue 28(6), pp. 517 - 540

Published: March 19, 2024

Language: English

Cited by

125

Using artificial neural networks to ask ‘why’ questions of minds and brains
Nancy Kanwisher, Meenakshi Khosla, Katharina Dobs

et al.

Trends in Neurosciences, Journal year: 2023, Issue 46(3), pp. 240 - 254

Published: Jan. 17, 2023

Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question of 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these questions by asking when and why networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.

Language: English

Cited by

113

Deep problems with neural network models of human vision
Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović

et al.

Behavioral and Brain Sciences, Journal year: 2022, Issue 46

Published: Dec. 1, 2022

Abstract Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects from various behavioral datasets, and (3) DNNs do the best job in predicting brain signals in response to images from various brain datasets (e.g., single-cell responses or fMRI data). However, these datasets do not test hypotheses regarding what features are contributing to the good predictions, and we show that the predictions may be mediated by features that share little overlap with biological vision. More problematically, DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses, rather than models that compete in making the best predictions. We conclude by briefly summarizing promising modeling approaches that focus on psychological data.

Language: English

Cited by

111