Language in Brains, Minds, and Machines
Greta Tuckute, Nancy Kanwisher, Evelina Fedorenko

et al.

Annual Review of Neuroscience, Journal year: 2024, Issue 47(1), pp. 277-301

Published: April 26, 2024

It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties (their architecture, task performance, or training) are critical for capturing human neural responses to language, and we review studies that use LMs as in silico model organisms for testing hypotheses about language in the brain. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.

Language: English

Shared computational principles for language processing in humans and deep language models

Ariel Goldstein, Zaid Zada, Eliav Buchnik

et al.

Nature Neuroscience, Journal year: 2022, Issue 25(3), pp. 369-380

Published: March 1, 2022

Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in their contexts. Together, our findings suggest a biologically feasible computational framework for studying the neural basis of language.

Language: English

Cited by

296

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen

et al.

Proceedings of the National Academy of Sciences, Journal year: 2022, Issue 119(32)

Published: August 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
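The contextual predictions quantified in studies like this one are usually expressed as per-word surprisal, -log2 p(word | context), with a language model such as GPT-2 supplying the probability. A minimal sketch of the measure itself, using a hand-made toy distribution (the words and probabilities below are invented for illustration, not drawn from GPT-2):

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the model's probability for the observed word."""
    return -math.log2(prob)

# Illustrative (made-up) next-word distribution after some context
next_word_probs = {"mat": 0.4, "floor": 0.2, "roof": 0.05}

# A predictable continuation carries less surprisal than an unexpected one
print(surprisal(next_word_probs["mat"]))   # low surprisal
print(surprisal(next_word_probs["roof"]))  # high surprisal
```

In the actual analyses, these per-word surprisal values serve as regressors against the recorded brain responses.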

Language: English

Cited by

232

Brains and algorithms partially converge in natural language processing
Charlotte Caucheteux, Jean-Rémi King

Communications Biology, Journal year: 2022, Issue 5(1)

Published: February 16, 2022

Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity remains currently unknown. Here, we systematically compare a variety of deep language models to identify the computational principles that lead them to generate brain-like representations of sentences. Specifically, we analyze the brain responses to 400 isolated sentences in a cohort of 102 subjects, each recorded for two hours with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We then test where and when each of these algorithms maps onto the brain responses. Finally, we estimate how the architecture, training, and performance of these models independently account for the generation of brain-like representations. Our analyses reveal two main findings. First, the similarity between the algorithms and the brain primarily depends on their ability to predict words from context. Second, this similarity reveals the rise and maintenance of perceptual, lexical, and compositional representations within each cortical region. Overall, this study shows that modern language algorithms partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing.
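The model-to-brain comparisons described in this abstract are typically implemented as linear encoding models: a regularized regression from language-model activations to each voxel or sensor, scored by the correlation between predicted and observed responses. A minimal sketch on synthetic data (all dimensions and data here are invented for illustration; the actual studies fit cross-validated models on real fMRI/MEG recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

n_words, n_features, n_voxels = 200, 16, 8
X = rng.standard_normal((n_words, n_features))       # LM activations, one row per word
true_W = rng.standard_normal((n_features, n_voxels))
Y = X @ true_W + 0.1 * rng.standard_normal((n_words, n_voxels))  # simulated brain responses

# Closed-form ridge regression: W = (X^T X + lambda I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# The usual "brain score": per-voxel correlation between predicted
# and observed responses
Y_hat = X @ W
scores = [np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
print(np.mean(scores))
```

In practice the score is computed on held-out words, so that a high correlation reflects generalization rather than overfitting.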

Language: English

Cited by

213

Evidence of a predictive coding hierarchy in the human brain listening to speech
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King

et al.

Nature Human Behaviour, Journal year: 2023, Issue 7(3), pp. 430-441

Published: March 2, 2023

Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these language models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation for this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these algorithms with predictions that span multiple timescales improves this brain mapping. Finally, the analyses revealed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive processing in language and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of cognition.

Language: English

Cited by

160

The neuroconnectionist research programme

Adrien Doerig, Rowan P. Sommers, Katja Seeliger

et al.

Nature Reviews Neuroscience, Journal year: 2023, Issue 24(7), pp. 431-450

Published: May 30, 2023

Language: English

Cited by

136

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, Idan Blank

et al.

Trends in Cognitive Sciences, Journal year: 2024, Issue 28(6), pp. 517-540

Published: March 19, 2024

Language: English

Cited by

125

The language network as a natural kind within the broader landscape of the human brain
Evelina Fedorenko, Anna A. Ivanova, Tamar I. Regev

et al.

Nature Reviews Neuroscience, Journal year: 2024, Issue 25(5), pp. 289-312

Published: April 12, 2024

Language: English

Cited by

75

Do Large Language Models Know What Humans Know?
Sean Trott, Cameron R. Jones, Tyler H. Chang

et al.

Cognitive Science, Journal year: 2023, Issue 47(7)

Published: July 1, 2023

Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of this hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre-registered analyses, we present a linguistic version of the False Belief Task to both human participants and a large language model, GPT-3. Both are sensitive to others' beliefs, but while the language model significantly exceeds chance behavior, it does not perform as well as the humans, nor does it explain the full extent of their behavior, despite being exposed to more language than a human would in a lifetime. This suggests that while statistical learning may in part explain how humans develop the ability to reason about the mental states of others, other mechanisms are also responsible.

Language: English

Cited by

57

Language is primarily a tool for communication rather than thought

Evelina Fedorenko, Steven T. Piantadosi, Edward Gibson

et al.

Nature, Journal year: 2024, Issue 630(8017), pp. 575-586

Published: June 19, 2024

Language: English

Cited by

40

Driving and suppressing the human language network using large language models
Greta Tuckute, Aalok Sathe, Shashank Srikant

et al.

Nature Human Behaviour, Journal year: 2024, Issue 8(3), pp. 544-561

Published: January 3, 2024

Language: English

Cited by

35