Can an emerging field called ‘neural systems understanding’ explain the brain? DOI
George Musser

The Transmitter, Journal Year: 2024, Volume and Issue: unknown

Published: Jan. 1, 2024

Language: English

Dissociating language and thought in large language models DOI
Kyle Mahowald, Anna A. Ivanova, Idan Blank

et al.

Trends in Cognitive Sciences, Journal Year: 2024, Volume and Issue: 28(6), P. 517 - 540

Published: March 19, 2024

Language: English

Citations

116

Language in Brains, Minds, and Machines DOI
Greta Tuckute, Nancy Kanwisher, Evelina Fedorenko

et al.

Annual Review of Neuroscience, Journal Year: 2024, Volume and Issue: 47(1), P. 277 - 301

Published: April 26, 2024

It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties—their architecture, task performance, or training—are critical for capturing human neural responses to language, and we review studies using LMs as in silico model organisms for testing hypotheses about language in the brain. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts in language.
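The encoding and decoding analyses surveyed here typically reduce to a regularized linear map from LM activations to brain measurements. Below is a minimal sketch of a cross-validated encoding model of that kind, assuming simulated stand-ins (random vectors for the LM embeddings, synthetic voxel responses) rather than real model activations or fMRI data:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 600 sentences, 768-dim LM embeddings, 1,000 voxels.
n_sentences, n_dims, n_voxels = 600, 768, 1000
lm_embeddings = rng.standard_normal((n_sentences, n_dims))
true_map = rng.standard_normal((n_dims, n_voxels)) * 0.1
brain = lm_embeddings @ true_map + rng.standard_normal((n_sentences, n_voxels))

# Cross-validated encoding model: ridge regression from embeddings to voxels,
# scored by per-voxel Pearson correlation on held-out sentences.
fold_scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(lm_embeddings):
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    model.fit(lm_embeddings[train], brain[train])
    pred = model.predict(lm_embeddings[test])
    # z-score predictions and observations per voxel; mean of products = Pearson r
    p = (pred - pred.mean(0)) / pred.std(0)
    o = (brain[test] - brain[test].mean(0)) / brain[test].std(0)
    fold_scores.append((p * o).mean(0))

print("mean held-out encoding r:", float(np.mean(fold_scores)))
```

Held-out correlation per voxel is the standard "brain score"-style metric in this literature; the ridge penalty matters in practice because fMRI responses are noisy and LM embeddings are high-dimensional.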

Language: English

Citations

13

Brain-model neural similarity reveals abstractive summarization performance DOI Creative Commons
Zhejun Zhang, S.-L. Guo, W. H. Zhou

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Jan. 2, 2025

Language: English

Citations

1

Contextual feature extraction hierarchies converge in large language models and the brain DOI
Gavin Mischler, Yinghao Aaron Li, Stephan Bickel

et al.

Nature Machine Intelligence, Journal Year: 2024, Volume and Issue: unknown

Published: Nov. 26, 2024

Language: English

Citations

7

Lexical-Semantic Content, Not Syntactic Structure, Is the Main Contributor to ANN-Brain Similarity of fMRI Responses in the Language Network DOI Creative Commons
Carina Kauf, Greta Tuckute, Roger Lévy

et al.

Neurobiology of Language, Journal Year: 2023, Volume and Issue: 5(1), P. 7 - 42

Published: July 18, 2023

Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI data set of responses…
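One simple way to see what the title's lexical/syntactic dissociation involves is a word-order scramble, which preserves lexical-semantic content while destroying syntactic structure. The sketch below is illustrative only; the paper's actual perturbation conditions are not described in the truncated abstract above. The scrambled sentences would then be re-embedded with the same ANN to ask how much brain predictivity survives the loss of syntax.

```python
import random

def scramble_word_order(sentence: str, seed: int = 0) -> str:
    """Destroy syntactic structure while preserving lexical-semantic content
    by randomly permuting the words of a sentence."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

sentence = "the chef who ran the kitchen prepared a remarkable meal"
print(scramble_word_order(sentence))
```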

Language: English

Citations

16

Driving and suppressing the human language network using large language models DOI Creative Commons
Greta Tuckute, Aalok Sathe, Shashank Srikant

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: April 16, 2023

Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
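The search procedure described here can be sketched compactly: fit an encoding model mapping sentence embeddings to a scalar language-network response, then rank a large unseen pool by predicted response and keep the extremes. All inputs below are hypothetical stand-ins (random vectors in place of GPT embeddings and measured fMRI magnitudes):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical stand-ins: embeddings for 1,000 scanned sentences with a scalar
# language-network response each, plus a 50,000-sentence candidate pool.
emb_scanned = rng.standard_normal((1000, 512))
response = emb_scanned @ rng.standard_normal(512) * 0.05 + rng.standard_normal(1000)
emb_pool = rng.standard_normal((50000, 512))

# Step 1: fit an encoding model from sentence embeddings to response magnitude.
encoder = Ridge(alpha=10.0).fit(emb_scanned, response)

# Step 2: rank unseen candidates by predicted response; the extremes become
# the "drive" (top) and "suppress" (bottom) stimulus sets for new participants.
predicted = encoder.predict(emb_pool)
order = np.argsort(predicted)
suppress_ids, drive_ids = order[:10], order[-10:]
print("predicted suppress:", predicted[suppress_ids].round(2))
print("predicted drive:   ", predicted[drive_ids].round(2))
```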

Language: English

Citations

14

Large Language Models and the Wisdom of Small Crowds DOI Creative Commons
Sean Trott

Open Mind, Journal Year: 2024, Volume and Issue: 8, P. 723 - 738

Published: Jan. 1, 2024

Abstract Recent advances in Large Language Models (LLMs) have raised the question of replacing human subjects with LLM-generated data. While some believe that LLMs capture the “wisdom of the crowd”—due to their vast training data—empirical evidence for this hypothesis remains scarce. We present a novel methodological framework to test this: the “number needed to beat” (NNB), which measures how many humans are needed for a sample’s quality to rival the quality achieved by GPT-4, a state-of-the-art LLM. In a series of pre-registered experiments, we collect novel human data and demonstrate the utility of this method for four psycholinguistic datasets for English. We find that NNB > 1 for each dataset, but also that it varies across tasks (and in some cases is quite small, e.g., 2). We also introduce two “centaur” methods for combining LLM and human data, which outperform both stand-alone LLM and human samples. Finally, we analyze the trade-offs in data cost for each approach. While clear limitations remain, we suggest that this framework could guide decision-making about whether and how to integrate LLM-generated data into the research pipeline.
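One way to make the NNB metric concrete is the simulation below, in which “quality” is correlation with a ground-truth norm and NNB is the smallest human sample size whose averaged ratings match the LLM's quality. The numbers and the resampling scheme are illustrative assumptions, not Trott's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-ins: 200 items with ground-truth norms, noisy individual
# human ratings, and one LLM rating per item.
n_items, n_humans = 200, 50
truth = rng.standard_normal(n_items)
humans = truth[None, :] + rng.standard_normal((n_humans, n_items)) * 1.5
llm = truth + rng.standard_normal(n_items) * 0.6

llm_quality = pearson(llm, truth)

# NNB: the smallest n such that averaging n randomly sampled human ratings
# matches the ground truth at least as well as the LLM does (on average).
def number_needed_to_beat(n_resamples=200):
    for n in range(1, n_humans + 1):
        avg_quality = np.mean([
            pearson(humans[rng.choice(n_humans, n, replace=False)].mean(0), truth)
            for _ in range(n_resamples)
        ])
        if avg_quality >= llm_quality:
            return n
    return None  # even the full human sample never beats the LLM

print("LLM quality:", round(llm_quality, 3), "| NNB:", number_needed_to_beat())
```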

Language: English

Citations

5

Social Learning in Neural Agent-Based Models DOI
Igor Douven

Philosophy of Science, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 21

Published: Oct. 29, 2024

Abstract Agent-based models (ABMs) are widely used to study how individual interactions shape collective behaviors. Critics argue that ABMs are often too simplistic to capture real-world complexities. We address this concern by integrating artificial neural networks into ABMs, focusing on enhancing the Hegselmann–Krause (HK) model. By using multilayer perceptrons as agents, we create more realistic models that better reflect the behavior of actual agents. This approach yields multiple models, as the core elements of the HK model can be defined in various ways. We conduct two computational studies to compare these models with each other and with traditional individual-learning paradigms.
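The classic HK update is one line of linear algebra, and the paper's move is to swap that fixed rule for network-based agents. Below is a minimal sketch: the standard bounded-confidence update plus an illustrative perceptron-style agent whose weights are left random here purely to show the interface (Douven's agents would be trained, and his feature choices may differ):

```python
import numpy as np

rng = np.random.default_rng(3)

def hk_step(opinions, eps=0.2):
    """Classic Hegselmann-Krause update: each agent adopts the mean opinion
    of all agents within its confidence bound eps (itself included)."""
    close = np.abs(opinions[:, None] - opinions[None, :]) <= eps
    return (close * opinions[None, :]).sum(axis=1) / close.sum(axis=1)

class MLPAgent:
    """Perceptron-style agent: maps summary features of the opinions within
    its confidence bound to a new opinion. Random weights, for illustration."""
    def __init__(self, hidden=8):
        self.w1 = rng.standard_normal((3, hidden)) * 0.5
        self.w2 = rng.standard_normal(hidden) * 0.5
    def update(self, own, neighbors):
        feats = np.array([own, neighbors.mean(), neighbors.std()])
        return float(np.tanh(feats @ self.w1) @ self.w2)

# Classic HK: opinions collapse into a few clusters.
x = rng.uniform(0, 1, 50)
for _ in range(25):
    x = hk_step(x)
print("classic HK clusters:", np.unique(x.round(2)))

# Neural variant: same interaction structure, MLP-defined update rule.
agents, y, eps = [MLPAgent() for _ in range(50)], rng.uniform(0, 1, 50), 0.2
for _ in range(25):
    close = np.abs(y[:, None] - y[None, :]) <= eps
    y = np.array([a.update(y[i], y[close[i]]) for i, a in enumerate(agents)])
print("neural HK opinion spread:", y.min().round(2), "to", y.max().round(2))
```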

Language: English

Citations

4

Cardiac Heterogeneity Prediction by Cardio-Neural Network Simulation DOI
Asif Mehmood, Ayesha Ilyas, H. Ilyas

et al.

Neuroinformatics, Journal Year: 2025, Volume and Issue: 23(2)

Published: Feb. 1, 2025

Language: English

Citations

0

A vectorial code for semantics in human hippocampus DOI Open Access
Melissa Franch, Elizabeth A. Mickiewicz, James L. Belanger

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 23, 2025

ABSTRACT As we listen to speech, our brains actively compute the meaning of individual words. Inspired by the success of large language models (LLMs), we hypothesized that the brain employs vectorial coding principles, such that meaning is reflected in the distributed activity of single neurons. We recorded the responses of hundreds of neurons in the human hippocampus, which has a well-established role in semantic coding, while participants listened to narrative speech. We find encoding of contextual word meaning in the simultaneous activity of neurons whose individual selectivities span multiple unrelated semantic categories. Like embedding vectors in language models, the distance between neural population responses correlates with semantic distance; however, this effect was only observed for contextual embeddings (like BERT) and was reversed for non-contextual embeddings (like Word2Vec), suggesting that the hippocampal code depends critically on contextualization. Moreover, for a subset of highly semantically similar words, even contextual embedders showed an inverse correlation between semantic and neural distances; we attribute this pattern to the noise-mitigating benefits of contrastive coding. Finally, in further support of the critical role of context, we find that the dynamic range of hippocampal responses covaries with lexical polysemy. Ultimately, these results support the hypothesis that semantic coding in the hippocampus follows vectorial principles.
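The paper's central test is representational: do pairwise distances between neural population vectors track pairwise distances between word embeddings? Here is a minimal sketch of that comparison, assuming simulated stand-ins for both the contextual embeddings and the hippocampal firing-rate vectors:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Hypothetical stand-ins: for 80 words, a contextual embedding (e.g., from
# BERT) and a population vector of firing rates across recorded neurons.
n_words, emb_dim, n_neurons = 80, 64, 120
embeddings = rng.standard_normal((n_words, emb_dim))
# Make the neural code partially track the embedding geometry via a random
# linear readout plus noise, mimicking a vectorial semantic code.
readout = rng.standard_normal((emb_dim, n_neurons))
population = embeddings @ readout + rng.standard_normal((n_words, n_neurons)) * 5

# Representational-similarity test: correlate pairwise semantic distances
# (embedding space) with pairwise neural distances (population space).
semantic_dist = pdist(embeddings, metric="cosine")
neural_dist = pdist(population, metric="euclidean")
rho, p = spearmanr(semantic_dist, neural_dist)
print(f"semantic-neural distance correlation: rho={rho:.2f}, p={p:.1e}")
```

The contextual-versus-non-contextual contrast reported in the abstract would amount to running this same comparison with BERT-style and Word2Vec-style embeddings and comparing the signs of the resulting correlations.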

Language: English

Citations

0