The Transmitter, Journal Year: 2024, Volume and Issue: unknown
Published: Jan. 1, 2024
Language: English
Trends in Cognitive Sciences, Journal Year: 2024, Volume and Issue: 28(6), P. 517 - 540
Published: March 19, 2024
Language: English
Citations: 116
Annual Review of Neuroscience, Journal Year: 2024, Volume and Issue: 47(1), P. 277 - 301
Published: April 26, 2024
It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We discuss why, a priori, LMs might be expected to share similarities with the human language system. We then summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties—their architecture, task performance, or training—are critical for capturing human neural responses to language, and we review studies using LMs as in silico model organisms for testing hypotheses about language in the brain. These ongoing investigations bring us closer to understanding the representations and processes that underlie our ability to comprehend sentences and express thoughts.
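As a concrete illustration of the brain-encoding approach surveyed above, the sketch below fits a regularized linear map from language-model embeddings to (synthetic) voxel responses and scores it by held-out correlation. The variable names, shapes, and ridge setup are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a brain "encoding model": predict fMRI responses to
# sentences from language-model embeddings via a regularized linear map.
# All shapes and data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, n_features, n_voxels = 200, 768, 50

X = rng.standard_normal((n_sentences, n_features))        # LM embedding per sentence
W = rng.standard_normal((n_features, n_voxels)) * 0.1     # hidden "true" mapping
Y = X @ W + rng.standard_normal((n_sentences, n_voxels))  # synthetic voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Encoding model: cross-validated ridge regression from embeddings to voxels
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

# Score: per-voxel correlation between predicted and observed held-out responses
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(r):.3f}")
```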
Language: English
Citations: 13
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)
Published: Jan. 2, 2025
Language: English
Citations: 1
Nature Machine Intelligence, Journal Year: 2024, Volume and Issue: unknown
Published: Nov. 26, 2024
Language: English
Citations: 7
Neurobiology of Language, Journal Year: 2023, Volume and Issue: 5(1), P. 7 - 42
Published: July 18, 2023
Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI data set of responses …
Language: English
Citations: 16
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown
Published: April 16, 2023
Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of the linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
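The drive/suppress procedure described above amounts to scoring a pool of candidate sentences with a fitted encoding model and keeping the extremes. A hypothetical sketch follows; the `embed` function and `encoder` object are stand-ins assumed for illustration, not the paper's code.

```python
# Hypothetical sketch: select "drive" and "suppress" sentences by ranking
# candidates on the response predicted by a fitted encoding model.
# `embed` maps a sentence string to a feature vector; `encoder` is a fitted
# multi-voxel embeddings-to-brain regression model (both assumed).
import numpy as np

def select_extreme_sentences(candidates, embed, encoder, k=10):
    """Rank candidate sentences by mean predicted voxel response."""
    X = np.vstack([embed(s) for s in candidates])        # (n_candidates, n_features)
    predicted = encoder.predict(X).mean(axis=1)          # mean predicted response per sentence
    order = np.argsort(predicted)
    return {
        "suppress": [candidates[i] for i in order[:k]],  # lowest predicted responses
        "drive":    [candidates[i] for i in order[-k:]], # highest predicted responses
    }
```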
Language: English
Citations: 14
Open Mind, Journal Year: 2024, Volume and Issue: 8, P. 723 - 738
Published: Jan. 1, 2024
Abstract Recent advances in Large Language Models (LLMs) have raised the question of replacing human subjects with LLM-generated data. While some believe that LLMs capture the “wisdom of the crowd”—due to their vast training data—empirical evidence for this hypothesis remains scarce. We present a novel methodological framework to test this: the “number needed to beat” (NNB), which measures how many humans are needed for a sample’s quality to rival the quality achieved by GPT-4, a state-of-the-art LLM. In a series of pre-registered experiments, we collect human data and demonstrate the utility of this method for four psycholinguistic datasets in English. We find that NNB > 1 for each dataset, but also that it varies across tasks (and in some cases is quite small, e.g., 2). We also introduce two “centaur” methods for combining LLM and human data, which outperform both stand-alone samples. Finally, we analyze the trade-offs in data cost and quality for each approach. While clear limitations remain, we suggest that this framework could guide decision-making about whether to integrate LLM-generated data into the research pipeline.
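A minimal sketch of how an NNB-style statistic could be computed, assuming per-item human ratings, one LLM rating per item, and a gold-standard norm; the correlation-based quality metric and resampling scheme are illustrative assumptions rather than the paper's exact procedure.

```python
# Illustrative "number needed to beat" (NNB): the smallest number of humans
# whose averaged ratings match a gold standard at least as well as the LLM's
# ratings do. Quality metric and resampling are assumptions for illustration.
import numpy as np

def number_needed_to_beat(human_ratings, llm_ratings, gold, n_boot=200, seed=0):
    """human_ratings: (n_subjects, n_items); llm_ratings, gold: (n_items,)."""
    rng = np.random.default_rng(seed)
    llm_quality = np.corrcoef(llm_ratings, gold)[0, 1]
    n_subjects = human_ratings.shape[0]
    for n in range(1, n_subjects + 1):
        qualities = []
        for _ in range(n_boot):
            idx = rng.choice(n_subjects, size=n, replace=False)
            sample_mean = human_ratings[idx].mean(axis=0)
            qualities.append(np.corrcoef(sample_mean, gold)[0, 1])
        if np.mean(qualities) >= llm_quality:
            return n          # this many humans rival the LLM on average
    return None               # the LLM was not beaten by the full sample
```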
Language: English
Citations: 5
Philosophy of Science, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 21
Published: Oct. 29, 2024
Abstract Agent-based models (ABMs) are widely used to study how individual interactions shape collective behaviors. Critics argue that ABMs are often too simplistic to capture real-world complexities. We address this by integrating artificial neural networks into ABMs, focusing on enhancing the Hegselmann–Krause (HK) model. By using multilayer perceptrons as agents, we create more realistic models that better reflect actual agents. This approach yields multiple models, as core elements of the HK model can be defined in various ways. We conduct two computational studies to compare these models with each other and with traditional individual-learning paradigms.
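For reference, the baseline that the paper enhances with neural-network agents is the classic bounded-confidence update, in which each agent adopts the mean of all opinions within a confidence bound epsilon. A minimal sketch of that baseline rule, with parameter values chosen here purely for illustration:

```python
# Minimal sketch of the classic Hegselmann-Krause bounded-confidence update,
# the baseline the paper extends with MLP agents. Parameters are illustrative.
import numpy as np

def hk_step(opinions, epsilon):
    """One synchronous HK update: each agent moves to the mean of all
    opinions within distance `epsilon` of its own (itself included)."""
    updated = np.empty_like(opinions)
    for i, x_i in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - x_i) <= epsilon]
        updated[i] = neighbors.mean()
    return updated

rng = np.random.default_rng(1)
opinions = rng.uniform(0.0, 1.0, size=100)   # initial opinions on [0, 1]
for _ in range(30):                          # iterate until opinion clusters stabilize
    opinions = hk_step(opinions, epsilon=0.2)
print(np.unique(np.round(opinions, 3)))      # surviving opinion clusters
```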
Language: English
Citations: 4
Neuroinformatics, Journal Year: 2025, Volume and Issue: 23(2)
Published: Feb. 1, 2025
Language: English
Citations: 0
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2025, Volume and Issue: unknown
Published: Feb. 23, 2025
ABSTRACT As we listen to speech, our brains actively compute the meaning of individual words. Inspired by the success of large language models (LLMs), we hypothesized that the brain employs vectorial coding principles, such that meaning is reflected in the distributed activity of single neurons. We recorded responses of hundreds of neurons in the human hippocampus, which has a well-established role in semantic coding, while participants listened to narrative speech. We find encoding of contextual word meaning in the simultaneous activity of neurons whose selectivities span multiple unrelated categories. Like embedding vectors in language models, the distance between neural population responses correlates with semantic distance; however, this effect was only observed for contextual embeddings (like BERT) and was reversed for non-contextual embeddings (like Word2Vec), suggesting that it depends critically on contextualization. Moreover, for a subset of highly semantically similar words, even contextual embedders showed an inverse correlation with neural distances; we attribute this pattern to the noise-mitigating benefits of contrastive coding. Finally, in further support of the critical role of context, the neural coding range covaries with lexical polysemy. Ultimately, these results support the hypothesis that the hippocampus follows vectorial coding principles.
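The central analysis compares pairwise distances between neural population vectors with pairwise distances between word embeddings, in the style of representational similarity analysis. A minimal sketch under assumed array names and distance metrics, with random data standing in for the recordings:

```python
# Sketch of the distance-correlation analysis: do pairwise distances between
# neural population vectors track pairwise distances between word embeddings?
# Array names, sizes, and distance metrics are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_words, n_neurons, n_dims = 80, 120, 768

neural = rng.standard_normal((n_words, n_neurons))  # population response per word
embeds = rng.standard_normal((n_words, n_dims))     # contextual embedding per word

neural_dist = pdist(neural, metric="correlation")   # pairwise neural distances
embed_dist = pdist(embeds, metric="cosine")         # pairwise semantic distances

rho, p = spearmanr(neural_dist, embed_dist)
print(f"Spearman rho between neural and embedding distances: {rho:.3f} (p = {p:.3g})")
```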
Language: English
Citations: 0