Dissecting neural computations in the human auditory pathway using deep neural networks for speech
Yuanning Li, Gopala K. Anumanchipalli, Abdelrahman Mohamed, et al.

Nature Neuroscience, Journal Year: 2023, Volume and Issue: 26(12), P. 2213 - 2225

Published: Oct. 30, 2023

Abstract: The human auditory system extracts rich linguistic abstractions from speech signals. Traditional approaches to understanding this complex process have used linear feature-encoding models, with limited success. Artificial neural networks excel in speech recognition tasks and offer promising computational models of speech processing. We used representations from state-of-the-art deep neural network (DNN) speech models to investigate neural coding from the auditory nerve to the auditory cortex. Representations in hierarchical layers of the DNN correlated well with neural activity throughout the ascending auditory system. Unsupervised speech models performed at least as well as other purely supervised or fine-tuned models. Deeper DNN layers were better correlated with activity in higher-order auditory cortex, with computations aligned to phonemic and syllabic structures in speech. Accordingly, models trained on either English or Mandarin predicted cortical responses in native speakers of each language. These results reveal convergence between DNN model representations and the biological auditory pathway, offering new approaches for modeling neural coding in the auditory cortex.
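
The layer-wise comparison described in this abstract is typically implemented as a linear encoding analysis. The sketch below illustrates the idea with ridge regression over hypothetical arrays standing in for DNN layer activations and neural recordings; it is a minimal illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a layer-wise linear encoding analysis: DNN layer activations
# predict neural responses via ridge regression, and held-out prediction accuracy
# is compared across layers. All arrays and shapes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_electrodes = 2000, 64

# Hypothetical stand-ins: one feature matrix per DNN layer (time x units)
# and a neural response matrix (time x electrodes), aligned in time.
layer_features = {f"layer_{i}": rng.standard_normal((n_timepoints, 256)) for i in range(4)}
neural = rng.standard_normal((n_timepoints, n_electrodes))

def encoding_score(X, Y):
    """Fit a ridge encoding model and return the mean held-out correlation."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    # Correlate predicted and observed responses per electrode, then average.
    corrs = [np.corrcoef(Y_hat[:, e], Y_te[:, e])[0, 1] for e in range(Y.shape[1])]
    return float(np.mean(corrs))

for name, X in layer_features.items():
    print(name, round(encoding_score(X, neural), 3))
```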

Language: English

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen, et al.

Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32)

Published: Aug. 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both its ubiquity and the representational nature of the predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
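
The GPT-2-based quantification of contextual predictions mentioned above can be illustrated as a per-word surprisal computation. The sketch below assumes the Hugging Face `transformers` package and an example sentence; it is a minimal illustration, not the study's analysis code.

```python
# Compute per-token surprisal (-log p of each token given its preceding context)
# with a pretrained GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "The children went outside to play in the"  # example sentence (assumption)
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # (1, seq_len, vocab)

log_probs = torch.log_softmax(logits, dim=-1)
ids = enc["input_ids"][0]

# Surprisal of token t given tokens < t (the first token has no context here).
for t in range(1, len(ids)):
    surprisal = -log_probs[0, t - 1, ids[t]].item()
    print(f"{tokenizer.decode(ids[t]):>12s}  surprisal = {surprisal:.2f} nats")
```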

Language: English

Citations: 233

Evidence of a predictive coding hierarchy in the human brain listening to speech
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King, et al.

Nature Human Behaviour, Journal Year: 2023, Volume and Issue: 7(3), P. 430 - 441

Published: March 2, 2023

Abstract: Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these language models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation for this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these algorithms with predictions that span multiple timescales improves this brain mapping. Finally, we showed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive processing in language and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of cognition.
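
A minimal sketch of the "predictions that span multiple timescales" idea: language-model activations for the current word are concatenated with activations for upcoming words, and the change in a linear brain-mapping score is compared across forecast-window sizes. All data below are random placeholders standing in for GPT-2 activations and fMRI responses; this is an illustration of the concept, not the paper's code.

```python
# Compare linear brain-mapping scores with and without "forecast" features.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_words, dim = 1500, 64
embeddings = rng.standard_normal((n_words, dim))   # per-word model activations (placeholder)
bold = rng.standard_normal(n_words)                 # one voxel's response aligned to words (placeholder)

def forecast_features(emb, window):
    """Concatenate current-word features with the next `window` words' features."""
    parts = [np.roll(emb, -k, axis=0) for k in range(window + 1)]
    return np.concatenate(parts, axis=1)[: len(emb) - window]

for window in (0, 3, 7):
    X = forecast_features(embeddings, window)
    y = bold[: X.shape[0]]
    score = cross_val_score(RidgeCV(alphas=np.logspace(-2, 4, 7)), X, y, cv=5, scoring="r2").mean()
    print(f"forecast window = {window:>2d}  mean R^2 = {score:.3f}")
```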

Language: English

Citations: 160

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, Idan Blank, et al.

Trends in Cognitive Sciences, Journal Year: 2024, Volume and Issue: 28(6), P. 517 - 540

Published: March 19, 2024

Language: English

Citations: 130

Using artificial neural networks to ask ‘why’ questions of minds and brains
Nancy Kanwisher, Meenakshi Khosla, Katharina Dobs, et al.

Trends in Neurosciences, Journal Year: 2023, Volume and Issue: 46(3), P. 240 - 254

Published: Jan. 17, 2023

Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question of 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.

Language: English

Citations: 114

High-resolution image reconstruction with latent diffusion models from human brain activity
Yu Takagi, Shinji Nishimoto

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2023, Volume and Issue: unknown, P. 14453 - 14463

Published: June 1, 2023

Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs while preserving their performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the latent vector of image Z, conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that the proposed method can reconstruct high-resolution images in a straightforward fashion, without the need for any additional training or fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of the method from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from brain activity, and provides a new framework for understanding DMs. Please check out our webpage at https://sites.google.com/view/stablediffusion-withbrain/.
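
The core quantitative step described above can be read as a regularized linear map from fMRI voxels to the latent image vector (Z) and the conditioning vector (C) of a frozen Stable Diffusion model. The toy sketch below illustrates that mapping with ridge regression on simulated data and deliberately reduced dimensions; the final denoising/decoding step with the pretrained LDM is only indicated in comments, and all sizes are assumptions.

```python
# Toy sketch: linear decoding from fMRI voxels to diffusion-model latents.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_test, n_voxels = 300, 20, 800   # toy sizes (assumption)
z_dim, c_dim = 256, 128                    # toy latent sizes; the real SD latent is 4x64x64

fmri_train = rng.standard_normal((n_train, n_voxels))
fmri_test = rng.standard_normal((n_test, n_voxels))
z_train = rng.standard_normal((n_train, z_dim))   # image latents of the seen stimuli (placeholder)
c_train = rng.standard_normal((n_train, c_dim))   # semantic conditioning vectors (placeholder)

# Separate ridge models map brain activity to the two latent spaces.
z_model = Ridge(alpha=1e3).fit(fmri_train, z_train)
c_model = Ridge(alpha=1e3).fit(fmri_train, c_train)

z_pred = z_model.predict(fmri_test)  # -> initial latent for the diffusion process
c_pred = c_model.predict(fmri_test)  # -> conditioning for the denoising U-Net

# In the full pipeline, z_pred and c_pred would be passed to the pretrained
# Stable Diffusion denoiser/decoder to generate the reconstructed images.
print(z_pred.shape, c_pred.shape)
```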

Language: English

Citations: 99

Probabilistic atlas for the language network based on precision fMRI data from >800 individuals
Benjamin Lipkin, Greta Tuckute, Josef Affourtit, et al.

Scientific Data, Journal Year: 2022, Volume and Issue: 9(1)

Published: Aug. 29, 2022

Abstract: Two analytic traditions characterize fMRI language research. One relies on averaging activations across individuals. This approach has limitations: because of inter-individual variability in the locations of language areas, any given voxel/vertex in a common brain space is part of the language network in some individuals but, in others, may belong to a distinct network. An alternative approach relies on identifying language areas in each individual using a functional 'localizer'. Because of its greater sensitivity, resolution, and interpretability, functional localization is gaining popularity, but it is not always feasible and cannot be applied retroactively to past studies. To bridge these disjoint approaches, we created a probabilistic atlas from data for an extensively validated localizer in 806 individuals. The atlas enables estimating the probability that any given location belongs to the language network, and thus can help interpret group-level activation peaks and lesion locations, or select voxels/electrodes for analysis. More meaningful comparisons of findings across studies should increase robustness and replicability.
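
A probabilistic atlas of this kind can be illustrated as the per-voxel fraction of individuals whose thresholded localizer map includes that voxel. The sketch below uses simulated contrast maps and an assumed top-10% threshold per subject; the actual atlas is built from validated localizer contrasts registered to a common brain space.

```python
# Toy construction of a probabilistic functional atlas from individual maps.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_voxels = 806, 10_000

# Hypothetical individual localizer contrast maps (e.g., sentences > nonwords),
# already registered to a common brain space.
contrast_maps = rng.standard_normal((n_subjects, n_voxels))

# Binarize each subject's map, e.g. keeping the top 10% of voxels per subject (assumption).
threshold = np.percentile(contrast_maps, 90, axis=1, keepdims=True)
individual_networks = contrast_maps > threshold

# Per-voxel probability of belonging to the language network across subjects.
atlas = individual_networks.mean(axis=0)

# The atlas can then be used to interpret a group-level peak or select voxels.
peak_voxel = 1234  # hypothetical index of an activation peak
print(f"P(language network) at voxel {peak_voxel}: {atlas[peak_voxel]:.2f}")
```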

Language: English

Citations: 95

Can language models learn from explanations in context?
Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, et al.

Published: Jan. 1, 2022

Andrew Lampinen, Ishita Dasgupta, Stephanie Chan, Kory Mathewson, Mh Tessler, Antonia Creswell, James McClelland, Jane Wang, Felix Hill. Findings of the Association for Computational Linguistics: EMNLP 2022.

Language: English

Citations: 89

The language network as a natural kind within the broader landscape of the human brain
Evelina Fedorenko, Anna A. Ivanova, Tamar I. Regev, et al.

Nature Reviews Neuroscience, Journal Year: 2024, Volume and Issue: 25(5), P. 289 - 312

Published: April 12, 2024

Language: English

Citations: 75

Large Language Models Demonstrate the Potential of Statistical Learning in Language
Pablo Contreras Kallens, Ross Deans Kristensen‐McLachlan, Morten H. Christiansen, et al.

Cognitive Science, Journal Year: 2023, Volume and Issue: 47(3)

Published: Feb. 25, 2023

Abstract: To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language, especially those involving computational modeling, have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the tools to determine empirically how much of the human language ability can be acquired from linguistic experience. LLMs are sophisticated deep learning architectures trained on vast amounts of natural language data, enabling them to perform an impressive range of linguistic tasks. We argue that, despite their clear semantic and pragmatic limitations, LLMs have already demonstrated human‐like grammatical abilities without the need for built‐in grammar. Thus, while there is still much to learn about how humans acquire and use language, full‐fledged statistical models allow cognitive scientists to evaluate just how far statistical learning can take us in explaining the full complexity of human language.

Language: English

Citations: 74

Beyond simple laboratory studies: Developing sophisticated models to study rich behavior
Antonella Maselli, Jeremy Gordon, Mattia Eluchans, et al.

Physics of Life Reviews, Journal Year: 2023, Volume and Issue: 46, P. 220 - 244

Published: July 13, 2023

Psychology and neuroscience are concerned with the study of behavior, of internal cognitive processes, and of their neural foundations. However, most laboratory studies use constrained experimental settings that greatly limit the range of behaviors that can be expressed. While focusing on restricted settings ensures methodological control, it risks impoverishing the object of study: by restricting behavior, we might miss key aspects of cognitive and neural function. In this article, we argue that psychology and neuroscience should increasingly adopt innovative experimental designs, measurement methods, analysis techniques and sophisticated computational models to probe rich, ecologically valid forms of behavior, including social behavior. We discuss the challenges of studying rich behavior as well as the novel opportunities offered by state-of-the-art methodologies and new sensing technologies, and we highlight the importance of developing formal models. We exemplify our arguments by reviewing some recent streams of research in psychology and in other fields (e.g., sports analytics, ethology and robotics) that have addressed rich behavior in a model-based manner. We hope that these "success cases" will encourage psychologists and neuroscientists to extend their toolbox of behavioral paradigms and formal models of the processes they engage.

Language: English

Citations: 46