Combining computational controls with natural text reveals aspects of meaning composition
Mariya Toneva, Tom M. Mitchell, Leila Wehbe et al.

Nature Computational Science, Journal Year: 2022, Volume and Issue: 2(11), P. 745 - 757

Published: Nov. 28, 2022

Language: English

Semantic reconstruction of continuous language from non-invasive brain recordings
Jerry Tang, Amanda LeBel, Shailee Jain et al.

Nature Neuroscience, Journal Year: 2023, Volume and Issue: 26(5), P. 858 - 866

Published: May 1, 2023

Language: English

Citations: 205

Evidence of a predictive coding hierarchy in the human brain listening to speech
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King et al.

Nature Human Behaviour, Journal Year: 2023, Volume and Issue: 7(3), P. 430 - 441

Published: March 2, 2023

Abstract: Considerable progress has recently been made in natural language processing: deep learning algorithms are increasingly able to generate, summarize, translate and classify texts. Yet, these models still fail to match the language abilities of humans. Predictive coding theory offers a tentative explanation of this discrepancy: while language models are optimized to predict nearby words, the human brain would continuously predict a hierarchy of representations that spans multiple timescales. To test this hypothesis, we analysed the functional magnetic resonance imaging signals of 304 participants listening to short stories. First, we confirmed that the activations of modern language models linearly map onto the brain responses to speech. Second, we showed that enhancing these models with predictions that span multiple timescales improves this brain mapping. Finally, we showed that these predictions are organized hierarchically: frontoparietal cortices predict higher-level, longer-range and more contextual representations than temporal cortices. Overall, these results strengthen the role of hierarchical predictive processing in language and illustrate how the synergy between neuroscience and artificial intelligence can unravel the computational bases of cognition.

Language: English

Citations: 160
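The brain-mapping analysis described in the abstract above — linearly mapping language-model activations onto fMRI responses and scoring the fit on held-out data — is commonly implemented as a voxel-wise ridge-regression encoding model. A minimal sketch on synthetic data (the array shapes, the ridge penalty, and the correlation-based "brain score" are illustrative assumptions, not details taken from the paper):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2,000 fMRI time points, 256-d language-model
# features, 50 voxels whose responses are a noisy linear readout.
n_trs, n_feats, n_voxels = 2000, 256, 50
X = rng.standard_normal((n_trs, n_feats))            # LM activations per TR
W_true = rng.standard_normal((n_feats, n_voxels))
Y = X @ W_true + 5.0 * rng.standard_normal((n_trs, n_voxels))  # fMRI responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Voxel-wise ridge encoding model: predict every voxel from the LM features.
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Score each voxel by the Pearson correlation between predicted and
# held-out responses (the usual "brain score").
def pearson_per_voxel(a, b):
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

scores = pearson_per_voxel(Y_hat, Y_te)
print(scores.mean())
```

In this framing, "enhancing models with predictions that span multiple timescales" amounts to concatenating extra feature columns onto `X` and checking whether the held-out scores rise.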

The neuroconnectionist research programme
Adrien Doerig, Rowan P. Sommers, Katja Seeliger et al.

Nature Reviews Neuroscience, Journal Year: 2023, Volume and Issue: 24(7), P. 431 - 450

Published: May 30, 2023

Language: English

Citations: 136

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, Idan Blank et al.

Trends in Cognitive Sciences, Journal Year: 2024, Volume and Issue: 28(6), P. 517 - 540

Published: March 19, 2024

Language: English

Citations: 130

Symbols and mental programs: a hypothesis about human singularity
Stanislas Dehaene, Fosca Al Roumi, Yair Lakretz et al.

Trends in Cognitive Sciences, Journal Year: 2022, Volume and Issue: 26(9), P. 751 - 766

Published: Aug. 3, 2022

Language: English

Citations: 129

Using artificial neural networks to ask ‘why’ questions of minds and brains
Nancy Kanwisher, Meenakshi Khosla, Katharina Dobs et al.

Trends in Neurosciences, Journal Year: 2023, Volume and Issue: 46(3), P. 240 - 254

Published: Jan. 17, 2023

Abstract: Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question of 'why' brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when ANNs optimized for a given task mirror the behavioral characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.

Language: English

Citations: 114

High-resolution image reconstruction with latent diffusion models from human brain activity
Yu Takagi, Shinji Nishimoto

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2023, Volume and Issue: unknown, P. 14453 - 14463

Published: June 1, 2023

Abstract: Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs while preserving their performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the latent image vector Z, conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that the proposed method can reconstruct high-resolution images in a straightforward fashion, without the need for any additional training or fine-tuning of complex deep-learning models. We also provide a quantitative interpretation from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from brain activity, and provides a new framework for understanding DMs. Please check out our webpage at https://sites.google.com/view/stablediffusion-withbrain/.

Language: English

Citations: 99
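The first stage of the pipeline described in the abstract above — before any diffusion sampling — is a linear decoding step that maps fMRI voxel patterns to the LDM's latent image vector. A minimal sketch of that stage on synthetic data (the dimensions, the ridge penalty, and the noise level are illustrative assumptions; in the real pipeline the decoded latent would then seed the denoising U-Net):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Synthetic stand-ins: 600 stimuli, 2,000 voxels, and a 64-d vector
# playing the role of the LDM's latent image representation Z.
n_stim, n_voxels, n_latent = 600, 2000, 64
Z = rng.standard_normal((n_stim, n_latent))          # per-stimulus latents
A = rng.standard_normal((n_latent, n_voxels))
V = Z @ A + 5.0 * rng.standard_normal((n_stim, n_voxels))  # fMRI patterns

# Decoding stage: a regularized linear map from voxels back to the latent.
train, test = slice(0, 500), slice(500, None)
decoder = Ridge(alpha=100.0).fit(V[train], Z[train])
Z_hat = decoder.predict(V[test])

# Per-dimension correlation between decoded and true held-out latents.
def zscore(m):
    return (m - m.mean(0)) / m.std(0)

corr = (zscore(Z_hat) * zscore(Z[test])).mean(0)
print(corr.mean())
```

Correlation is insensitive to ridge shrinkage of the predictions' scale, which is why it is a reasonable score here even though the decoder has far more inputs than training samples.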

Decoding speech perception from non-invasive brain recordings
Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin et al.

Nature Machine Intelligence, Journal Year: 2023, Volume and Issue: 5(10), P. 1097 - 1107

Published: Oct. 5, 2023

Language: English

Citations: 78

Language is primarily a tool for communication rather than thought
Evelina Fedorenko, Steven T. Piantadosi, Edward Gibson et al.

Nature, Journal Year: 2024, Volume and Issue: 630(8017), P. 575 - 586

Published: June 19, 2024

Language: English

Citations: 40

Driving and suppressing the human language network using large language models
Greta Tuckute, Aalok Sathe, Shashank Srikant et al.

Nature Human Behaviour, Journal Year: 2024, Volume and Issue: 8(3), P. 544 - 561

Published: Jan. 3, 2024

Language: English

Citations: 35