
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2025, Volume and Issue: unknown
Published: Feb. 2, 2025
Abstract At the heart of language neuroscience lies a fundamental question: how does the human brain process the rich variety of languages? Recent developments in Natural Language Processing, particularly multilingual neural network models, offer a promising avenue to answer this question by providing a theory-agnostic way of representing linguistic content across languages. Our study leverages these advances to ask how the brains of native speakers of 21 languages respond to linguistic stimuli, and to what extent linguistic representations are similar across languages. We combined existing fMRI data (12 languages from 4 families; n=24 participants) with newly collected fMRI data (9 languages; n=27 participants) to evaluate a series of encoding models predicting brain activity based on representations from a diverse set of multilingual language models (20 models from 8 model classes). We found evidence of cross-lingual robustness in the alignment between artificial and biological language networks. Critically, we showed that the encoding models can be transferred zero-shot across languages, so that a model trained to predict brain responses to one set of languages can account for responses in a held-out language, even across language families. These results imply a shared component in the processing of different languages, plausibly related to a shared meaning space.
Language: English