Dynamic representation of multidimensional object properties in the human brain
Lina Teichmann, Martin N. Hebart, Chris I. Baker, et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Sept. 9, 2023

Abstract Our visual world consists of an immense number of unique objects and yet, we are easily able to identify, distinguish, interact with, and reason about the things we see within a few hundred milliseconds. This requires that we integrate and focus on a wide array of object properties to support diverse behavioral goals. In the current study, we used a large-scale, comprehensively sampled stimulus set and developed an analysis approach to determine if we could capture how rich, multidimensional object representations unfold over time in the human brain. We modelled time-resolved MEG signals evoked by viewing single presentations of tens of thousands of images based on millions of behavioral judgments. Extracting behavior-derived dimensions from similarity judgments, we used this data-driven approach to guide our understanding of the neural representational space and found that every dimension is reflected in the neural signal. Studying the temporal profiles of the different dimensions, we found that their time courses fell into two broad types, with either a distinct early peak (∼125 ms) or a slow rise to a late peak (∼300 ms). Further, early effects were stable across participants, in contrast to later effects, which showed more variability, suggesting that early peaks may carry stimulus-specific information and later peaks more participant-specific information. Dimensions with late peaks appeared to be primarily conceptual, suggesting that conceptual representations are more variable across people. Together, these data provide a comprehensive account of how object properties unfold over time in the human brain and form the basis of the rich nature of object vision.
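A minimal sketch of the kind of time-resolved analysis described here: scoring, at each time point, how well MEG sensor patterns predict each behavior-derived object dimension. The array names, sizes, and the cross-validated ridge model are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (illustrative, not the authors' pipeline): per-timepoint
# cross-validated ridge regression from MEG sensor patterns to each
# behavior-derived object dimension. All data here are simulated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times, n_dims = 400, 64, 60, 5   # small sizes for speed

meg = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
dims = rng.standard_normal((n_trials, n_dims))             # dimension value per image

scores = np.zeros((n_dims, n_times))
for t in range(n_times):
    X = meg[:, :, t]                                       # sensor pattern at time t
    for d in range(n_dims):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        scores[d, t] = cross_val_score(model, X, dims[:, d], cv=5, scoring="r2").mean()

# scores[d] is a time course per dimension; its peak latency can then be
# compared across dimensions (e.g., early ~125 ms vs. late ~300 ms effects).
```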

Language: English

THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior
Martin N. Hebart, Oliver Contier, Lina Teichmann, et al.

eLife, Journal Year: 2023, Volume and Issue: 12

Published: Feb. 27, 2023

Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing the testing of countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
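As one illustration of working with the behavioral part of such a collection, the sketch below builds a concept-by-concept similarity estimate from triplet odd-one-out judgments. The judgment format, variable names, and the simple choice-probability estimator are assumptions for illustration, not the released data format.

```python
# Minimal sketch (assumed triplet odd-one-out format): estimate pairwise
# similarity as the probability that a pair was judged "most similar",
# i.e., the third item of the triplet was picked as the odd one out.
import numpy as np

n_concepts = 1854
chosen = np.zeros((n_concepts, n_concepts))    # times a pair was judged most similar
appeared = np.zeros((n_concepts, n_concepts))  # times a pair appeared in a triplet

def add_judgment(i, j, k, odd):
    """i, j, k: concept indices of one triplet; odd: index picked as odd one out."""
    for a, b in ((i, j), (i, k), (j, k)):
        appeared[a, b] += 1
        appeared[b, a] += 1
        if odd not in (a, b):                  # the remaining pair is the "most similar" pair
            chosen[a, b] += 1
            chosen[b, a] += 1

add_judgment(3, 17, 42, odd=42)                # toy example judgment
similarity = chosen / np.maximum(appeared, 1)  # choice-probability similarity matrix
```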

Language: English

Citations: 52

Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features
Changde Du, Kaicheng Fu, Jinpeng Li, et al.

IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal Year: 2023, Volume and Issue: 45(9), P. 10760 - 10777

Published: March 30, 2023

Decoding human visual neural representations is a challenging task with great scientific significance in revealing vision-processing mechanisms and developing brain-like intelligent machines. Most existing methods are difficult to generalize to novel categories that have no corresponding neural data for training. The two main reasons are 1) the under-exploitation of the multimodal semantic knowledge underlying the neural data and 2) the small number of paired (stimuli-responses) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual, and linguistic features via multimodal deep generative models. Specifically, we leverage the mixture-of-product-of-experts formulation to infer a latent code that enables a coherent joint generation of all three modalities. To learn a more consistent joint representation and improve data efficiency in the case of limited brain activity data, we exploit both intra- and inter-modality mutual information maximization regularization terms. In particular, our model can be trained under various semi-supervised scenarios to incorporate visual and textual features obtained from extra categories. Finally, we construct trimodal matching datasets, and extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models using a combination of visual and linguistic features perform much better than those using either of them alone; 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli.
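A minimal sketch of the mixture-of-product-of-experts idea mentioned in the abstract: per-modality Gaussian posteriors are fused by products of experts over modality subsets and mixed uniformly. The latent size, uniform mixture weights, and placeholder encoder outputs are assumptions; this is not the BraVL implementation.

```python
# Minimal sketch (illustrative): fuse per-modality Gaussian posteriors with a
# mixture of products of experts over all non-empty modality subsets.
import itertools
import torch

def product_of_experts(mus, logvars):
    """Precision-weighted product of Gaussian experts N(mu_m, exp(logvar_m))."""
    precisions = torch.stack([torch.exp(-lv) for lv in logvars])
    total_precision = precisions.sum(0)
    mu = (torch.stack(mus) * precisions).sum(0) / total_precision
    return mu, -torch.log(total_precision)      # joint mean and log-variance

def mixture_of_poe(mus, logvars):
    """One Gaussian component per modality subset; sample by picking a subset uniformly."""
    idx = range(len(mus))
    subsets = [s for r in range(1, len(mus) + 1) for s in itertools.combinations(idx, r)]
    return [product_of_experts([mus[i] for i in s], [logvars[i] for i in s]) for s in subsets]

# Placeholder encoder outputs for brain, visual, and linguistic modalities (latent dim 32).
mus = [torch.randn(32), torch.randn(32), torch.randn(32)]
logvars = [torch.zeros(32), torch.zeros(32), torch.zeros(32)]
components = mixture_of_poe(mus, logvars)       # 7 Gaussian components for 3 modalities
```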

Language: English

Citations: 36

Visual Representations: Insights from Neural Decoding
Amanda K. Robinson, Genevieve L. Quek, Thomas A. Carlson, et al.

Annual Review of Vision Science, Journal Year: 2023, Volume and Issue: 9(1), P. 313 - 335

Published: March 8, 2023

Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and recent findings that suggest visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond the physical world, decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of representations for human behavior, reveal how they change across development and aging, and uncover their presentation in various disorders.
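A minimal sketch of the core decoding approach reviewed here: a cross-validated linear classifier applied to patterns of brain activity. The simulated arrays stand in for real fMRI/MEG/EEG response patterns.

```python
# Minimal sketch (simulated data): decode stimulus category from patterns of
# brain activity with a cross-validated linear classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 100
X = rng.standard_normal((n_trials, n_features))  # trials x voxels/sensors
y = rng.integers(0, 2, n_trials)                 # stimulus category per trial

clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # ~0.5 for random data
```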

Language: English

Citations: 25

Generating realistic neurophysiological time series with denoising diffusion probabilistic models
Julius Vetter, Jakob H. Macke, Richard Gao, et al.

Patterns, Journal Year: 2024, Volume and Issue: 5(9), P. 101047 - 101047

Published: Aug. 29, 2024

Language: English

Citations: 5

ChineseEEG: A Chinese Linguistic Corpora EEG Dataset for Semantic Alignment and Neural Decoding
Xinyu Mou, Cuilin He, Liwei Tan, et al.

Scientific Data, Journal Year: 2024, Volume and Issue: 11(1)

Published: May 29, 2024

Abstract An Electroencephalography (EEG) dataset utilizing rich text stimuli can advance the understanding of how the brain encodes semantic information and contribute to semantic decoding in brain-computer interfaces (BCI). Addressing the scarcity of EEG datasets featuring Chinese linguistic stimuli, we present the ChineseEEG dataset, a high-density EEG dataset complemented by simultaneous eye-tracking recordings. This dataset was compiled while 10 participants silently read approximately 13 hours of Chinese text from two well-known novels. The dataset provides long-duration EEG recordings, along with pre-processed sensor-level data and semantic embeddings of the reading materials extracted by a pre-trained natural language processing (NLP) model. As a pilot dataset derived from naturalistic Chinese linguistic stimuli, it can significantly support research across neuroscience, NLP, and linguistics. It establishes a benchmark for Chinese semantic decoding, aids the development of BCIs, and facilitates exploration of the alignment between large language models and human cognitive processes. It can also aid research into the brain's mechanisms of language processing within the context of the Chinese language.
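A minimal sketch of pairing a sentence of reading material with a semantic embedding from a pre-trained NLP model, as described above. The checkpoint name (bert-base-chinese) and the mean-pooling step are assumptions for illustration, not necessarily what the dataset release uses.

```python
# Minimal sketch (assumed model choice): extract a sentence-level semantic
# embedding for a passage of Chinese reading material with a pre-trained model.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
model = AutoModel.from_pretrained("bert-base-chinese")
model.eval()

sentence = "他缓缓合上书，窗外已是深夜。"  # one sentence of reading material
with torch.no_grad():
    inputs = tokenizer(sentence, return_tensors="pt")
    hidden = model(**inputs).last_hidden_state       # 1 x tokens x 768
    embedding = hidden.mean(dim=1).squeeze(0)        # mean-pooled sentence embedding

print(embedding.shape)  # torch.Size([768])
```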

Language: English

Citations: 4

Statistics-Fused Learning for Classification of Randomized EEG Trials
Wuxia Zhang, Junchao Tian, Xiaoyan Zhang, et al.

Published: Jan. 1, 2025

Language: English

Citations: 0

Exploring Deep Learning Models for EEG Neural Decoding

Laurits Dixen, Stefan Heinrich, Paolo Burelli, et al.

Lecture Notes in Computer Science, Journal Year: 2025, Volume and Issue: unknown, P. 162 - 175

Published: Jan. 1, 2025

Language: English

Citations: 0

Decoding Natural Images from EEG Signals Using a Learnable Multi-band Spatio-Temporal Encoder
Zhiyuan Xue, Peng Xu, Junpeng Zhang, et al.

Published: Jan. 1, 2025

Language: English

Citations: 0

An extensive dataset of spiking activity to reveal the syntax of the ventral stream
Paolo Papale, Feng Wang, Matthew W. Self, et al.

Neuron, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 1, 2025

Language: English

Citations: 0

Hands-Free Crowdsensing of Accessibility Barriers in Sidewalk Infrastructure: A Brain–Computer Interface Approach
Xiaoshan Zhou, Carol C. Menassa, Vineet R. Kamat, et al.

Journal of Infrastructure Systems, Journal Year: 2025, Volume and Issue: 31(2)

Published: April 9, 2025

Language: English

Citations: 0