Dynamic representation of multidimensional object properties in the human brain
Lina Teichmann, Martin N. Hebart, Chris I. Baker

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Sept. 9, 2023

Abstract Our visual world consists of an immense number of unique objects, and yet we are easily able to identify, distinguish, interact with, and reason about the things we see within a few hundred milliseconds. This requires that we integrate and focus on a wide array of object properties to support diverse behavioral goals. In the current study, we used a large-scale, comprehensively sampled stimulus set and developed an analysis approach to determine if we could capture how rich, multidimensional object representations unfold over time in the human brain. We modelled time-resolved MEG signals evoked by viewing single presentations of tens of thousands of object images based on millions of behavioral judgments. Extracting behavior-derived dimensions from similarity judgments, we used a data-driven approach to guide our understanding of the neural representation of the object space and found that every dimension is reflected in the neural signal. Studying the temporal profiles for different dimensions, we found that the time courses fell into two broad types, with either a distinct early peak (∼125 ms) or a slow rise to a late peak (∼300 ms). Further, early effects were stable across participants, in contrast to later effects, which showed more variability, suggesting that early peaks may carry stimulus-specific and later peaks participant-specific information. Dimensions with early peaks appeared to be primarily visual, and those with later peaks more conceptual, suggesting that conceptual representations are more variable across people. Together, these data provide a comprehensive account of how object properties unfold in the human brain and form the basis of the rich nature of object vision.
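As a rough illustration of the kind of time-resolved modelling described above, the sketch below fits, at every timepoint, a linear model from MEG sensor patterns to one behavior-derived dimension and keeps the cross-validated fit as that dimension's time course. All array names, shapes, and the choice of ridge regression are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 272, 120
meg = rng.standard_normal((n_trials, n_sensors, n_times))  # placeholder MEG data
dim = rng.standard_normal(n_trials)  # one behavior-derived dimension per trial

# For each timepoint, predict the dimension value from the sensor pattern and
# keep the cross-validated R^2 as this dimension's neural "time course".
time_course = np.array([
    cross_val_score(Ridge(alpha=1.0), meg[:, :, t], dim,
                    scoring="r2", cv=5).mean()
    for t in range(n_times)
])
# Repeating this per dimension lets early (~125 ms) and late (~300 ms) peaks
# be compared across dimensions, as in the abstract.
```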

Language: English

Teaching CORnet human fMRI representations for enhanced model-brain alignment
Zitong Lu, Yile Wang

Cognitive Neurodynamics, Journal Year: 2025, Volume and Issue: 19(1)

Published: April 15, 2025

Abstract Deep convolutional neural networks (DCNNs) have demonstrated excellent performance in object recognition and have been found to share some similarities with brain visual processing. However, a substantial gap between DCNNs and human visual perception still exists. Functional magnetic resonance imaging (fMRI), a widely used technique in cognitive neuroscience, can record neural activation in the human visual cortex during the process of perception. Can we teach DCNNs with human fMRI signals to achieve a more brain-like model? To answer this question, this study proposed ReAlnet-fMRI, a model based on the SOTA vision model CORnet but optimized using human fMRI data through a multi-layer encoding-based alignment framework. This framework has been shown to effectively enable the model to learn human brain representations. The fMRI-optimized ReAlnet-fMRI exhibited higher similarity to the human brain than both CORnet and the control model in within- and across-subject as well as across-modality model-brain (fMRI and EEG) evaluations. Additionally, we conducted an in-depth analysis to investigate how the internal representations of ReAlnet-fMRI differ from CORnet in encoding various dimensions. These findings provide the possibility of enhancing the brain-likeness of visual models by integrating human neural data, helping to bridge the gap between computer vision and neuroscience.
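The multi-layer encoding-based alignment idea can be sketched as an auxiliary loss that makes each model layer linearly predictive of fMRI responses. The PyTorch snippet below is a minimal, hypothetical rendering of that idea; the layer dimensions, linear encoding heads, and loss weighting are assumptions, not the published ReAlnet-fMRI code.

```python
import torch
import torch.nn as nn

class AlignmentLoss(nn.Module):
    """Auxiliary loss: make each model layer linearly predictive of fMRI data."""
    def __init__(self, layer_dims, fmri_dim):
        super().__init__()
        # One hypothetical linear encoding head per model layer.
        self.heads = nn.ModuleList([nn.Linear(d, fmri_dim) for d in layer_dims])
        self.mse = nn.MSELoss()

    def forward(self, layer_acts, fmri):
        # layer_acts: list of (batch, layer_dim) activations; fmri: (batch, fmri_dim)
        return sum(self.mse(head(act), fmri)
                   for head, act in zip(self.heads, layer_acts))

# Sketch of use: total = task_loss + alpha * align_loss(layer_acts, fmri_batch),
# so the network is optimized jointly for its task and for brain alignment.
```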

Language: English

Citations

0

A simultaneous EEG and eye-tracking dataset for remote sensing object detection
Bing He, Hongqiang Zhang, Tong Qin

et al.

Scientific Data, Journal Year: 2025, Volume and Issue: 12(1)

Published: April 17, 2025

We introduce EEGET-RSOD, a simultaneous electroencephalography (EEG) and eye-tracking dataset for remote sensing object detection. This dataset contains EEG and eye-tracking data recorded while 38 remote sensing experts located specific objects in 1,000 remote sensing images within a limited time frame. This task reflects the typical cognitive processes associated with human visual search and object identification in remote sensing imagery. To our knowledge, EEGET-RSOD is the first publicly available dataset to offer synchronized EEG and eye-tracking data for remote sensing images. This dataset will not only advance the study of human visual cognition in real-world environments, but also bridge the gap between human cognition and artificial intelligence, enhancing the interpretability and reliability of AI models in geospatial applications.
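One natural use of such synchronized recordings is fixation-locked EEG analysis. The sketch below cuts an EEG epoch around each fixation onset; the sampling rate, array shapes, and variable names are placeholders rather than the dataset's actual file layout.

```python
import numpy as np

fs = 1000                                 # assumed EEG sampling rate (Hz)
eeg = np.random.randn(64, 600_000)        # channels x samples (placeholder)
fixation_onsets_s = np.array([1.2, 3.8, 7.5])  # fixation times in seconds

def epoch_around(data, onsets_s, fs, pre=0.2, post=0.8):
    """Cut (channels x time) epochs from -pre to +post s around each onset."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    return np.stack([data[:, int(t * fs) - pre_n: int(t * fs) + post_n]
                     for t in onsets_s])

epochs = epoch_around(eeg, fixation_onsets_s, fs)  # -> (n_fixations, 64, 1000)
```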

Language: English

Citations

0

A multi-subject and multi-session EEG dataset for modelling human visual object recognition

Shuning Xue, Bu Jin, Jie Jiang

et al.

Scientific Data, Journal Year: 2025, Volume and Issue: 12(1)

Published: April 19, 2025

We share a multi-subject and multi-session (MSS) dataset with 122-channel electroencephalographic (EEG) signals collected from 32 human participants. The data were obtained during serial visual presentation experiments in two paradigms. The dataset for the first paradigm consists of around 800,000 trials presenting stimulus sequences at 5 Hz. The second paradigm comprises 40,000 trials displaying each image for 1 second. Each participant completed several sessions on different days, and each session lasted approximately 1.5 hours of EEG recording. The stimulus set used in the experiments included 10,000 images, 500 images per class, manually selected from the PASCAL and ImageNet databases. The MSS dataset can be useful for various studies, including but not limited to (1) exploring the characteristics of the EEG visual response, (2) comparing differences in the EEG response between paradigms, and (3) designing machine learning algorithms for cross-subject and cross-session brain-computer interfaces (BCIs) using multiple subjects and sessions.
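For the 5 Hz paradigm, stimulus onsets follow directly from the sequence start time (one image every 200 ms), so epoching reduces to regular slicing. The snippet below is a minimal sketch under assumed sampling-rate and timing values; it does not reflect the dataset's actual file format.

```python
import numpy as np

fs = 1000                              # assumed sampling rate (Hz)
eeg = np.random.randn(122, 120_000)    # 122 channels x samples (placeholder)
seq_start_s, n_stimuli, soa_s = 2.0, 50, 0.2   # 5 Hz => one image per 200 ms

onset_samples = ((seq_start_s + soa_s * np.arange(n_stimuli)) * fs).astype(int)
# 0-500 ms epochs relative to each stimulus onset -> (n_stimuli, 122, 500)
epochs = np.stack([eeg[:, s:s + int(0.5 * fs)] for s in onset_samples])
```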

Language: English

Citations

0

Assessing Driver Cognitive Load from Handsfree Mobile Phone Use: Innovative Analysis Approach Based on Heart Rate, Blood Pressure and Machine Learning
Mhd Saeed Sharif, Boniface Ndubuisi Ossai, Jijomon Chettuthara Moncy

et al.

International Journal of Human-Computer Interaction, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 16

Published: May 8, 2025

Language: English

Citations

0

Decoding visual brain representations from electroencephalography through knowledge distillation and latent diffusion models
Matteo Ferrante, Tommaso Boccato, Stefano Bargione

et al.

Computers in Biology and Medicine, Journal Year: 2024, Volume and Issue: 178, P. 108701 - 108701

Published: June 7, 2024

Decoding visual representations from human brain activity has emerged as a thriving research domain, particularly in the context of brain–computer interfaces. Our study presents an innovative method that employs knowledge distillation to train an EEG classifier and reconstruct images from the ImageNet and THINGS-EEG 2 datasets using only electroencephalography (EEG) data from participants who have viewed the images themselves (i.e., "brain decoding"). We analyzed EEG recordings from 6 participants for the ImageNet dataset and 10 for the THINGS-EEG 2 dataset, exposed to images spanning unique semantic categories. These EEG readings were converted into spectrograms, which were then used to train a convolutional neural network (CNN), integrated with a knowledge distillation procedure based on a pre-trained Contrastive Language-Image Pre-Training (CLIP)-based image classification teacher network. This strategy allowed our model to attain a top-5 accuracy of 87%, significantly outperforming a standard CNN and various RNN-based benchmarks. Additionally, we incorporated an image reconstruction mechanism based on latent diffusion models, allowing us to generate an estimate of the images that had elicited the recorded activity. Our architecture therefore not only decodes images from neural activity but also offers a credible reconstruction from EEG only, paving the way for, e.g., swift, individualized feedback experiments.
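The distillation objective described above can be summarized in a few lines: the student (the spectrogram CNN) is trained against a blend of hard labels and the soft targets produced by the frozen CLIP-based teacher. The temperature and mixing weight below are conventional knowledge-distillation defaults, not values reported by the authors.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-target KL divergence."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Per batch: teacher_logits come from the frozen image teacher applied to the
# images the participant saw; student_logits come from the CNN applied to the
# corresponding EEG spectrograms.
```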

Language: English

Citations

3

Advancing EEG-based brain-computer interface technology via PEDOT:PSS electrodes
Yang Li, Yuzhe Gu, Junchen Teng

et al.

Matter, Journal Year: 2024, Volume and Issue: 7(9), P. 2859 - 2895

Published: Sept. 1, 2024

Language: English

Citations

3

Ecological decoding of visual aesthetic preference with oscillatory electroencephalogram features—A mini-review

Marc Welter, Fabien Lotte

Frontiers in Neuroergonomics, Journal Year: 2024, Volume and Issue: 5

Published: Feb. 21, 2024

In today's digital information age, human exposure to visual artifacts has reached an unprecedented quasi-omnipresence. Some of these cultural artifacts are elevated to the status of artworks, which indicates a special appreciation of these objects. For many persons, the perception of such artworks coincides with aesthetic experiences (AE) that can positively affect health and wellbeing. AEs are composed of complex cognitive and affective mental and physiological states. A more profound scientific understanding of the neural dynamics behind AEs would allow the development of passive Brain-Computer-Interfaces (BCI) that offer personalized art presentation to improve AE without the necessity of explicit user feedback. However, previous empirical research in neuroaesthetics predominantly investigated functional Magnetic Resonance Imaging and Event-Related-Potential correlates of AE under unnaturalistic laboratory conditions, which might not yield the best features for practical neuroaesthetic BCIs. Furthermore, AE has, until recently, largely been framed as the experience of beauty or pleasantness. Yet, these concepts do not encompass all types of AE. Thus, their scope is too narrow for optimal AE across individuals and cultures. This narrative mini-review summarizes the state of the art of oscillatory Electroencephalography (EEG)-based neuroaesthetics and paints a road map toward ecologically valid BCI systems that could optimize AEs, as well as their beneficial consequences. We detail reported oscillatory EEG correlates of AEs and machine learning approaches used to classify them. We also highlight current limitations and suggest future directions for the decoding of AEs from EEG.
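As context for the machine learning approaches the review surveys, a typical oscillatory-EEG pipeline extracts band-power features and feeds them to a simple classifier. The sketch below uses conventional band definitions and placeholder data; it is a generic illustration, not a pipeline from any specific study in the review.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

fs = 250
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """(n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs)  # PSD along the last axis
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(-1)
             for lo, hi in bands.values()]
    return np.concatenate(feats, axis=-1)

X = band_power_features(np.random.randn(100, 32, 500))  # placeholder epochs
y = np.random.randint(0, 2, 100)       # e.g., high vs. low aesthetic rating
clf = LogisticRegression(max_iter=1000).fit(X, y)
```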

Language: English

Citations

2

Brain Decodes Deep Nets

Huzheng Yang, James C. Gee, Jianbo Shi

et al.

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2024, Volume and Issue: 364, P. 23030 - 23040

Published: June 16, 2024

Citations

2

Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning

M. Rehman, Humaira Anwer, Helena Garay

et al.

Sensors, Journal Year: 2024, Volume and Issue: 24(21), P. 6965 - 6965

Published: Oct. 30, 2024

The perception and recognition of the objects around us empower environmental interaction. Harnessing the brain's signals to achieve this objective has consistently posed difficulties. Researchers are exploring whether the poor accuracy in this field is a result of the design of the temporal stimulation (block versus rapid event) or of the inherent complexity of electroencephalogram (EEG) signals. Decoding perceptive signal responses in subjects has become increasingly complex due to high noise levels and the non-stationary nature of brain activities: EEG signals have low resolution and are non-stationary, i.e., their mean and variance vary over time. This study aims to develop a deep learning model for decoding subjects' responses to rapid-event visual stimuli and highlights the major factors that contribute to low accuracy in this classification task. The proposed multi-class, multi-channel model integrates feature fusion to handle these complex signals and is applied to the largest publicly available dataset, consisting of 40 object classes with 1000 images per class. Contemporary state-of-the-art studies in this area investigating a large number of classes have achieved a maximum accuracy of 17.6%. In contrast, our approach, based on Multi-Class, Multi-Channel Feature Fusion (MCCFF), achieves 33.17% accuracy across 40 classes. These results demonstrate the potential of the model in advancing EEG-based visual decoding and offering future applications in machine vision models.
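The feature-fusion idea can be illustrated with a small PyTorch model in which separate convolutional branches process groups of channels and their features are concatenated before a 40-class head. The grouping, layer sizes, and kernel widths below are invented for illustration and are not the authors' MCCFF architecture.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Per-channel-group conv branches, concatenated before a 40-class head."""
    def __init__(self, n_groups=4, chans_per_group=32, n_classes=40):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(chans_per_group, 16, kernel_size=7),
                          nn.ReLU(), nn.AdaptiveAvgPool1d(8), nn.Flatten())
            for _ in range(n_groups)])
        self.head = nn.Linear(n_groups * 16 * 8, n_classes)

    def forward(self, x):  # x: (batch, n_groups, chans_per_group, time)
        feats = [branch(x[:, g]) for g, branch in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))  # fuse branch features

logits = FusionNet()(torch.randn(2, 4, 32, 250))   # -> (2, 40)
```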

Language: English

Citations

2

“Identifying and characterizing scene representations relevant for categorization behavior”
Johannes Singer, Agnessa Karapetian, Martin N. Hebart

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Aug. 18, 2023

Abstract Scene recognition is a core sensory capacity that enables humans to adaptively interact with their environment. Despite substantial progress in our understanding of the neural representations underlying scene recognition, the relevance of these representations for behavior given varying task demands remains unknown. To address this, we aimed to identify behaviorally relevant scene representations, to characterize them in terms of visual features, and to reveal how they vary across different tasks. We recorded fMRI data while human participants viewed scenes and linked brain responses to behavior in three tasks acquired in separate sessions: manmade/natural categorization, basic-level categorization, and fixation color discrimination. We found correlations between categorization response times and scene-specific brain responses, quantified as the distance to a hyperplane derived from a multivariate classifier. Across tasks, these effects were found in largely distinct parts of the ventral visual stream. This suggests that different scene representations are behaviorally relevant depending on the task. Next, using deep neural networks as a proxy for visual feature representations, we found that features in early/intermediate layers mediated the relationship between brain responses and behavior in both categorization tasks, indicating a contribution of low-/mid-level visual features to these representations. Finally, we observed opposite patterns of brain-behavior correlations in the fixation task, suggesting interference when representations do not align with task content. Together, these results characterize the spatial extent, feature content, and task-dependence of the scene representations that mediate behavior for complex scenes.
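The distance-to-hyperplane analysis has a compact form: train a linear classifier on the fMRI patterns, take each trial's cross-validated signed distance to the decision boundary, and correlate it with response times. The sketch below uses synthetic placeholder data and hypothetical variable names, following the general logic described in the abstract rather than the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))   # trial-wise fMRI patterns (placeholder)
y = rng.integers(0, 2, 200)           # e.g., manmade vs. natural labels
rts = rng.uniform(0.4, 1.2, 200)      # categorization response times (s)

# Cross-validated signed distance to the classifier hyperplane, per trial.
dists = cross_val_predict(LinearSVC(dual=False), X, y, cv=5,
                          method="decision_function")
# Hypothesized link: stronger evidence (larger |distance|) <-> faster responses.
rho, p = spearmanr(np.abs(dists), rts)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```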

Language: English

Citations

4