Motor functions and actions DOI
Marius Zimmermann, Angelika Lingnau

Elsevier eBooks, Journal year: 2024, Issue: unknown, Pages: 382–399

Published: Aug. 1, 2024

Language: English

Cited

0

Action Understanding DOI
Angelika Lingnau, Paul E. Downing

Published: April 10, 2024

The human ability to effortlessly understand the actions of other people has been a focus of research in cognitive neuroscience for decades. What have we learned about this ability, and what open questions remain? In this Element, the authors address these questions by considering the kinds of information an observer may gain when viewing an action. A 'what, how, why' framing organises the evidence and theories: what representations support classifying an action; how the way an action is performed supports observational learning and inferences about people; and why an actor's intentions can be inferred from her actions. The Element further surveys the brain systems involved in action understanding, inspired by 'mirror neurons' and related concepts. Understanding action through vision is a multi-faceted process that serves many behavioural goals and is served by diverse mechanisms and brain systems.

Language: English

Cited

10

Overlapping representations of observed actions and action‐related features DOI Creative Commons
Zuzanna Kabulska, Tonghe Zhuang, Angelika Lingnau

et al.

Human Brain Mapping, Journal year: 2024, Issue: 45(3)

Published: Feb. 15, 2024

The lateral occipitotemporal cortex (LOTC) has been shown to capture the representational structure of a smaller range of actions. In the current study, we carried out an fMRI experiment in which we presented human participants with images depicting 100 different actions and used representational similarity analysis (RSA) to determine the brain regions that capture the semantic action space established using behavioural judgments of action similarity. Moreover, to examine the contribution of a wide range of action-related features to the neural representation of observed actions, we constructed a feature model on the basis of ratings of 44 features. We found that observed actions are best captured by overlapping activation patterns in bilateral LOTC and ventral occipitotemporal cortex (VOTC). An RSA of the eight dimensions resulting from a principal component analysis of the feature model revealed partly overlapping representations within the LOTC, VOTC, and parietal lobe. Our results suggest spatially overlapping representations of observed actions and their associated features. Together, our results add to the understanding of the kind of information along which observed actions are represented and how these representations support action understanding.

Language: English

Cited

3
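The representational similarity analysis (RSA) named in the abstract above compares a model-based dissimilarity structure with a neural one. A minimal sketch of that logic, using random stand-in data and hypothetical variable names (the study's actual RDMs come from behavioural judgments and fMRI activation patterns):

```python
# Minimal RSA sketch: build two representational dissimilarity matrices
# (RDMs) and rank-correlate them. All data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_actions = 100          # the study used images of 100 actions
n_features = 44          # and ratings on 44 action-related features

# Hypothetical model RDM: pairwise distances between feature-rating vectors.
feature_ratings = rng.random((n_actions, n_features))
model_rdm = pdist(feature_ratings, metric="correlation")

# Hypothetical neural RDM: pairwise distances between voxel patterns
# from a region of interest such as the LOTC.
voxel_patterns = rng.random((n_actions, 200))
neural_rdm = pdist(voxel_patterns, metric="correlation")

# RSA compares the two RDMs, typically with a rank correlation over all
# condition pairs (pdist already returns the upper triangle as a vector).
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

With random placeholders the correlation hovers near zero; in the actual analysis, a reliably positive rho in a region indicates that its activation patterns mirror the model's similarity structure.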

Low-frequency neural activity tracks syntactic information through semantic mediation DOI
Yuan Xie, Peng Zhou, Likan Zhan

et al.

Brain and Language, Journal year: 2025, Issue: 261, Pages: 105532–105532

Published: Jan. 8, 2025

Language: English

Cited

0

Actions at a glance: The time course of action, object, and scene recognition in a free recall paradigm DOI Creative Commons

Maximilian Reger,

Oleg Vrabie,

Gregor Volberg

et al.

Cognitive, Affective, & Behavioral Neuroscience, Journal year: 2025, Issue: unknown

Published: Feb. 26, 2025

Being able to quickly recognize other people's actions lies at the heart of our ability to efficiently interact with the environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., those for processing objects and scenes. However, the stimulus presentation times required to extract information about actions, objects, and scenes have not yet been directly compared. To address this gap in the literature, we compared recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33–500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory information. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than recognizing objects (68 ms) and scenes (84 ms). More specific information required approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest for food-related actions. Together, our data suggest that evidence about actions, objects, and scenes is gathered in parallel when they appear in the same scene, but that evidence about actions accumulates faster and may initially reflect static body posture.

Language: English

Cited

0

Shared representations of human actions across vision and language DOI Creative Commons
Diana C. Dima,

Sugitha Janarthanan,

Jody C. Culham

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Issue: unknown

Published: Nov. 6, 2023

Humans can recognize and communicate about many actions performed by others. How are actions organized in the mind, and is this organization shared across vision and language? We collected similarity judgments of human actions depicted through naturalistic videos and sentences, and tested four models of action categorization, defining actions at different levels of abstraction ranging from the specific (the action verb) to the broad (the action target: whether an action is directed towards an object, another person, or the self). The judgments reflected a shared organization of action representations across videos and sentences, determined mainly by the target of actions, even after accounting for other semantic features. Language model embeddings predicted the behavioral similarity judgments and captured target-related information alongside unique semantic information. Together, our results show how action concepts are organized in the mind and how this organization relates to large language model representations.

Language: English

Cited

1
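The abstract's analysis step, testing whether language-model embedding similarities predict behavioural similarity judgments, can be sketched as correlating the off-diagonal entries of two similarity matrices. The embeddings and judgments below are random placeholders (the study used embeddings from pretrained language models and human ratings), and all variable names are hypothetical:

```python
# Toy sketch: do pairwise cosine similarities of sentence embeddings
# predict behavioural similarity judgments? All data are placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_actions = 18  # hypothetical number of action stimuli

# Placeholder embeddings (rows = actions), unit-normalized so the dot
# product below equals cosine similarity.
emb = rng.standard_normal((n_actions, 300))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
emb_sim = emb @ emb.T

# Placeholder behavioural similarity judgments, symmetrized.
judged = rng.random((n_actions, n_actions))
judged = (judged + judged.T) / 2

# Correlate the unique off-diagonal pairs of the two similarity matrices.
iu = np.triu_indices(n_actions, k=1)
rho, p = spearmanr(emb_sim[iu], judged[iu])
print(f"embedding-behaviour correlation: rho={rho:.3f}")
```

Restricting the comparison to upper-triangle pairs avoids inflating the correlation with trivial diagonal (self-similarity) entries.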

Shared representations of human actions across vision and language DOI Creative Commons
Diana C. Dima,

Sugitha Janarthanan,

Jody C. Culham

et al.

Neuropsychologia, Journal year: 2024, Issue: 202, Pages: 108962–108962

Published: July 22, 2024

Humans can recognize and communicate about many actions performed by others. How are actions organized in the mind, and is this organization shared across vision and language? We collected similarity judgments of human actions depicted through naturalistic videos and sentences, and tested four models of action categorization, defining actions at different levels of abstraction ranging from the specific (the action verb) to the broad (the action target: whether an action is directed towards an object, another person, or the self). The judgments reflected a shared organization of action representations across videos and sentences, determined mainly by the target of actions, even after accounting for other semantic features. Furthermore, language model embeddings predicted the behavioral similarity judgments and captured target-related information alongside unique semantic information. Together, our results show that action concepts are organized similarly in the mind across vision and language, and that this organization reflects socially relevant goals.

Language: English

Cited

0
