The perceptual primacy of feeling: Affectless visual machines explain a majority of variance in human visually evoked affect
Colin Conwell, Daniel W. Graham, Chelsea Boccagno, et al.

Proceedings of the National Academy of Sciences, 2025, 122(4)

Published: Jan. 23, 2025

Looking at the world often involves not just seeing things, but feeling things. Modern feedforward machine vision systems that learn to perceive in the absence of active physiology, deliberative thought, or any form of feedback resembling human affective experience offer tools to demystify the relationship between seeing and feeling, and to assess how much of visually evoked affective experience may be a straightforward function of representation learning over natural image statistics. In this work, we deploy a diverse sample of 180 state-of-the-art deep neural network models trained only on canonical computer vision tasks to predict human ratings of arousal, valence, and beauty for images from multiple categories (objects, faces, landscapes, art) across two datasets. Importantly, we use the features of these models without any additional learning, linearly decoding human affective responses from network activity in much the same way neuroscientists decode information from neural recordings. Aggregate analysis across our survey demonstrates that predictions from purely perceptual representations explain a majority of the explainable variance in average ratings of arousal, valence, and beauty alike. Finer-grained analysis within our survey (e.g., comparisons between shallower and deeper layers, or between randomly initialized, category-supervised, and self-supervised models) points to rich, preconceptual abstraction (learned through the diversity of visual experience) as a key driver of these predictions. Taken together, these results provide further computational evidence for an information-processing account of visually evoked affect linked directly to the efficient encoding of natural image statistics, and hint at a locus of aesthetic valuation immediately proximate to perception.

Language: English
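
The decoding approach described in the abstract lends itself to a compact illustration. The sketch below is not the authors' released pipeline: the backbone choice (an ImageNet-trained ResNet-50), the use of its penultimate layer, and the placeholder images and ratings are all illustrative assumptions standing in for the paper's 180-model survey and human-rated stimuli. It shows the general pattern of extracting frozen features and fitting only a cross-validated linear readout.

    # Minimal sketch (assumed details, not the authors' code): linearly decode
    # affect ratings from the frozen features of a pretrained vision model.
    import numpy as np
    import torch
    import torchvision.models as models
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_predict

    # Frozen, task-trained perceptual features: a category-supervised ResNet-50
    # with its classification head replaced by an identity, so the network
    # outputs its pooled penultimate-layer activations.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    @torch.no_grad()
    def extract_features(images: torch.Tensor) -> np.ndarray:
        """images: (N, 3, 224, 224) tensor, already preprocessed/normalized."""
        return backbone(images).cpu().numpy()

    # Placeholder stimuli and ratings; in the paper these would be natural
    # images rated for arousal, valence, or beauty by human participants.
    images = torch.randn(100, 3, 224, 224)
    ratings = np.random.rand(100)

    # Decode ratings from network activity with cross-validated ridge
    # regression, analogous to decoding information from neural recordings;
    # no weights of the vision model itself are updated.
    X = extract_features(images)
    decoder = RidgeCV(alphas=np.logspace(-3, 3, 13))
    predicted = cross_val_predict(decoder, X, ratings, cv=5)
    print("cross-validated r =", np.corrcoef(predicted, ratings)[0, 1])

On random inputs the cross-validated correlation will hover near zero; the paper's claim is that with real images and real human ratings, readouts of this kind capture a majority of the explainable variance.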
