Modular representations emerge in neural networks trained to perform context-dependent tasks
W. Jeffrey Johnston, Stefano Fusi

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 1, 2024

Abstract The brain has large-scale modular structure in the form of regions, which are thought to arise from constraints on connectivity and the physical geometry of the cortical sheet. In contrast, experimental and theoretical work has argued both for and against the existence of specialized sub-populations of neurons (modules) within single regions. By studying artificial neural networks, we show that this local modularity emerges to support context-dependent behavior, but only when the input is low-dimensional. No anatomical constraints are required. We also show specialization at the population level (different modules correspond to orthogonal subspaces). Modularity yields abstract representations, allows rapid learning and generalization on novel tasks, and facilitates learning of related contexts. Non-modular representations facilitate learning of unrelated contexts. Our findings reconcile conflicting results and make predictions for future experiments.

Language: English
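
The training setup and the notion of a module described in this abstract can be made concrete with a small simulation. The sketch below is a rough illustration, not the authors' code: it trains a one-hidden-layer network on a toy context-dependent task with low-dimensional input and scores each hidden unit by which context's feature it tracks. The task, network size, and modularity index are all assumed choices.

    # Hypothetical sketch (not the paper's code): train a small network on a
    # context-dependent task and compute a simple per-unit "module" preference.
    # Assumes NumPy and scikit-learn; all design choices are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 4000
    context = rng.integers(0, 2, n)            # which rule is in effect
    feat = rng.normal(size=(n, 2))             # two low-dimensional features
    # Context 0: report sign of feature 0; context 1: report sign of feature 1.
    y = np.where(context == 0, feat[:, 0] > 0, feat[:, 1] > 0).astype(int)
    X = np.column_stack([context == 0, context == 1, feat]).astype(float)

    net = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                        max_iter=2000, random_state=0).fit(X, y)

    # Hidden-layer activations (single hidden layer, ReLU).
    H = np.maximum(X @ net.coefs_[0] + net.intercepts_[0], 0.0)

    def abs_corr(a, b):
        # Absolute Pearson correlation, safe for constant (dead) units.
        a, b = a - a.mean(), b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
        return abs((a * b).sum() / denom)

    # Per-unit preference: does the unit track the context-0 feature or the
    # context-1 feature? Near +/-1 = specialized, near 0 = mixed selectivity.
    c0 = np.array([abs_corr(H[:, j], feat[:, 0]) for j in range(H.shape[1])])
    c1 = np.array([abs_corr(H[:, j], feat[:, 1]) for j in range(H.shape[1])])
    preference = (c0 - c1) / (c0 + c1 + 1e-12)
    modularity_index = np.abs(preference).mean()   # ~1 modular, ~0 mixed
    print(f"accuracy={net.score(X, y):.2f}  modularity={modularity_index:.2f}")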

A neural geometry approach comprehensively explains apparently conflicting models of visual perceptual learning
Yu-Ang Cheng, Mehdi Sanayei, Xing Chen

et al.

Nature Human Behaviour, Journal Year: 2025, Volume and Issue: unknown

Published: March 31, 2025

Abstract Visual perceptual learning (VPL), defined as long-term improvement in a visual task, is considered a crucial tool for elucidating underlying perceptual and brain plasticity. Previous studies have proposed several neural models of VPL, including changes in tuning or in noise correlations. Here, to adjudicate different models, we propose that all changes at single units can be conceptualized as geometric transformations of population response manifolds in high-dimensional space. Following this geometry approach, we identified manifold shrinkage due to reduced trial-by-trial variability, rather than correlation changes, as the primary mechanism of VPL. Furthermore, this approach successfully explains VPL effects across artificial responses in deep networks, multivariate blood-oxygenation-level-dependent signals in humans, and multiunit activities in monkeys. These converging results suggest that our approach comprehensively explains a wide range of empirical findings and reconciles previously conflicting models of VPL.

Language: English

Citations: 0
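
To make the geometric picture concrete, the sketch below simulates population responses whose tuning is fixed but whose trial-by-trial variability drops with learning, then measures both the size of the response "manifold" (total trial-by-trial variance) and stimulus discriminability. It is a minimal illustration under assumed Gaussian noise, not the analysis pipeline used in the paper.

    # Hypothetical sketch (not the authors' pipeline): treat trial-by-trial
    # population responses as a cloud in neural state space and quantify its
    # size before vs. after learning. Reduced variability shrinks the cloud
    # and improves discriminability even with unchanged tuning.
    import numpy as np

    rng = np.random.default_rng(1)
    n_units, n_trials = 50, 200
    mu_a, mu_b = rng.normal(size=n_units), rng.normal(size=n_units)  # fixed tuning

    def simulate(noise_sd):
        # Population responses to stimuli A and B with given trial noise.
        a = mu_a + noise_sd * rng.normal(size=(n_trials, n_units))
        b = mu_b + noise_sd * rng.normal(size=(n_trials, n_units))
        return a, b

    def manifold_size(resp):
        # Total trial-by-trial variance (trace of the covariance) of one cloud.
        return np.trace(np.cov(resp, rowvar=False))

    def discriminability(a, b):
        # d'-like separation of the two response clouds along the A-B axis.
        axis = a.mean(0) - b.mean(0)
        axis /= np.linalg.norm(axis)
        pa, pb = a @ axis, b @ axis
        pooled = np.sqrt(0.5 * (pa.var() + pb.var()))
        return abs(pa.mean() - pb.mean()) / pooled

    pre = simulate(noise_sd=1.0)    # before learning: high variability
    post = simulate(noise_sd=0.5)   # after learning: reduced variability only
    for name, (a, b) in [("pre", pre), ("post", post)]:
        print(name, "manifold size:", round(manifold_size(a), 1),
              "d':", round(discriminability(a, b), 2))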

Conclusions about Neural Network to Brain Alignment are Profoundly Impacted by the Similarity Measure

Ansh Soni, Sudhanshu Srivastava, Konrad P. Körding

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: Aug. 9, 2024

Abstract Deep neural networks are popular models of brain activity, and many studies ask which networks provide the best fit. To make such comparisons, papers use similarity measures such as Linear Predictivity or Representational Similarity Analysis (RSA). It is often assumed that these measures yield comparable results, making their choice inconsequential, but is it? Here we ask if and how the choice of measure affects conclusions. We find that the measure influences both layer-area correspondence and the ranking of models. We explore how these choices impact prior conclusions about which models are most “brain-like”. Our results suggest that widely held conclusions regarding the relative alignment of different networks with brain activity have fragile foundations.

Language: English

Citations: 3
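
The two similarity measures named in the abstract can be written down in a few lines. The sketch below is an illustration on synthetic data, not the paper's code: it scores the same model features against the same simulated "brain" responses with cross-validated linear predictivity and with RSA. The regression, distance metric, and noise level are assumed choices.

    # Hypothetical sketch: compute two common model-to-brain similarity
    # measures on the same synthetic data, to show they are distinct scores.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(2)
    n_stim, n_model_feat, n_voxels = 120, 40, 25
    model_feats = rng.normal(size=(n_stim, n_model_feat))
    # Synthetic "brain" responses: a noisy linear readout of the model features.
    readout = rng.normal(size=(n_model_feat, n_voxels))
    brain = model_feats @ readout + rng.normal(scale=2.0, size=(n_stim, n_voxels))

    def linear_predictivity(feats, brain):
        # Mean cross-validated correlation between predicted and actual voxels.
        pred = cross_val_predict(Ridge(alpha=1.0), feats, brain, cv=5)
        return np.mean([np.corrcoef(pred[:, v], brain[:, v])[0, 1]
                        for v in range(brain.shape[1])])

    def rsa(feats, brain):
        # Spearman correlation between the two representational dissimilarity
        # matrices (correlation distance across stimuli).
        return spearmanr(pdist(feats, "correlation"),
                         pdist(brain, "correlation"))[0]

    print("linear predictivity:", round(linear_predictivity(model_feats, brain), 3))
    print("RSA:", round(rsa(model_feats, brain), 3))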

Universality of representation in biological and artificial neural networks
Eghbal A. Hosseini, Colton Casto, Noga Zaslavsky

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 26, 2024

Abstract Many artificial neural networks (ANNs) trained with ecologically plausible objectives on naturalistic data align with behavior and representations in biological systems. Here, we show that this alignment is a consequence of convergence onto the same representations by high-performing ANNs and by brains. We developed a method to identify stimuli that systematically vary the degree of inter-model representation agreement. Across language and vision, we then showed that stimuli from high- and low-agreement sets predictably modulated model-to-brain alignment. We also examined which stimulus features distinguish high- from low-agreement sentences and images. Our results establish universality as a core component of model-to-brain alignment and provide a new approach for using ANNs to uncover the structure of neural computations.

Language: English

Citations: 1
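
The idea of stimuli that vary in inter-model representation agreement can be illustrated with a simple per-stimulus score. The sketch below compares two synthetic models' representational dissimilarity patterns stimulus by stimulus and splits the stimuli into high- and low-agreement sets; the per-stimulus RDM-row correlation used here is an assumed proxy, not the authors' method.

    # Hypothetical sketch: score each stimulus by how similarly two models
    # embed it relative to the rest of the stimulus set, then split stimuli
    # into high- and low-agreement groups.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    rng = np.random.default_rng(3)
    n_stim = 200
    # Stand-ins for two ANNs' representations of the same stimuli.
    feats_a = rng.normal(size=(n_stim, 64))
    feats_b = 0.5 * feats_a[:, :32].repeat(2, axis=1) + rng.normal(size=(n_stim, 64))

    rdm_a = squareform(pdist(feats_a, "correlation"))   # stimulus-by-stimulus
    rdm_b = squareform(pdist(feats_b, "correlation"))

    # Per-stimulus agreement: how similarly do the two models rank this
    # stimulus's dissimilarity to every other stimulus?
    agreement = np.array([spearmanr(np.delete(rdm_a[i], i),
                                    np.delete(rdm_b[i], i))[0]
                          for i in range(n_stim)])

    order = np.argsort(agreement)
    low_set, high_set = order[:20], order[-20:]
    print("mean agreement, low set: ", round(agreement[low_set].mean(), 2))
    print("mean agreement, high set:", round(agreement[high_set].mean(), 2))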
