Feedforward amplification in recurrent networks underlies paradoxical neural coding
Kayvon Daie, Lorenzo Fontolan, Shaul Druckmann, et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: Aug. 6, 2023

The activity of single neurons encodes behavioral variables, such as sensory stimuli (Hubel & Wiesel 1959) and choice (Britten et al. 1992; Guo 2014), but their influence on behavior is often mysterious. We estimated the behavioral influence of a unit of neural activity from recordings in anterior lateral motor cortex (ALM) in mice performing a memory-guided movement task (H. K. Inagaki 2018). Choice selectivity grew as it flowed through a sequence of directions in activity space. Early directions carried little selectivity but were predicted to have a large influence, while late directions carried large selectivity but little influence. Consequently, influence was only weakly correlated with selectivity; a substantial proportion of neurons selective for one choice were predicted to influence choice in the opposite direction. These results are consistent with models in which recurrent circuits produce feedforward amplification (Goldman 2009; Ganguli 2008; Murphy & Miller 2009), so that small-amplitude signals along early directions are amplified along low-dimensional late directions to drive behavior. Targeted photostimulation experiments (Daie 2021b) revealed that activating early directions triggered sequential activity along later directions and caused predictable behavioral biases. These results demonstrate the existence of an amplifying dynamical motif in the cortex, explain paradoxical responses to perturbations (Chettih & Harvey 2019; Daie 2021b; Russell 2019), and reveal the behavioral relevance of small-amplitude neural dynamics.

Language: English
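
As a toy illustration of the amplification motif this abstract invokes (the sketch is mine, not the authors' model; the chain length, gain, and integration constants are arbitrary), a linear rate network with purely feedforward connectivity between orthogonal activity directions turns a small pulse along the first, "early" direction into a much larger transient along the last, "late" direction, in the spirit of Goldman (2009) and Murphy & Miller (2009):

import numpy as np

# Linear rate dynamics x' = -x + W x, Euler-integrated, with a purely
# feedforward (non-normal) weight matrix: direction i drives direction
# i+1 with gain w, and there is no feedback. All parameters are
# illustrative.
n = 4            # length of the feedforward chain of activity directions
w = 4.0          # gain from each direction to the next
dt = 0.01
steps = 3000     # integrate out to t = 30 time constants

W = np.zeros((n, n))
for i in range(n - 1):
    W[i + 1, i] = w

x = np.zeros(n)
x[0] = 0.01      # small-amplitude signal along the "early" direction

peak = np.zeros(n)
for _ in range(steps):
    x = x + dt * (-x + W @ x)
    peak = np.maximum(peak, np.abs(x))

print("peak amplitude along each direction:", np.round(peak, 4))
# The pulse arrives sequentially at later directions and grows at each
# stage (analytically, direction k >= 2 peaks near
# w**(k-1) / sqrt(2*pi*(k-1)) times the input amplitude), so an early,
# weakly selective direction can strongly influence later, highly
# selective ones.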

The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep
Rishidev Chaudhuri, Berk Gerçek, Biraj Pandey, et al.

Nature Neuroscience, Journal Year: 2019, Volume and Issue: 22(9), P. 1512 - 1520

Published: Aug. 12, 2019

Language: English

Citations: 336

Geometry of abstract learned knowledge in the hippocampus

Edward H. Nieh, Manuel Schottdorf, Nicolas W. Freeman, et al.

Nature, Journal Year: 2021, Volume and Issue: 595(7865), P. 80 - 84

Published: June 16, 2021

Language: English

Citations: 263

Neural population geometry: An approach for understanding biological and artificial neural networks
SueYeon Chung, L. F. Abbott

Current Opinion in Neurobiology, Journal Year: 2021, Volume and Issue: 70, P. 137 - 144

Published: Oct. 1, 2021

Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded in and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor of task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures, and timescales. Thus, neural population geometry has the potential to unify our understanding of biological and artificial neural networks, bridging the gap between single neurons, populations, and behavior.

Language: English

Citations: 201
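
As one concrete example of the population-level descriptors reviewed above, the sketch below computes the participation ratio, a standard summary of how many dimensions population activity effectively occupies. The simulation and its parameters are my own illustration, not taken from the paper:

import numpy as np

# Participation ratio: PR = (sum_i lam_i)^2 / sum_i lam_i^2, where lam_i
# are eigenvalues of the activity covariance. PR ranges from 1 (all
# variance along one direction) to the number of neurons (isotropic).
def participation_ratio(X):
    # X: (n_samples, n_neurons) array of population activity
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
n_neurons, n_samples = 100, 2000

# Population whose activity lives in a 3-D subspace, plus weak noise.
latent = rng.normal(size=(n_samples, 3))
projection = rng.normal(size=(3, n_neurons))
X_low = latent @ projection + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Unstructured population: every neuron fluctuates independently.
X_high = rng.normal(size=(n_samples, n_neurons))

print("PR of 3-D population:          %.1f" % participation_ratio(X_low))
print("PR of unstructured population: %.1f" % participation_ratio(X_high))
# The first value is close to 3; the second approaches n_neurons.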

Attractor and integrator networks in the brain

Mikail Khona, Ila Fiete

Nature Reviews Neuroscience, Journal Year: 2022, Volume and Issue: 23(12), P. 744 - 766

Published: Nov. 3, 2022

Language: English

Citations: 189

The population doctrine in cognitive neuroscience
R. Becket Ebitz, Benjamin Y. Hayden

Neuron, Journal Year: 2021, Volume and Issue: 109(19), P. 3055 - 3068

Published: Aug. 19, 2021

Language: English

Citations: 179

Neural tuning and representational geometry
Nikolaus Kriegeskorte, Xue-Xin Wei

Nature Reviews Neuroscience, Journal Year: 2021, Volume and Issue: 22(11), P. 703 - 718

Published: Sept. 14, 2021

Language: English

Citations: 146

Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity
Mehrdad Jazayeri, Srdjan Ostojic

Current Opinion in Neurobiology, Journal Year: 2021, Volume and Issue: 70, P. 113 - 120

Published: Sept. 17, 2021

Language: English

Citations: 141

A unifying perspective on neural manifolds and circuits for cognition
Christopher Langdon, Mikhail Genkin, Tatiana A. Engel, et al.

Nature Reviews Neuroscience, Journal Year: 2023, Volume and Issue: 24(6), P. 363 - 377

Published: April 13, 2023

Language: English

Citations: 118

Learning Structures: Predictive Representations, Replay, and Generalization
Ida Momennejad

Current Opinion in Behavioral Sciences, Journal Year: 2020, Volume and Issue: 32, P. 155 - 166

Published: April 1, 2020

Language: English

Citations: 138

Predictive learning as a network mechanism for extracting low-dimensional latent space representations
Stefano Recanatesi, Matthew Farrell, Guillaume Lajoie, et al.

Nature Communications, Journal Year: 2021, Volume and Issue: 12(1)

Published: March 3, 2021

Abstract: Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in network activity, i.e., in the learned representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when the mechanisms of sensory prediction coincide with those of extracting latent variables. Using a recurrent neural network model trained to predict a sequence of observations, we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data.

Language: English

Citations: 63
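
The recipe this abstract describes, train a recurrent network on next-observation prediction and then test whether latent variables can be read out linearly from its hidden states, can be sketched in a few lines. The toy environment (a random walk on a ring observed through a fixed nonlinear projection) and all parameters below are illustrative assumptions, not the authors' code:

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Latent variable: a random walk on a ring of n_pos sites. The network
# never sees the position directly. All sizes and the training schedule
# are illustrative assumptions.
n_pos, obs_dim, hid_dim, T = 20, 50, 64, 2000
pos = np.cumsum(rng.choice([-1, 1], size=T)) % n_pos

# Observations: a fixed random nonlinear embedding of the current position.
M = rng.normal(size=(n_pos, obs_dim))
obs = np.tanh(M[pos])                                         # (T, obs_dim)

x = torch.tensor(obs[:-1], dtype=torch.float32).unsqueeze(0)  # inputs
y = torch.tensor(obs[1:], dtype=torch.float32).unsqueeze(0)   # next-step targets

rnn = nn.RNN(obs_dim, hid_dim, batch_first=True)
readout = nn.Linear(hid_dim, obs_dim)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

# Predictive learning: minimize next-observation prediction error.
for step in range(200):
    h, _ = rnn(x)
    loss = ((readout(h) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Linear decodability of the latent: regress hidden states onto the
# (cos, sin) embedding of the ring position and report R^2.
with torch.no_grad():
    H = rnn(x)[0].squeeze(0).numpy()                          # (T-1, hid_dim)
angles = 2 * np.pi * pos[:-1] / n_pos
targets = np.stack([np.cos(angles), np.sin(angles)], axis=1)
W, *_ = np.linalg.lstsq(H, targets, rcond=None)
r2 = 1 - ((H @ W - targets) ** 2).sum() / ((targets - targets.mean(0)) ** 2).sum()
print(f"prediction loss: {loss.item():.4f}   latent decoding R^2: {r2:.2f}")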