Transients versus network interactions give rise to multistability through trapping mechanism
Kalel L. Rossi, Everton S. Medeiros, Peter Ashwin

et al.

Chaos An Interdisciplinary Journal of Nonlinear Science, Journal Year: 2025, Volume and Issue: 35(3)

Published: March 1, 2025

In networked systems, the interplay between the dynamics of individual subsystems and their network interactions has been found to generate multistability in various contexts. Despite its ubiquity, the specific mechanisms and ingredients that give rise to multistability from such an interplay remain poorly understood. In a network of coupled excitable units, we demonstrate that multistability is generated through a competition between the units' transient dynamics and their coupling. Specifically, the diffusive coupling between the units reinjects them into the excitability region of their state space, effectively trapping them there. We show that this mechanism leads to the coexistence of multiple types of oscillations: periodic, quasi-periodic, and even chaotic, although the units separately do not oscillate. Interestingly, we find that the attractors emerge through different bifurcations; in particular, the periodic attractors emerge through either saddle–node of limit cycles bifurcations or homoclinic bifurcations, but in all cases the reinjection mechanism is present.
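The trapping mechanism above is specific to the paper's model, but the ingredients it names, excitable units joined by diffusive coupling, can be sketched generically. The following is an illustrative toy, not the authors' system: two diffusively coupled FitzHugh–Nagumo units integrated with forward Euler, with all parameter values (a, b, eps, I, D, initial conditions) chosen here purely for illustration.

```python
import numpy as np

def coupled_fhn(T=500.0, dt=0.01, D=0.5, a=0.7, b=0.8, eps=0.08, I=0.3):
    """Euler integration of two diffusively coupled FitzHugh-Nagumo units.

    Each unit alone is excitable for a suitable drive I (it rests at a
    fixed point and fires only when perturbed); the diffusive term
    D * (v_j - v_i) is the kind of coupling that can reinject a unit's
    trajectory back into the excitable region of state space.
    """
    n = int(T / dt)
    v = np.array([0.5, -1.0])   # illustrative initial voltages
    w = np.array([0.0, 0.0])    # recovery variables
    traj = np.empty((n, 2))
    for k in range(n):
        dv = v - v**3 / 3 - w + I + D * (v[::-1] - v)  # diffusive coupling
        dw = eps * (v + a - b * w)
        v = v + dt * dv
        w = w + dt * dw
        traj[k] = v
    return traj

traj = coupled_fhn()
```

The cubic nonlinearity keeps trajectories bounded, so the sketch is mainly useful for inspecting how the coupling term deflects each unit's voltage trace relative to the uncoupled case.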

Language: English

Abstract representations emerge naturally in neural networks trained to perform multiple tasks
W. Jeffrey Johnston, Stefano Fusi

Nature Communications, Journal Year: 2023, Volume and Issue: 14(1)

Published: Feb. 23, 2023

Abstract Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability arises from a specific representational geometry, which we call abstract and which is referred to as disentangled in machine learning. These abstract representations have been observed in recent neurophysiological studies. However, it is unknown how they emerge. Here, using feedforward neural networks, we show that the learning of multiple tasks causes abstract representations to emerge, with both supervised and reinforcement learning. We show that these abstract representations enable few-sample learning and reliable generalization on novel tasks. We conclude that abstract representations of sensory and cognitive variables may emerge from the multiple behaviors that animals exhibit in the natural world, and, as a consequence, could be pervasive in high-level brain regions. We also make several predictions about which variables will be represented abstractly.
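The generalization afforded by such an abstract (disentangled) geometry can be sketched with toy data rather than the paper's networks; everything below (axis construction, noise level, decoder choice) is an assumption for illustration. Each task variable gets its own coding axis, and a linear decoder trained in one context transfers to the held-out context.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Two task variables (say, stimulus value and context), each assigned its
# own random coding axis: a toy "abstract"/disentangled geometry.
value_axis = rng.standard_normal(dim)
context_axis = rng.standard_normal(dim)

def population(value, context, n=200, noise=0.3):
    """Noisy population responses for one (value, context) condition."""
    mean = value * value_axis + context * context_axis
    return mean + noise * rng.standard_normal((n, dim))

# Train a linear decoder for `value` using data from context -1 only.
X_train = np.vstack([population(-1, -1), population(+1, -1)])
y_train = np.repeat([-1.0, 1.0], 200)
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Test in the held-out context +1: because value and context occupy
# separate axes, the decoder generalizes across contexts.
X_test = np.vstack([population(-1, +1), population(+1, +1)])
y_test = np.repeat([-1.0, 1.0], 200)
acc = np.mean(np.sign(X_test @ w) == y_test)
```

With an entangled geometry (condition means placed at random, unrelated points) the same cross-context test would typically fail, which is the contrast the abstract draws.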

Language: English

Citations

43

Natural language instructions induce compositional generalization in networks of neurons
Reidar Riveland, Alexandre Pouget

Nature Neuroscience, Journal Year: 2024, Volume and Issue: 27(5), P. 988 - 999

Published: March 18, 2024

Abstract A fundamental human cognitive feat is the ability to interpret linguistic instructions in order to perform novel tasks without explicit task experience. Yet, the neural computations that might be used to accomplish this remain poorly understood. We use advances in natural language processing to create a neural model of generalization based on linguistic instructions. Models are trained on a set of common psychophysical tasks, and receive instructions embedded by a pretrained language model. Our best models can perform a previously unseen task with an average performance of 83% correct based solely on linguistic instructions (that is, zero-shot learning). We found that language scaffolds sensorimotor representations such that activity for interrelated tasks shares a common geometry with the semantic representations of instructions, allowing language to cue the proper composition of practiced skills in unseen settings. We show how this model generates a linguistic description of a novel task it has identified using only motor feedback, which can subsequently guide a partner model to perform the task. Our models offer several experimentally testable predictions outlining how linguistic information must be represented to facilitate flexible and general cognition in the human brain.

Language: English

Citations

9

Dynamical constraints on neural population activity
Emily R. Oby, Alan D. Degenhart, Erinn M. Grigsby

et al.

Nature Neuroscience, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 17, 2025

The manner in which neural activity unfolds over time is thought to be central to the sensory, motor and cognitive functions of the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that these activity time courses should be difficult to violate. We leveraged a brain-computer interface to challenge monkeys to violate the naturally occurring time courses of neural population activity that we observed in motor cortex. This included challenging the animals to traverse the natural time course of activity in a time-reversed manner. Animals were unable to violate these time courses when directly challenged to do so. These results provide empirical support that the time courses of neural activity in the brain indeed reflect the network-level computational mechanisms they are believed to implement.

Language: English

Citations

1

Latent circuit inference from heterogeneous neural responses during cognitive tasks
Christopher Langdon, Tatiana A. Engel

Nature Neuroscience, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 10, 2025

Higher cortical areas carry a wide range of sensory, cognitive and motor signals mixed in heterogeneous responses of single neurons tuned to multiple task variables. Dimensionality reduction methods that rely on correlations between neural activity and task variables leave unknown how heterogeneous responses arise from connectivity and drive behavior. We develop the latent circuit model, a dimensionality reduction approach in which task variables interact via low-dimensional recurrent connectivity to produce behavioral output. We apply latent circuit inference to recurrent neural networks trained to perform a context-dependent decision-making task and find a suppression mechanism in which contextual representations inhibit irrelevant sensory responses. We validate this mechanism by confirming the behavioral effects of patterned connectivity perturbations predicted by the latent circuit model. We find a similar suppression mechanism in the prefrontal cortex of monkeys performing the same task. Our results show that incorporating causal interactions among task variables is critical for identifying behaviorally relevant computations from neural response data.

Language: English

Citations

1

How to be a realist about computational neuroscience
Danielle J. Williams

Synthese, Journal Year: 2025, Volume and Issue: 205(3)

Published: Feb. 21, 2025

Language: English

Citations

1

Trained recurrent neural networks develop phase-locked limit cycles in a working memory task
Matthijs Pals, Jakob H. Macke, Omri Barak

et al.

PLoS Computational Biology, Journal Year: 2024, Volume and Issue: 20(2), P. e1011852 - e1011852

Published: Feb. 5, 2024

Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics can support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce their own oscillation, such that the phase difference between the two oscillations maintains the identity of transient stimuli. We found that trained networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of each attractor depends on both the amplitude and frequency of the reference oscillation, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: one that generates an oscillation and one that implements a coupling function between the internal oscillation and the external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.

Language: English

Citations

8

Building compositional tasks with shared neural subspaces
Sina Tafazoli, Flora Bouchacourt, Adel Ardalan

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: Feb. 1, 2024

Cognition is remarkably flexible; we are able to rapidly learn and perform many different tasks

Language: English

Citations

7

Task interference as a neuronal basis for the cost of cognitive flexibility
Cheng Xue, Sol K. Markman, Ruoyi Chen

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: March 6, 2024

Abstract Humans and animals have an impressive ability to juggle multiple tasks in a constantly changing environment. This flexibility, however, leads to decreased performance under uncertain task conditions. Here, we combined monkey electrophysiology, human psychophysics, and artificial neural network modeling to investigate the neuronal mechanisms of this performance cost. We developed a behavioural paradigm to measure and influence participants' decision-making and perception in two distinct perceptual tasks. Our data revealed that both humans and monkeys, unlike artificial networks trained for the same tasks, make less accurate decisions when the task is uncertain. We generated a mechanistic hypothesis by comparing a network trained to produce correct choices with another trained to replicate the participants' choices. We hypothesized, and confirmed with further behavioural, physiological, and causal experiments, that the cost of flexibility comes from what we term task interference. Under uncertain conditions, interference between different tasks causes errors because it results in a stronger representation of irrelevant task features and entangled representations of different task features. Our results suggest a tantalizing, general hypothesis: that cognitive capacity limitations, in both health and disease, stem from interference between tasks, stimuli, or memories.

Language: English

Citations

4

A transient high-dimensional geometry affords stable conjunctive subspaces for efficient action selection
Atsushi Kikumoto, Apoorva Bhandari, Kazuhisa Shibata

et al.

Nature Communications, Journal Year: 2024, Volume and Issue: 15(1)

Published: Oct. 1, 2024

Language: English

Citations

4

Recent Advances at the Interface of Neuroscience and Artificial Neural Networks
Yarden Cohen, Tatiana A. Engel, Christopher Langdon

et al.

Journal of Neuroscience, Journal Year: 2022, Volume and Issue: 42(45), P. 8514 - 8523

Published: Nov. 9, 2022

Biological neural networks adapt and learn in diverse behavioral contexts. Artificial neural networks (ANNs) have exploited biological properties to solve complex problems. However, despite their effectiveness for specific tasks, ANNs have yet to realize the flexibility and adaptability of biological cognition. This review highlights recent computational and experimental advances that further our understanding of biological and artificial intelligence. In particular, we discuss critical mechanisms from the cellular, systems, and cognitive neuroscience fields that have contributed to refining the architecture and training algorithms of ANNs. Additionally, we discuss how recent work has used ANNs to understand neuronal correlates of cognition and to process high-throughput data.

Language: English

Citations

18