Barcode activity in a recurrent network model of the hippocampus enables efficient memory binding
Ching Fang, Jack Lindsey, L. F. Abbott

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal year: 2024, Number: unknown

Published: September 13, 2024

Abstract Forming an episodic memory requires binding together disparate elements that co-occur in a single experience. One model of this process is that neurons representing different components of an experience bind to an "index", a subset of neurons unique to that memory. Evidence for such an index has recently been found in chickadees, which use hippocampal memory to store and recall the locations of cached food. The chickadee hippocampus produces sparse, high-dimensional patterns ("barcodes") that uniquely specify each caching event. Unexpectedly, the same neurons that participate in barcodes also exhibit conventional place tuning. It is unknown how barcode activity is generated, and what role it plays in memory formation and retrieval. It is also unclear how an index (e.g. barcodes) could function in a neural population that represents content (e.g. place). Here, we design a biologically plausible model that generates barcodes and uses them to bind experiential content. Our model generates barcodes from place inputs through the chaotic dynamics of a recurrent network and stores them with Hebbian plasticity as attractor states. The model matches experimental observations that indices (barcodes) and content signals (place tuning) are randomly intermixed across neurons. We demonstrate that barcodes reduce interference between correlated experiences. We also show that place tuning is complementary to barcodes, enabling flexible, contextually-appropriate retrieval. Finally, our model is compatible with previous models of the hippocampus as generating a predictive map. Distinct indexing and predictive functions are achieved via adjustment of a global gain. Our results suggest how indexing may resolve fundamental tensions between memory specificity (pattern separation) and flexible recall (pattern completion) in general memory systems.
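
The abstract names two ingredients: barcodes generated by chaotic recurrent dynamics, and Hebbian plasticity that stores them as attractors. The sketch below illustrates those ingredients in a generic Hopfield-style network; the network size, gain, and update rules are illustrative assumptions, not the authors' model.

```python
# Schematic sketch of the ingredients named in the abstract (not the authors' model):
# a recurrent network with strong random weights turns a place-like input into a
# high-dimensional binary pattern (a "barcode"), and a Hebbian outer-product update
# stores that pattern as an attractor retrievable from a degraded cue.
import numpy as np

rng = np.random.default_rng(5)
N, gain, steps = 400, 2.5, 30
J = rng.standard_normal((N, N)) / np.sqrt(N)       # random recurrent weights

def barcode(place_input):
    """Run strongly recurrent (chaotic-regime) dynamics driven by a place input; binarize the result."""
    x = np.tanh(place_input)
    for _ in range(steps):
        x = np.tanh(gain * J @ x + place_input)
    return np.sign(x)

place = rng.standard_normal(N) * 0.5               # stand-in for a place-tuned input
code = barcode(place)

# Hebbian storage of the barcode as an attractor (Hopfield-style outer product).
W = np.outer(code, code) / N
np.fill_diagonal(W, 0.0)

# Retrieval from a degraded cue: flip 20% of the barcode, then run attractor dynamics.
cue = code * np.where(rng.random(N) < 0.2, -1, 1)
x = cue.copy()
for _ in range(10):
    x = np.sign(W @ x)
print("overlap with stored barcode:", float(code @ x) / N)   # ~1.0 if recalled
```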

Language: English

Learning produces an orthogonalized state machine in the hippocampus
Weinan Sun, Johan Winnubst, Maanasa Natrajan

et al.

Nature, Journal year: 2025, Number: unknown

Published: February 12, 2025

Abstract Cognitive maps confer animals with flexible intelligence by representing spatial, temporal and abstract relationships that can be used to shape thought, planning and behaviour. Cognitive maps have been observed in the hippocampus 1 , but their algorithmic form and learning mechanisms remain obscure. Here we used large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region while mice learned to efficiently collect rewards on two subtly different linear tracks in virtual reality. Throughout learning, both animal behaviour and hippocampal neural activity progressed through multiple stages, gradually revealing an improved task representation that mirrored behavioural efficiency. The learning process involved progressive decorrelations of initially similar activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. This decorrelation was driven by individual neurons acquiring task-state-specific responses (that is, 'state cells'). Although various standard artificial neural networks did not naturally capture these dynamics, the clone-structured causal graph, a hidden Markov model variant, uniquely reproduced both the final states and the learning trajectory seen in animals. These cellular and population dynamics constrain the mechanisms underlying cognitive map formation in the hippocampus, pointing to hidden-state inference as a fundamental computational principle, with implications for biological and artificial intelligence.
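
As a concrete reading of "progressive decorrelation" and "orthogonalized representations", the toy calculation below compares population vectors at matched track positions by cosine similarity; the synthetic activity and all numbers are illustrative, not the paper's data or analysis code.

```python
# Toy illustration (not the paper's analysis) of what "orthogonalized representations"
# means: compare population activity vectors at matched positions on two tracks via
# cosine similarity, before and after learning, using synthetic mean-subtracted rate maps.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_positions = 500, 50

shared = rng.standard_normal((n_positions, n_neurons))          # common spatial map
track_a_early = shared + 0.1 * rng.standard_normal((n_positions, n_neurons))
track_b_early = shared + 0.1 * rng.standard_normal((n_positions, n_neurons))
track_a_late = rng.standard_normal((n_positions, n_neurons))    # independent maps after learning
track_b_late = rng.standard_normal((n_positions, n_neurons))

def mean_cosine(x, y):
    """Mean cosine similarity between matched position bins of two maps."""
    num = (x * y).sum(axis=1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1)
    return float(np.mean(num / den))

print("cross-track similarity, early:", round(mean_cosine(track_a_early, track_b_early), 2))  # ~0.99
print("cross-track similarity, late: ", round(mean_cosine(track_a_late, track_b_late), 2))    # ~0.00
```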

Language: English

Cited by

4

Rapid learning of predictive maps with STDP and theta phase precession
Tom M George, William de Cothi, Kimberly Stachenfeld

et al.

eLife, Journal year: 2023, Number: 12

Published: March 16, 2023

The predictive map hypothesis is a promising candidate principle for hippocampal function. A favoured formalisation of this hypothesis, called the successor representation, proposes that each place cell encodes the expected state occupancy of its target location in the near future. This framework is supported by behavioural as well as electrophysiological evidence and has desirable consequences for both the generalisability and efficiency of reinforcement learning algorithms. However, it is unclear how the successor representation might be learnt in the brain. Error-driven temporal difference learning, commonly used to learn successor representations in artificial agents, is not known to be implemented in hippocampal networks. Instead, we demonstrate that spike-timing dependent plasticity (STDP), a form of Hebbian learning, acting on temporally compressed trajectories known as 'theta sweeps', is sufficient to rapidly learn a close approximation to the successor representation. The model is biologically plausible - it uses spiking neurons modulated by theta-band oscillations, diffuse and overlapping place-cell-like representations, and experimentally matched parameters. We show how this model maps onto known aspects of hippocampal circuitry, explains substantial variance in the successor matrix, and consequently gives rise to place cells exhibiting observed successor-representation-related phenomena, including backwards expansion on a 1D track and elongation near walls in 2D. Finally, our model provides insight into the topographical ordering of place field sizes along the dorsal-ventral axis by showing that this is necessary to prevent detrimental mixing of larger fields, which encode longer-timescale predictions, with more fine-grained predictions of spatial location.
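
For orientation, the successor representation mentioned here is the matrix M(s, s') of expected discounted future occupancy of state s' starting from state s. The sketch below shows the standard error-driven temporal-difference update that the abstract contrasts with STDP; the ring-track environment and parameters are illustrative assumptions, not the paper's.

```python
# Minimal tabular sketch of temporal-difference learning of the successor
# representation M(s, s') ≈ E[ sum_t gamma^t * 1(s_t = s') | s_0 = s ].
# This is the error-driven baseline the abstract refers to, not the STDP model itself.
import numpy as np

n_states, gamma, alpha = 10, 0.9, 0.1
M = np.eye(n_states)                      # successor matrix, initialised to identity

def td_update(M, s, s_next):
    """One TD(0) update of row s after observing the transition s -> s_next."""
    onehot = np.eye(n_states)[s]
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] += alpha * td_error
    return M

# Random walk on a ring as a stand-in for a 1D track.
rng = np.random.default_rng(1)
s = 0
for _ in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states
    M = td_update(M, s, s_next)
    s = s_next

print(np.round(M[0], 2))   # expected discounted future occupancy from state 0
```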

Language: English

Cited by

40

Learning predictive cognitive maps with spiking neurons during behavior and replays
Jacopo Bono, Sara Zannone, Victor Pedrosa

et al.

eLife, Journal year: 2023, Number: 12

Published: March 16, 2023

The hippocampus has been proposed to encode environments using a representation that contains predictive information about likely future states, called the successor representation. However, it is not clear how such a representation could be learned in the hippocampal circuit. Here, we propose a plasticity rule that can learn this predictive map of the environment in a spiking neural network. We connect this biologically plausible rule to reinforcement learning, mathematically and numerically showing that it implements the TD-lambda algorithm. By spanning these different levels, we show that our framework naturally encompasses behavioral activity and replays, smoothly moving from rate to temporal coding, and allows learning over behavioral timescales with a plasticity rule acting on a timescale of milliseconds. We discuss how biological parameters such as dwelling times at each location, neuronal firing rates and neuromodulation relate to the delay-discounting parameter of the TD algorithm and how they influence the learned representation. We also find that, in agreement with psychological studies and contrary to reinforcement learning theory, the discount factor decreases hyperbolically with time. Finally, our framework suggests a role for replays, both in aiding learning in novel environments and in finding shortcut trajectories that were not experienced during behavior, in agreement with experimental data.
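
The abstract states that the proposed plasticity rule implements TD-lambda. As a point of reference, here is a minimal tabular TD(lambda) update for the successor representation, with eligibility traces spreading credit to recently visited states; the parameters and toy environment are illustrative, not the paper's.

```python
# Minimal sketch of TD(lambda) for the successor representation: eligibility traces
# let a single transition update many recently visited predecessor states, which is
# conceptually related to how temporally extended activity and replays speed learning.
import numpy as np

n_states, gamma, lam, alpha = 8, 0.95, 0.8, 0.05
M = np.eye(n_states)            # successor matrix
e = np.zeros(n_states)          # eligibility trace over predecessor states

rng = np.random.default_rng(2)
s = 0
for _ in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states
    e = gamma * lam * e
    e[s] += 1.0                                    # mark s as recently visited
    td_error = np.eye(n_states)[s] + gamma * M[s_next] - M[s]
    M += alpha * np.outer(e, td_error)             # credit all eligible predecessors
    s = s_next

print(np.round(M[0], 2))
```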

Language: English

Cited by

24

Sequential predictive learning is a unifying theory for hippocampal representation and replay
Daniel Levenstein, Aleksei Efremov, Roy Henha Eyono

et al.

Published: April 29, 2024

Abstract The mammalian hippocampus contains a cognitive map that represents an animal's position in the environment 1 and generates offline "replay" 2,3 for the purposes of recall 4 , planning 5,6 and forming long-term memories 7 . Recently, it has been found that artificial neural networks trained to predict sensory inputs develop spatially tuned cells 8 , aligning with predictive theories of hippocampal function 9–11 . However, whether predictive learning can also account for the ability to produce replay is unknown. Here, we find that spatially tuned cells, which robustly emerge from all forms of predictive learning, do not guarantee the presence of a map able to generate replay. Offline simulations only emerged in networks that used recurrent connections and head-direction information to predict multi-step observation sequences, which promoted the formation of a continuous attractor reflecting the geometry of the environment. These offline trajectories showed wake-like statistics, autonomously revisited recently experienced locations, and could be directed by a virtual head-direction signal. Further, networks trained to make cyclical predictions of future observation sequences rapidly learned the map and produced sweeping representations of future positions reminiscent of theta sweeps 12 . These results demonstrate how hippocampal-like representation and replay can emerge from a circuit that implements a data-efficient algorithm for sequential predictive learning. Together, this framework provides a unifying theory for hippocampal functions and for hippocampal-inspired approaches to artificial intelligence.
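
A minimal sketch of sequential predictive learning in this sense, assuming PyTorch is available: a recurrent network receives the current observation together with an action/head-direction signal and is trained to predict the next observation. The toy ring-world, network sizes, and training details below are illustrative and not taken from the paper.

```python
# Sketch (PyTorch assumed): train a recurrent network to predict the next observation
# from the current observation plus an action signal. Environment and sizes are toy choices.
import torch
import torch.nn as nn

obs_dim, act_dim, hidden = 16, 2, 64
rnn = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
readout = nn.Linear(hidden, obs_dim)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

def random_walk_batch(batch=32, T=20):
    """Toy 1D ring world: one-hot position observations and +/-1 step actions."""
    pos = torch.randint(0, obs_dim, (batch,))
    obs, acts = [], []
    for _ in range(T + 1):
        obs.append(torch.nn.functional.one_hot(pos, obs_dim).float())
        move = torch.randint(0, 2, (batch,)) * 2 - 1
        acts.append(torch.stack([(move == 1).float(), (move == -1).float()], dim=1))
        pos = (pos + move) % obs_dim
    obs = torch.stack(obs, dim=1)          # (batch, T+1, obs_dim)
    acts = torch.stack(acts, dim=1)        # (batch, T+1, act_dim)
    return torch.cat([obs[:, :-1], acts[:, :-1]], dim=-1), obs[:, 1:]

for _ in range(500):
    x, target = random_walk_batch()
    h, _ = rnn(x)
    loss = ((readout(h) - target) ** 2).mean()   # next-observation prediction error
    opt.zero_grad(); loss.backward(); opt.step()
```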

Language: English

Cited by

11

Interactions between circuit architecture and plasticity in a closed-loop cerebellar system
Hannah L. Payne, Jennifer L Raymond, Mark S. Goldman

et al.

eLife, Journal year: 2024, Number: 13

Published: March 7, 2024

Determining the sites and directions of plasticity underlying changes in neural activity and behavior is critical for understanding mechanisms of learning. Identifying such plasticity from recording data can be challenging due to feedback pathways that impede reasoning about cause and effect. We studied the interactions between feedback, activity, and plasticity in the context of a closed-loop motor learning task for which there is disagreement about the loci of plasticity: the vestibulo-ocular reflex. We constructed a set of circuit models that differed in the strength of their recurrent feedback, from no feedback to very strong feedback. Despite these differences, each model successfully fit a large set of neural and behavioral data. However, the patterns of plasticity predicted by the models fundamentally differed, with the direction of plasticity at a key site changing from depression to potentiation as feedback strength increased. Guided by our analysis, we suggest how these models could be experimentally disambiguated. Our results address a long-standing debate regarding cerebellum-dependent learning, suggesting a reconciliation in which learning-related changes in the synaptic inputs to Purkinje cells are compatible with seemingly oppositely directed changes in Purkinje cell spiking activity. More broadly, these results demonstrate how changes in neural activity over learning can appear to contradict the sign of the underlying plasticity when feedback is present, either internally or through the environment.
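
A toy calculation can convey why feedback obscures the sign of plasticity. Assuming a hypothetical linear description in which Purkinje-cell activity is a weighted sum of a vestibular input and a fed-back behavioral signal, the weight change inferred from the same measured changes in activity and behavior flips sign as the assumed feedback strength grows; the numbers below are arbitrary and this is not the paper's model.

```python
# Toy illustration: suppose Purkinje activity is P = w*v + g*E, where v is a vestibular
# input, E is the behavioral output fed back with strength g, and w is the plastic weight
# of interest. If learning changes P by dP while behavior changes by dE, the implied weight
# change is dw = (dP - g*dE) / v, whose sign depends on the assumed feedback strength g.
dP, dE, v = 0.2, 0.5, 1.0      # hypothetical measured changes (arbitrary units)

for g in [0.0, 0.2, 0.8]:      # weak to strong feedback
    dw = (dP - g * dE) / v
    direction = "potentiation" if dw > 0 else "depression"
    print(f"feedback g={g:.1f}: inferred dw={dw:+.2f} ({direction})")
```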

Language: English

Cited by

7

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Weinan Sun, Johan Winnubst, Maanasa Natrajan

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Number: unknown

Published: August 6, 2023

ABSTRACT Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards in two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing an improved task representation that mirrored behavioral efficiency. The learning process led to progressive decorrelations of initially similar activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent network trained using Hebbian plasticity can capture core aspects of the learning dynamics and the representational structure of the neural activity. In contrast, gradient-based sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers do not naturally produce such representations. We further demonstrate that the learned representations exhibited adaptive generalization in novel settings, reflecting deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. This work thus charts a course toward a deeper understanding of biological intelligence and offers insights for developing more robust artificial intelligence.
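
One of the model classes mentioned, a recurrent network trained with Hebbian plasticity, can be illustrated with a generic asymmetric-Hebbian sequence network: transitions between task states are written into the recurrent weights, and cued dynamics then step through the learned sequence like a simple state machine. This is a schematic sketch, not the paper's specific model.

```python
# Generic sketch: asymmetric Hebbian updates store state-to-state transitions in the
# recurrent weights; running the thresholded dynamics from a cued state then retrieves
# the learned sequence of task states in order.
import numpy as np

n_states, dim = 6, 100
rng = np.random.default_rng(3)
patterns = rng.choice([-1.0, 1.0], size=(n_states, dim))   # one random pattern per task state

# Asymmetric Hebbian learning of the sequence s0 -> s1 -> ... -> s5
W = np.zeros((dim, dim))
for i in range(n_states - 1):
    W += np.outer(patterns[i + 1], patterns[i]) / dim

# Recall: cue the first state and iterate the sign-thresholded dynamics.
x = patterns[0].copy()
for t in range(n_states - 1):
    x = np.sign(W @ x)
    overlaps = patterns @ x / dim          # similarity to each stored state
    print(f"step {t + 1}: closest state = {int(np.argmax(overlaps))}")
```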

Language: English

Cited by

16

Predictive sequence learning in the hippocampal formation
Yusi Chen, Huanqiu Zhang, Mia Cameron

et al.

Neuron, Journal year: 2024, Number: 112(15), pp. 2645 - 2658.e4

Published: June 24, 2024

Language: English

Cited by

6

Local prediction-learning in high-dimensional spaces enables neural networks to plan
Christoph Stöckl, Yukun Yang, Wolfgang Maass

et al.

Nature Communications, Journal year: 2024, Number: 15(1)

Published: March 15, 2024

Planning and problem solving are cornerstones of higher brain function. But we do not know how the brain does that. We show that learning a suitable cognitive map of the problem space suffices. Furthermore, this learning can be reduced to learning to predict the next observation through local synaptic plasticity. Importantly, the resulting cognitive map encodes relations between actions and observations, and its emergent high-dimensional geometry provides a sense of direction for reaching distant goals. This quasi-Euclidean sense of direction provides a simple heuristic for online planning that works almost as well as the best offline planning algorithms from AI. If the problem space is a physical space, the method automatically extracts structural regularities from the sequence of observations it receives, so that it can generalize to unseen parts. It also speeds up learning of navigation in 2D mazes and of locomotion with complex actuator systems, such as legged bodies. The cognitive map learner that we propose does not require a teacher, similar to self-attention networks (Transformers). But in contrast to Transformers, it does not require backpropagation of errors or very large datasets for learning. Hence it provides a blue-print for future energy-efficient neuromorphic hardware that acquires advanced cognitive capabilities through autonomous on-chip learning.
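
The sketch below is loosely inspired by the abstract's recipe and is not the authors' algorithm: a next-observation predictor is learned for each action with a purely local delta rule, and the learned model is then turned into a "sense of direction" by scoring actions by the discounted reachability of the goal. The grid world, learning rate, and discount are assumptions for illustration.

```python
# Learn a map of a small grid world purely by predicting the next observation with a
# local delta rule, then use the learned predictive model as a heuristic for planning.
import numpy as np

side = 4
n_states, n_actions = side * side, 4
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # up, down, left, right

def step(s, a):
    r, c = divmod(s, side)
    dr, dc = moves[a]
    r2, c2 = min(max(r + dr, 0), side - 1), min(max(c + dc, 0), side - 1)
    return r2 * side + c2

# Local prediction learning: for each action, learn T_a so that T_a @ onehot(s)
# approximates onehot(next state). The delta rule uses only locally available signals.
eye = np.eye(n_states)
T = np.zeros((n_actions, n_states, n_states))
rng = np.random.default_rng(4)
s, eta = 0, 0.5
for _ in range(20000):
    a = rng.integers(n_actions)
    s_next = step(s, a)
    T[a, :, s] += eta * (eye[s_next] - T[a, :, s])   # delta rule on column s
    s = s_next

# A "sense of direction": discounted future occupancy under the learned model,
# analogous to a successor representation built from the learned predictor.
gamma = 0.8
T_bar = T.mean(axis=0)
R = np.linalg.inv(np.eye(n_states) - gamma * T_bar)

# Greedy online planning: pick the action whose predicted next state has the
# highest discounted reachability of the goal. In this toy it walks to the goal.
s, goal, path = 0, n_states - 1, [0]
for _ in range(20):
    scores = [R[goal] @ (T[a] @ eye[s]) for a in range(n_actions)]
    s = step(s, int(np.argmax(scores)))
    path.append(s)
    if s == goal:
        break
print("greedy path to goal:", path)
```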

Language: English

Cited by

5

Abstract cognitive maps of social network structure aid adaptive inference
Jae-Young Son, Apoorva Bhandari, Oriel FeldmanHall

et al.

Proceedings of the National Academy of Sciences, Journal year: 2023, Number: 120(47)

Published: November 14, 2023

Social navigation, such as anticipating where gossip may spread or identifying which acquaintances can help land a job, relies on knowing how people are connected within their larger social communities. Problematically, for most social networks, the space of possible relationships is too vast to observe and memorize. Indeed, people's knowledge of these relations is well known to be biased and error-prone. Here, we reveal that these representations reflect a fundamental computation that abstracts over individual relationships to enable principled inferences about unseen relationships. We propose a theory of social network representation that explains how people learn inferential cognitive maps from direct observation, what kinds of knowledge structures emerge as a consequence, and why it can be beneficial to encode systematic biases into those maps. Leveraging simulations, laboratory experiments, and "field data" from a real-world network, we find that people abstract over direct observations (e.g., friends) to infer multistep relations (e.g., friends-of-friends). This abstraction mechanism enables people to discover and represent complex network structure, affording adaptive inference across a variety of social contexts, including friendship, trust, and advice-giving. Moreover, this mechanism unifies otherwise puzzling empirical patterns of social behavior. Our proposal generalizes the computational problem of social network inference, presenting a powerful framework for understanding the workings of a predictive mind operating in a complex social world.
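
The core computation described, abstracting over directly observed ties to infer multistep relations, can be sketched with a discounted sum over walks on an adjacency matrix; the tiny network, names, and weighting parameter below are invented for illustration and are not the paper's model.

```python
# Toy sketch of multistep abstraction: directly observed friendships form an adjacency
# matrix, and a discounted sum over walks of increasing length yields a
# friends-of-friends-of-... relatedness score usable for inference about unseen ties.
import numpy as np

names = ["Ana", "Ben", "Cam", "Dee", "Eli"]
A = np.array([            # 1 = directly observed friendship
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

alpha = 0.3               # weight on longer paths (illustrative)
relatedness = np.linalg.inv(np.eye(len(names)) - alpha * A) - np.eye(len(names))

# Ana and Cam were never observed together, yet the abstraction assigns them a
# nonzero relatedness because they share a mutual friend (Ben).
i, j = names.index("Ana"), names.index("Cam")
print(f"inferred relatedness {names[i]}-{names[j]}: {relatedness[i, j]:.3f}")
```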

Language: English

Cited by

11

Endotaxis: A neuromorphic algorithm for mapping, goal-learning, navigation, and patrolling
Tony Zhang, Matthew Rosenberg, Zeyu Jing

et al.

eLife, Journal year: 2024, Number: 12

Published: February 29, 2024

An animal entering a new environment typically faces three challenges: explore the space for resources, memorize their locations, and navigate towards those targets as needed. Here we propose a neural algorithm that can solve all these problems and operates reliably in diverse and complex environments. At its core, the mechanism makes use of a behavioral module common to motile animals, namely the ability to follow an odor to its source. We show how the brain can learn to generate internal "virtual odors" that guide the animal to any location of interest. This endotaxis algorithm can be implemented with a simple 3-layer circuit using only biologically realistic structures and learning rules. Several components of this scheme are found in brains from insects to humans. Nature may have evolved a general mechanism for search and navigation on the ancient backbone of chemotaxis.
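
A schematic sketch of the endotaxis idea as summarized above (not the authors' circuit model): exploration yields a graph of adjacent places, a "virtual odor" is diffused from the goal over that graph, and navigation simply climbs the odor gradient, like chemotaxis toward a real odor. The toy map and diffusion constant are illustrative assumptions.

```python
# Schematic endotaxis sketch: diffuse a goal signal over a learned adjacency graph and
# navigate by always moving to the neighbor with the highest "virtual odor" value.
import numpy as np

# Hypothetical small environment: nodes are places, edges are learned adjacencies.
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (5, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

beta = 0.3                                         # diffusion strength (illustrative)
odor = np.linalg.inv(np.eye(n) - beta * A)         # odor[:, g] = virtual odor field for goal g

def navigate(start, goal, max_steps=10):
    """Follow the virtual odor gradient: always step to the smelliest neighbor."""
    path, here = [start], start
    field = odor[:, goal]
    for _ in range(max_steps):
        neighbors = np.flatnonzero(A[here])
        here = int(neighbors[np.argmax(field[neighbors])])
        path.append(here)
        if here == goal:
            break
    return path

print(navigate(start=0, goal=3))                   # e.g. [0, 1, 2, 3]
```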

Language: English

Cited by

4