Nature Neuroscience, Journal Year: 2024, Volume and Issue: 27(9), P. 1656 - 1667
Published: July 29, 2024
Language: English
Cell, Journal Year: 2020, Volume and Issue: 183(5), P. 1249 - 1263.e23
Published: Nov. 1, 2020
The hippocampal-entorhinal system is important for spatial and relational memory tasks. We formally link these domains, provide a mechanistic understanding of the hippocampal role in generalization, and offer unifying principles underlying many entorhinal and hippocampal cell types. We propose that medial entorhinal cells form a basis describing structural knowledge, and that hippocampal cells link this basis with sensory representations. Adopting these principles, we introduce the Tolman-Eichenbaum machine (TEM). After learning, TEM entorhinal cells display diverse properties resembling apparently bespoke spatial responses, such as grid, band, border, and object-vector cells. TEM hippocampal cells include place and landmark cells that remap between environments. Crucially, TEM also aligns with empirically recorded representations in complex non-spatial tasks. TEM generates predictions that hippocampal remapping is not random as previously believed; rather, structural knowledge is preserved across environments. We confirm this structural transfer over remapping in simultaneously recorded place and grid cells.
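
A toy sketch of the general idea this abstract describes is given below in Python: a structural code reused across environments, environment-specific sensory observations, and a conjunctive (hippocampal-like) memory binding the two. The dimensions, the outer-product Hebbian memory, and names such as bind_environment are illustrative assumptions and not the actual TEM architecture.

import numpy as np

rng = np.random.default_rng(0)
N_LOC, D_G, D_X = 8, 64, 16     # locations on a toy track, structural / sensory dims

# A shared structural code (one vector per location), reused across environments,
# standing in for the abstract's "basis describing structural knowledge".
G = rng.standard_normal((N_LOC, D_G))

def bind_environment(G):
    """Sample environment-specific sensory observations and bind them to the
    shared structural code with an outer-product (Hebbian-style) memory."""
    X = rng.standard_normal((N_LOC, D_X))                  # sensory code per location
    M = sum(np.outer(x, g) for g, x in zip(G, X))          # conjunctive memory
    return M, X

def recall(M, g):
    """Query the memory with a structural code to predict the local observation."""
    return M @ g

M, X = bind_environment(G)
pred = recall(M, G[3])
# Because the random structural codes are nearly orthogonal, the prediction
# should correlate most strongly with the observation stored at location 3.
corrs = [float(np.corrcoef(pred, x)[0, 1]) for x in X]
print(int(np.argmax(corrs)))   # typically 3
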
Language: English
Citations: 511
Trends in Cognitive Sciences, Journal Year: 2020, Volume and Issue: 25(1), P. 37 - 54
Published: Nov. 26, 2020
Language: English
Citations: 222
Neuron, Journal Year: 2020, Volume and Issue: 107(4), P. 603 - 616
Published: July 13, 2020
Language: English
Citations: 205
Nature Neuroscience, Journal Year: 2022, Volume and Issue: 25(10), P. 1257 - 1272
Published: Sept. 26, 2022
Language: English
Citations: 140
Neuron, Journal Year: 2022, Volume and Issue: 110(3), P. 394 - 422
Published: Jan. 14, 2022
Language: English
Citations: 119
Journal of Artificial Intelligence Research, Journal Year: 2022, Volume and Issue: 75, P. 1401 - 1476
Published: Dec. 22, 2022
In this article, we aim to provide a literature review of different formulations and approaches to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We begin by discussing our perspective on why RL is a natural fit for studying continual learning. We then provide a taxonomy of different continual RL formulations by mathematically characterizing two key properties of non-stationarity, namely, the scope and driver of non-stationarity. This offers a unified view of various formulations. Next, we review and present a taxonomy of approaches. We go on to discuss evaluation of continual RL agents, providing an overview of benchmarks used in the literature and important metrics for understanding agent performance. Finally, we highlight open problems and challenges in bridging the gap between the current state of continual RL and findings in neuroscience. While the study of continual RL is still in its early days, it holds promise for developing better incremental learners that can function in increasingly realistic applications where non-stationarity plays a vital role, including applications in fields such as healthcare, education, logistics, and robotics.
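
As a minimal illustration of the kind of non-stationarity the review formalizes (not any specific formulation from the paper), the Python sketch below contrasts a recency-weighted learner with sample averaging on a drifting bandit; the environment, agent, and all parameters are assumptions chosen for illustration.

import random

def reward_probs(t, n_arms=3, period=500):
    """A drifting reward function: the best arm rotates every `period` steps.
    In the review's terms, the scope of non-stationarity here is the reward
    function and its driver is the exogenous passage of time."""
    best = (t // period) % n_arms
    return [0.8 if a == best else 0.2 for a in range(n_arms)]

def run(step_size=None, steps=3000, eps=0.1, n_arms=3, seed=0):
    """Epsilon-greedy value tracking on the drifting bandit. A constant step
    size keeps adapting; sample averages (step_size=None) adapt ever more slowly."""
    rng = random.Random(seed)
    q, n, total = [0.0] * n_arms, [0] * n_arms, 0.0
    for t in range(steps):
        a = rng.randrange(n_arms) if rng.random() < eps else max(range(n_arms), key=q.__getitem__)
        r = 1.0 if rng.random() < reward_probs(t)[a] else 0.0
        n[a] += 1
        alpha = step_size if step_size is not None else 1.0 / n[a]
        q[a] += alpha * (r - q[a])           # incremental value update
        total += r
    return total / steps

print("sample averages:", run(step_size=None))   # tends to lag behind the drift
print("constant step  :", run(step_size=0.1))    # tends to track the drifting reward
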
Language: English
Citations: 117
Nature Reviews Neuroscience, Journal Year: 2021, Volume and Issue: 22(8), P. 472 - 487
Published: July 6, 2021
Language: English
Citations: 112
Nature Reviews Neuroscience, Journal Year: 2022, Volume and Issue: 23(7), P. 428 - 438
Published: April 25, 2022
Language: English
Citations: 75
Current Biology, Journal Year: 2022, Volume and Issue: 32(17), P. 3676 - 3689.e5
Published: July 20, 2022
We tested humans, rats, and RL agents on a novel modular maze. Humans and rats were remarkably similar in their choice of trajectories. Both species were most similar to agents utilizing the successor representation (SR), and both also displayed features of model-based planning in early trials.
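
The successor representation (SR) referenced above has a compact closed form under a fixed policy. The Python sketch below illustrates that generic computation on a hypothetical four-state corridor; it is not the modular maze or the agents used in the study.

import numpy as np

def successor_representation(T, gamma=0.95):
    """Closed-form SR under a fixed policy: M[s, s'] is the expected discounted
    count of future visits to s' starting from s, i.e. M = (I - gamma*T)^-1."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Hypothetical 4-state corridor with a random-walk policy (rows sum to 1).
T = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
M = successor_representation(T)

# Values follow directly from the SR and a reward vector: V = M @ r.
# Re-valuing a new goal only swaps r; a change in layout only requires
# relearning T (and hence M), which is what makes SR agents flexible yet
# cheaper than full model-based planning.
r = np.array([0.0, 0.0, 0.0, 1.0])   # reward placed in the last state
V = M @ r
print(np.round(V, 2))
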
Language: English
Citations: 71
PLoS Computational Biology, Journal Year: 2024, Volume and Issue: 20(2), P. e1011801 - e1011801
Published: Feb. 8, 2024
We introduce dynamic predictive coding, a hierarchical model of spatiotemporal prediction and sequence learning in the neocortex. The model assumes that higher cortical levels modulate the temporal dynamics of lower levels, correcting their predictions using prediction errors. As a result, lower levels form representations that encode sequences at shorter timescales (e.g., a single step), while higher levels encode sequences at longer timescales (e.g., an entire sequence). We tested this model using a two-level neural network, where top-down modulation creates low-dimensional combinations of a set of learned temporal dynamics to explain input sequences. When trained on natural videos, lower-level neurons developed space-time receptive fields similar to those of simple cells in the primary visual cortex, while higher-level responses spanned longer timescales, mimicking temporal response hierarchies in the cortex. Additionally, the network's sequence representation exhibited both predictive and postdictive effects resembling those observed in human motion processing (e.g., the flash-lag illusion). When coupled with an associative memory emulating the role of the hippocampus, the model allowed episodic memories to be stored and retrieved, supporting cue-triggered recall of input activity. Extending the model to three levels yielded progressively more abstract temporal representations along the hierarchy. Taken together, our results suggest that cortical sequence processing can be interpreted as dynamic predictive coding based on a hierarchical spatiotemporal generative model of the world.
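
A minimal Python sketch of the two-level inference this abstract describes (a higher-level state selecting a low-dimensional combination of learned dynamics for the lower level, with prediction errors correcting both states) follows. The dimensions, learning rates, and the omission of weight learning are assumptions for illustration; this is not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
D_IN, D_R1, D_R2, K = 10, 6, 3, 3          # input dim, lower/higher state dims, # dynamics

W = 0.1 * rng.standard_normal((D_IN, D_R1))      # lower-level state -> input prediction
V = 0.1 * rng.standard_normal((K, D_R1, D_R1))   # library of temporal dynamics
H = 0.1 * rng.standard_normal((K, D_R2))         # higher state -> mixture weights

def infer_step(x_next, r1, r2, lr_fast=0.1, lr_slow=0.01):
    """One inference step: the higher state r2 picks a low-dimensional combination
    of the dynamics in V, the lower state r1 is rolled forward with it, and both
    states are nudged down the gradient of the resulting input prediction error."""
    w = H @ r2                                 # mixture weights over the dynamics library
    A = np.tensordot(w, V, axes=1)             # effective lower-level transition matrix
    r1_pred = A @ r1                           # top-down temporal prediction
    err = x_next - W @ r1_pred                 # input prediction error
    # Fast correction of the lower state (short-timescale representation).
    r1_new = r1_pred + lr_fast * (W.T @ err)
    # Slow correction of the higher state (long-timescale representation),
    # via the chain rule through the mixture weights.
    g_w = np.array([(W.T @ err) @ (Vk @ r1) for Vk in V])
    r2_new = r2 + lr_slow * (H.T @ g_w)
    return r1_new, r2_new, float(err @ err)

# Drive the sketch with a toy repeating input; with learned weights (learning of
# W, V, H is omitted here) the prediction errors would shrink over time.
seq = [np.sin(np.arange(D_IN) + t) for t in range(20)]
r1, r2 = np.zeros(D_R1), 0.1 * rng.standard_normal(D_R2)
for x in seq:
    r1, r2, e = infer_step(x, r1, r2)
print("final squared error:", e)
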
Language: English
Citations: 18