Long- and short-term history effects in a spiking network model of statistical learning
Amadeus Maes, Mauricio Barahona, Claudia Clopath

et al.

Scientific Reports, Journal Year: 2023, Volume and Issue: 13(1)

Published: Aug. 9, 2023

Abstract: The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents this structure. One such theory states that neural activity spontaneously samples from probability distributions, so that the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be a building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can be both learned and recollected. We present a model based upon learning the inverse cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge as a consequence of ongoing learning.
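The sampling principle the abstract describes (learning the inverse cumulative distribution function so that high-probability stimuli are visited more often) can be illustrated with a minimal inverse-transform-sampling sketch. This is not the paper's spiking-network implementation; the stimulus set and probabilities below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete stimulus distribution (probabilities sum to 1).
stimuli = np.array([0, 1, 2, 3])
probs = np.array([0.1, 0.2, 0.3, 0.4])

# The cumulative distribution function of the stimuli.
cdf = np.cumsum(probs)

def sample_stimulus(u):
    """Inverse CDF: map a uniform draw u in [0, 1) to a stimulus.

    Returns the first stimulus whose cumulative probability exceeds u.
    """
    return stimuli[np.searchsorted(cdf, u, side="right")]

# Sampling many uniform draws reproduces the stimulus distribution:
# high-probability stimuli are visited more often, mirroring a network
# that spends more time in states encoding likely stimuli.
draws = np.array([sample_stimulus(u) for u in rng.random(100_000)])
freqs = np.bincount(draws, minlength=4) / draws.size
```

In the model, the uniform input is supplied by spontaneous activity and the inverse CDF is learned in the synaptic weights, but the mapping from uniform noise to a structured distribution is the same idea.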

Language: English

Amplified cortical neural responses as animals learn to use novel activity patterns
Bradley Akitake, Hannah M. Douglas, Paul K. LaFosse

et al.

Current Biology, Journal Year: 2023, Volume and Issue: 33(11), P. 2163 - 2174.e4

Published: May 5, 2023

Language: English

Citations

8

Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network

Ikhwan Jeon, Taegon Kim

Frontiers in Computational Neuroscience, Journal Year: 2023, Volume and Issue: 17

Published: June 28, 2023

Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and circuits into AI. In this review, we describe recent attempts to build biologically plausible neural networks by following neuroscientifically similar strategies of neural network optimization, or by implanting the outcomes of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve, and classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches and to offer a map to help navigate the gap between neuroscience and AI engineering.

Language: English

Citations

8

Microstimulation of sensory cortex engages natural sensory representations
Ravi Pancholi, Andrew Sun-Yan, Simon Peron

et al.

Current Biology, Journal Year: 2023, Volume and Issue: 33(9), P. 1765 - 1777.e5

Published: May 1, 2023

Language: English

Citations

7

Mapping memories: pulse-chase labeling reveals AMPA receptor dynamics during memory formation
Doyeon Kim, Pojeong Park, Xiuyuan Li

et al.

bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown

Published: May 26, 2023

A tool to map changes in synaptic strength during a defined time window could provide powerful insights into the mechanisms governing learning and memory. We developed a technique, Extracellular Protein Surface Labeling in Neurons (EPSILON), to map α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) insertion in vivo by pulse-chase labeling of surface AMPARs with membrane-impermeable dyes. This approach allows for single-synapse-resolution maps of plasticity in genetically targeted neurons during memory formation. We investigated the relationship between synapse-level and cell-level encodings of memory by mapping synaptic plasticity and cFos expression in hippocampal CA1 pyramidal cells upon contextual fear conditioning (CFC). We observed a strong correlation between plasticity and cFos expression, suggesting a synaptic mechanism for the association of cFos expression with memory engrams. The EPSILON technique is a useful tool for mapping synaptic plasticity and may be extended to investigate the trafficking of other transmembrane proteins.

Language: English

Citations

7
