A Review of Neuroscience-Inspired Machine Learning
Alexander G. Ororbia, Ankur Mali, Adam Kohan, et al.

Published: March 20, 2024

One major criticism of deep learning centers around the biological implausibility of the credit assignment schema used for learning -- backpropagation of errors. This implausibility translates into practical limitations, spanning scientific fields, including incompatibility with hardware and non-differentiable implementations, thus leading to expensive energy requirements. In contrast, biologically plausible credit assignment is compatible with practically any learning condition and is energy-efficient. As a result, it accommodates hardware and scientific modeling, e.g. learning with physical systems and non-differentiable behavior. Furthermore, it can lead to the development of real-time, adaptive neuromorphic processing systems. In addressing this problem, an interdisciplinary branch of artificial intelligence research that lies at the intersection of neuroscience, cognitive science, and machine learning has emerged. In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in neural networks, discussing the solutions they provide for different scientific fields as well as their advantages on CPUs, GPUs, and novel implementations in hardware. We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
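To make the contrast concrete, the following is a minimal sketch (our illustration, not taken from the survey) of the kind of local rule such work studies: an Oja-style Hebbian update in which each weight change depends only on its own pre- and postsynaptic activity, with no backpropagated error signal. All sizes and constants here are arbitrary.

import numpy as np

# Hypothetical single layer trained with a purely local, Hebbian-style
# rule. Unlike backpropagation, the update for W uses only quantities
# available at the synapse itself (presynaptic x, postsynaptic y);
# no global error signal is propagated backward through a network.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 20))   # 20 inputs -> 10 outputs
eta = 0.01                                 # learning rate

for _ in range(100):
    x = rng.normal(size=20)                # presynaptic activity
    y = np.tanh(W @ x)                     # postsynaptic activity
    # Oja's rule: Hebbian term plus a local decay that keeps weights bounded
    W += eta * (np.outer(y, x) - (y**2)[:, None] * W)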

Language: English

A hierarchy of linguistic predictions during natural language comprehension
Micha Heilbron, Kristijan Armeni, Jan‐Mathijs Schoffelen, et al.

Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32)

Published: Aug. 3, 2022

Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
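As an illustration of the model-based quantification step, a sketch of computing per-word surprisal under GPT-2 follows. This assumes the standard Hugging Face transformers API and is not the authors' released pipeline, which additionally involves audiobook alignment and MEG/fMRI regression.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The cat sat on the mat"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                     # (1, seq_len, vocab)

# Log-probability of each token given its left context
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
n = ids.shape[1] - 1
token_lp = log_probs[torch.arange(n), ids[0, 1:]]
surprisal = -token_lp                              # in nats, one per token

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:]), surprisal):
    print(f"{tok:>10s}  surprisal = {s.item():.2f}")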

Language: English

Citations: 230

Inferring neural activity before plasticity as a foundation for learning beyond backpropagation
Yuhang Song, Beren Millidge, Tommaso Salvatori, et al.

Nature Neuroscience, Journal Year: 2024, Volume and Issue: 27(2), P. 348 - 358

Published: Jan. 3, 2024

Abstract: For both humans and machines, the essence of learning is to pinpoint which components in its information processing pipeline are responsible for an error in its output, a challenge that is known as ‘credit assignment’. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle of credit assignment called ‘prospective configuration’. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms and (3) reproduces surprising patterns of neural activity and behavior observed in diverse human and rat learning experiments.
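A minimal sketch of the two-phase idea, written as a small predictive coding network (our simplification under stated assumptions; the paper's models and analysis are more general):

import numpy as np

# Phase 1: clamp the output to the target and relax hidden activity x1 to
# minimize the energy E = 0.5||x1 - W1 x0||^2 + 0.5||x2 - W2 x1||^2,
# i.e. infer the activity that learning should produce.
# Phase 2: update weights toward those settled ("prospective") activities.
# All sizes and constants here are arbitrary, for illustration only.
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(8, 4))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(2, 8))   # hidden -> output

def learn(x0, target, n_relax=50, alpha=0.1, eta=0.01):
    global W1, W2
    x1 = W1 @ x0                          # initial feedforward activity
    x2 = target                           # output clamped to the target
    for _ in range(n_relax):              # phase 1: infer activity
        e1 = x1 - W1 @ x0                 # hidden-layer prediction error
        e2 = x2 - W2 @ x1                 # output-layer prediction error
        x1 += alpha * (-e1 + W2.T @ e2)   # gradient descent on the energy
    W1 += eta * np.outer(x1 - W1 @ x0, x0)   # phase 2: consolidate
    W2 += eta * np.outer(x2 - W2 @ x1, x1)

learn(rng.normal(size=4), np.array([1.0, 0.0]))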

Language: English

Citations: 31

A predictive coding model of the N400

Samer Nour Eddine, Trevor Brothers, Lin Wang, et al.

Cognition, Journal Year: 2024, Volume and Issue: 246, P. 105755 - 105755

Published: Feb. 29, 2024

Language: English

Citations: 18

Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research
Tom Macpherson, Anne K. Churchland, Terry Sejnowski, et al.

Neural Networks, Journal Year: 2021, Volume and Issue: 144, P. 603 - 613

Published: Sept. 28, 2021

Neuroscience and artificial intelligence (AI) share a long history of collaboration. Advances in neuroscience, alongside huge leaps in computer processing power over the last few decades, have given rise to a new generation of in silico neural networks inspired by the architecture of the brain. These AI systems are now capable of many of the advanced perceptual and cognitive abilities of biological systems, including object recognition and decision making. Moreover, AI is increasingly being employed as a tool for neuroscience research and is transforming our understanding of brain functions. In particular, deep learning has been used to model how convolutional layers and recurrent connections in the brain's cerebral cortex control important functions, including visual processing, memory, and motor control. Excitingly, the use of neuroscience-inspired AI also holds great promise for understanding how changes in brain networks result in psychopathologies, and could even be utilized in treatment regimes. Here we discuss recent advancements in four areas in which the relationship between neuroscience and AI has led to major advancements in the field: (1) AI models of working memory, (2) AI models of visual processing, (3) AI analysis of big neuroscience datasets, and (4) computational psychiatry.

Language: English

Citations: 104

How particular is the physics of the free energy principle?
Miguel Aguilera, Beren Millidge, Alexander Tschantz, et al.

Physics of Life Reviews, Journal Year: 2021, Volume and Issue: 40, P. 24 - 50

Published: Nov. 23, 2021

The free energy principle (FEP) states that any dynamical system can be interpreted as performing Bayesian inference upon its surrounding environment. Although, in theory, the FEP applies to a wide variety of systems, there has been almost no direct exploration or demonstration of the principle in concrete systems. In this work, we examine in depth the assumptions required to derive the FEP in the simplest possible set of systems: weakly-coupled non-equilibrium linear stochastic systems. Specifically, we explore (i) how general the requirements imposed on the statistical structure of a system are and (ii) how informative the FEP is about the behaviour of such systems. We discover that two requirements of the FEP, the Markov blanket condition (i.e. a statistical boundary precluding direct coupling between internal and external states) and stringent restrictions on its solenoidal flows (i.e. tendencies driving the system out of equilibrium), are only valid for a very narrow space of parameters. Suitable systems require an absence of perception-action asymmetries that is highly unusual for living systems interacting with an environment. More importantly, we observe that a mathematically central step of the argument, connecting the behaviour of a system to variational inference, relies on an implicit equivalence between the dynamics of the average states and the average of the dynamics of those states. This equivalence does not hold in general even for linear stochastic systems, since it requires an effective decoupling of the system's state from its history of interactions. These observations are critical for evaluating the generality and applicability of the FEP and indicate the existence of significant problems of the theory in its current form. These issues make the FEP, as it stands, not straightforwardly applicable to the simple linear systems studied here, and suggest that more development is needed before the theory could be applied to the kind of complex systems that describe living or cognitive processes.
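For orientation, the class of systems and the blanket condition at issue can be restated compactly. The notation below (Γ, Λ, Π) is our own shorthand for a sketch of the standard construction, not necessarily the paper's exact symbols:

% Linear stochastic (Langevin) system over states x = (eta, s, a, mu),
% i.e. external, sensory, active and internal partitions:
\dot{x} = -\Gamma x + \omega, \qquad \omega \sim \mathcal{N}(0, 2\Lambda)

% Gaussian nonequilibrium steady-state density with precision matrix Pi:
p(x) \propto \exp\!\left(-\tfrac{1}{2}\, x^{\top} \Pi\, x\right)

% Markov blanket condition: internal (mu) and external (eta) states are
% conditionally independent given the blanket b = (s, a), which for a
% Gaussian steady state is a sparsity constraint on the precision matrix:
\Pi_{\mu\eta} = \Pi_{\eta\mu} = 0

The paper's point is that this constraint, together with the restrictions on the solenoidal part of the flow, holds only on a narrow subset of (Γ, Λ) parameters.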

Language: English

Citations: 71

Prediction-error neurons in circuits with multiple neuron types: Formation, refinement, and functional implications
Loreen Hertäg, Claudia Clopath

Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(13)

Published: March 23, 2022

Significance: An influential idea in neuroscience is that neural circuits do not only passively process sensory information but rather actively compare them with predictions thereof. A core element of this comparison is prediction-error neurons, the activity of which changes upon mismatches between actual and predicted stimuli. While it has been shown that these neurons come in different variants, it is largely unresolved how they are simultaneously formed and shaped by highly interconnected neural networks. By using a computational model, we study the circuit-level mechanisms that give rise to different variants of prediction-error neurons. Our results shed light on the formation, refinement, and robustness of prediction-error circuits, an important step toward a better understanding of predictive processing.
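As a pocket illustration of the two basic variants (ours, not the paper's full multi-neuron-type circuit): with rectified, non-negative firing rates a single neuron cannot signal both signs of a mismatch, so separate positive and negative prediction-error neurons are needed.

import numpy as np

def pe_rates(actual, predicted):
    ppe = np.maximum(0.0, actual - predicted)   # positive PE: more than predicted
    npe = np.maximum(0.0, predicted - actual)   # negative PE: less than predicted
    return ppe, npe

print(pe_rates(1.0, 0.4))   # stimulus stronger than predicted -> pPE fires
print(pe_rates(0.4, 1.0))   # stimulus weaker than predicted  -> nPE fires
print(pe_rates(0.7, 0.7))   # fully predicted -> both silent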

Language: English

Citations: 47

Hybrid predictive coding: Inferring, fast and slow

Alexander Tschantz, Beren Millidge, Anil K. Seth, et al.

PLoS Computational Biology, Journal Year: 2023, Volume and Issue: 19(8), P. e1011280 - e1011280

Published: Aug. 2, 2023

Predictive coding is an influential model of cortical neural activity. It proposes that perceptual beliefs are furnished by sequentially minimising “prediction errors”, the differences between predicted and observed data. Implicit in this proposal is the idea that successful perception requires multiple cycles of recurrent processing. This is at odds with evidence that several aspects of visual perception, including complex forms of object recognition, arise from an initial “feedforward sweep” that occurs on fast timescales which preclude substantial recurrent activity. Here, we propose that the feedforward sweep can be understood as performing amortized inference (applying a learned function that maps directly from data to beliefs) and recurrent processing as performing iterative inference (sequentially updating neural activity in order to improve the accuracy of beliefs). We propose a hybrid predictive coding network that combines both in a principled manner by describing them in terms of a dual optimization of a single objective function. We show that the resulting scheme can be implemented in a biologically plausible architecture that approximates Bayesian inference utilising local Hebbian update rules. We demonstrate that our model combines the benefits of amortized and iterative inference, obtaining rapid and computationally cheap inference for familiar data while maintaining the context-sensitivity, precision, and sample efficiency of iterative schemes. Moreover, we show how the model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using a minimum of computational expense. Hybrid predictive coding offers a new perspective on the functional relevance of feedforward and recurrent activity during perception and novel insights into distinct aspects of visual phenomenology.
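A hedged sketch of the hybrid idea: an amortized model (a learned feedforward mapping from data to beliefs) supplies the initial guess, which iterative inference then refines by descending the prediction error of a generative model. The linear maps and sizes below are arbitrary stand-ins, not the paper's networks.

import numpy as np

rng = np.random.default_rng(2)
W_gen = rng.normal(scale=0.5, size=(6, 3))       # generative: belief -> data
W_amort = np.linalg.pinv(W_gen)                  # amortized: data -> belief

def infer(data, n_iter=20, alpha=0.1):
    z = W_amort @ data                           # fast amortized "feedforward sweep"
    for _ in range(n_iter):                      # slow iterative refinement
        err = data - W_gen @ z                   # prediction error
        z += alpha * (W_gen.T @ err)             # update beliefs to reduce it
    return z

z_true = rng.normal(size=3)
print(infer(W_gen @ z_true))                     # recovers approximately z_true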

Language: English

Citations: 31

Predictive coding networks for temporal prediction
Beren Millidge, Mufeng Tang, Mahyar Osanlouy, et al.

PLoS Computational Biology, Journal Year: 2024, Volume and Issue: 20(4), P. e1011183 - e1011183

Published: April 1, 2024

One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding have discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and questions on the neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to neurons, and learning only utilises local Hebbian plasticity. Additionally, we show that temporal predictive coding networks can approximate the performance of the Kalman filter in predicting the behaviour of linear systems, and behave as a variant of a Kalman filter which does not track its own subjective posterior variance. Importantly, temporal predictive coding networks can achieve similar accuracy without performing complex mathematical operations, but just employing simple computations that can be implemented by biological networks. Moreover, when trained with natural dynamic inputs, we found that temporal predictive coding can produce Gabor-like, motion-sensitive receptive fields resembling those observed in real neurons in visual areas. In addition, we demonstrate how the model can be effectively generalized to nonlinear systems. Overall, the models presented in this paper show how biologically plausible circuits can predict future stimuli and may guide research on understanding specific areas of the brain involved in temporal prediction.
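A sketch of temporal predictive coding tracking a linear dynamical system, in the spirit of (though much simpler than) the paper's model. At each step the latent estimate z is relaxed to minimize F = ||y_t - C z||^2 + ||z - A z_prev||^2 using only local prediction errors; as in the paper's analysis, no posterior variance is tracked. All matrices and noise levels are made up for illustration.

import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.99, 0.1], [-0.1, 0.99]])   # latent dynamics (slow rotation)
C = rng.normal(size=(3, 2))                 # observation matrix

x = np.array([1.0, 0.0])                    # true latent state
z = np.zeros(2)                             # temporal-PC estimate

for t in range(100):
    x = A @ x + 0.01 * rng.normal(size=2)   # world evolves
    y = C @ x + 0.05 * rng.normal(size=3)   # noisy observation
    z_pred = A @ z                          # temporal prediction from last estimate
    z = z_pred.copy()
    for _ in range(20):                     # relax z by descending F
        e_obs = y - C @ z                   # sensory prediction error
        e_dyn = z - z_pred                  # dynamics prediction error
        z += 0.1 * (C.T @ e_obs - e_dyn)

print("true:", x, "estimate:", z)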

Language: English

Citations: 9

The calcitron: A simple neuron model that implements many learning rules via the calcium control hypothesis
Toviah Moldwin, Li Shay Azran, Idan Segev, et al.

PLoS Computational Biology, Journal Year: 2025, Volume and Issue: 21(1), P. e1012754 - e1012754

Published: Jan. 29, 2025

Theoretical neuroscientists and machine learning researchers have proposed a variety of learning rules to enable artificial neural networks to effectively perform both supervised and unsupervised learning tasks. It is not always clear, however, how these theoretically-derived rules relate to biological mechanisms of plasticity in the brain, or how these different rules might be mechanistically implemented in different contexts and brain regions. This study shows that the calcium control hypothesis, which relates synaptic plasticity to the calcium concentration ([Ca2+]) in dendritic spines, can produce a diverse array of learning rules. We propose a simple, perceptron-like neuron model, the calcitron, that has four sources of [Ca2+]: local (following the activation of an excitatory synapse and confined to that synapse), heterosynaptic (resulting from the activity of other synapses), postsynaptic spike-dependent, and supervisor-dependent. We demonstrate that by modulating the thresholds and influx from each calcium source, we can reproduce a wide range of plasticity protocols, such as Hebbian and anti-Hebbian learning, frequency-dependent plasticity, and recognition of frequently repeating input patterns. Moreover, by devising simple circuits that provide supervisory signals, we show how the calcitron can implement homeostatic plasticity, perceptron learning, and BTSP-inspired one-shot learning. Our study bridges the gap between theoretical learning algorithms and their biological counterparts, not only replicating established learning paradigms but also introducing novel rules.
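A hedged sketch of the calcium control rule described above: each synapse sums calcium from the four sources and compares the total with a depression threshold theta_D and a potentiation threshold theta_P. The coefficients and thresholds below are made-up illustrative values, not the paper's fitted parameters.

import numpy as np

def calcitron_step(w, x, spike, supervisor,
                   c_local=0.6, c_het=0.1, c_spike=0.4, c_sup=0.8,
                   theta_D=0.5, theta_P=0.9, eta=0.05):
    ca = (c_local * x                      # local: this synapse's own input
          + c_het * x.sum()                # heterosynaptic: overall input activity
          + c_spike * spike                # postsynaptic spike-dependent
          + c_sup * supervisor)            # supervisor-dependent
    dw = np.zeros_like(w)
    dw[ca >= theta_P] = +eta                       # high [Ca2+]: potentiation
    dw[(ca >= theta_D) & (ca < theta_P)] = -eta    # intermediate [Ca2+]: depression
    return w + dw                                  # sub-threshold [Ca2+]: no change

w = np.full(5, 0.5)
x = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # two active synapses
w = calcitron_step(w, x, spike=1.0, supervisor=0.0)
print(w)   # active synapses potentiate; inactive ones depress heterosynaptically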

Language: English

Citations: 1

Recurrent predictive coding models for associative memory employing covariance learning
Mufeng Tang, Tommaso Salvatori, Beren Millidge, et al.

PLoS Computational Biology, Journal Year: 2023, Volume and Issue: 19(4), P. e1010719 - e1010719

Published: April 14, 2023

The computational principles adopted by the hippocampus in associative memory (AM) tasks have been one of the most studied topics in computational and theoretical neuroscience. Recent theories have suggested that AM and the predictive activities of the hippocampus could be described within a unitary account, and that predictive coding underlies the computations supporting AM in the hippocampus. Following this theory, a computational model based on classical hierarchical predictive networks was proposed and shown to perform well in various AM tasks. However, this fully hierarchical model did not incorporate recurrent connections, an architectural component of the CA3 region that is crucial for AM. This makes the structure of the model inconsistent with the known connectivity of CA3 and with classical recurrent models such as Hopfield Networks, which learn the covariance of inputs through their recurrent connections to perform AM. Earlier PC models that learn the covariance information of inputs explicitly via recurrent connections seem a solution to these issues. Here, we show that although these models can perform AM, they do so in an implausible and numerically unstable way. Instead, we propose alternatives to the earlier covariance-learning predictive coding networks, which learn the covariance information implicitly and plausibly, and can use dendritic structures to encode prediction errors. We show analytically that our proposed models are perfectly equivalent to the earlier models learning covariance explicitly, and encounter no numerical issues when performing AM in practice. We further show how our models can be combined to model hippocampo-neocortical interactions. Our models provide a biologically plausible approach to modelling the hippocampal network, pointing to a potential computational mechanism during memory formation and recall, which employs both predictive coding and covariance learning based on the recurrent network structure.
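For reference, a sketch of the covariance-based associative memory this line of work starts from (a classical Hopfield-style network, not the authors' dendritic variant): recurrent weights store the covariance of the stored patterns, and a corrupted cue is completed by repeatedly passing it through them.

import numpy as np

rng = np.random.default_rng(4)
patterns = rng.choice([-1.0, 1.0], size=(3, 16))     # 3 patterns, 16 units
W = patterns.T @ patterns / patterns.shape[1]        # covariance learning
np.fill_diagonal(W, 0.0)                             # no self-connections

cue = patterns[0].copy()
cue[:6] = rng.choice([-1.0, 1.0], size=6)            # corrupt part of the cue

x = cue
for _ in range(10):                                  # recurrent recall dynamics
    x = np.sign(W @ x)

print("overlap with stored pattern:", (x * patterns[0]).mean())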

Language: English

Citations: 18