Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms to Hardware
Nitin Rathi, Indranil Chakraborty, Adarsh Kumar Kosta

et al.

ACM Computing Surveys, Journal Year: 2022, Volume and Issue: 55(12), P. 1 - 49

Published: Nov. 17, 2022

Neuromorphic Computing, a concept pioneered in the late 1980s, is receiving a lot of attention lately due to its promise of reducing the computational energy, latency, and learning complexity of artificial neural networks. Taking inspiration from neuroscience, this interdisciplinary field performs a multi-stack optimization across devices, circuits, and algorithms, providing an end-to-end approach to achieving brain-like efficiency in machine intelligence. On one side, neuromorphic computing introduces a new algorithmic paradigm, known as Spiking Neural Networks (SNNs), which marks a significant shift from standard deep learning by transmitting information as spikes ("1" or "0") rather than analog values. This has opened up novel research directions to formulate methods to represent data as spike-trains, develop neuron models that can process information over time, design networks for event-driven dynamical systems, and engineer network architectures that are amenable to sparse, asynchronous computation and achieve lower power consumption. On the other side, a parallel thrust focuses on the development of efficient computing platforms for these new algorithms. Standard accelerators designed for deep learning workloads are not particularly suitable for handling spike-based processing over multiple timesteps efficiently. To this effect, researchers have designed neuromorphic hardware that relies on event-driven sparse computations as well as efficient matrix operations. While most large-scale neuromorphic systems explored to date have been based on CMOS technology, recently, Non-Volatile Memory (NVM) technologies show promise toward implementing bio-mimetic functionalities in single devices. In this article, we outline several strides that spiking neural networks (SNNs) have taken in the recent past, and present our outlook on the challenges this field needs to overcome to make the bio-plausibility route a successful one.
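
The abstract above mentions formulating methods to "represent data as spike-trains." As a minimal illustrative sketch (not taken from the paper; function name and parameters are assumptions), rate coding turns an analog intensity in [0, 1] into the per-timestep firing probability of a binary spike train:

```python
import random

def rate_encode(value, timesteps, seed=None):
    """Poisson-style rate coding: emit a binary spike train whose
    average firing rate approximates an analog value in [0, 1]."""
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(timesteps)]

# A pixel intensity of 0.8 becomes a dense spike train; 0.1 a sparse one.
dense = rate_encode(0.8, timesteps=100, seed=0)
sparse = rate_encode(0.1, timesteps=100, seed=0)
assert sum(dense) > sum(sparse)
```

Longer spike trains approximate the analog value more closely, which is one reason rate-coded SNNs trade inference latency for accuracy.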

Language: English

Memory devices and applications for in-memory computing
Abu Sebastian, Manuel Le Gallo, Riduan Khaddam-Aljameh

et al.

Nature Nanotechnology, Journal Year: 2020, Volume and Issue: 15(7), P. 529 - 544

Published: March 30, 2020

Language: English

Citations

1496

Towards spike-based machine intelligence with neuromorphic computing

Kaushik Roy, Akhilesh Jaiswal, Priyadarshini Panda

et al.

Nature, Journal Year: 2019, Volume and Issue: 575(7784), P. 607 - 617

Published: Nov. 27, 2019

Language: English

Citations

1410

Deep learning in spiking neural networks
Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh

et al.

Neural Networks, Journal Year: 2018, Volume and Issue: 111, P. 47 - 63

Published: Dec. 18, 2018

Language: English

Citations

1022

Deep Learning With Spiking Neurons: Opportunities and Challenges
Michael Pfeiffer, Thomas Pfeil

Frontiers in Neuroscience, Journal Year: 2018, Volume and Issue: 12

Published: Oct. 25, 2018

Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods is presented, ranging from the conversion of conventional deep networks into SNNs, constrained training before conversion, spiking variants of backpropagation, and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply as two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, to exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.
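
Among the biologically motivated training rules this review surveys is spike-timing-dependent plasticity (STDP). A minimal pair-based sketch (illustrative only; the function name, learning rates, and time constant are assumptions, not taken from the review) shows its core asymmetry: a presynaptic spike that precedes a postsynaptic one strengthens the synapse, and the reverse ordering weakens it.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike (LTP), depress when it follows it (LTD).
    The magnitude decays exponentially with the spike-time gap."""
    dt = t_post - t_pre
    if dt > 0:       # pre before post -> strengthen
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post before pre -> weaken
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, 0.0), 1.0)   # keep the weight in [0, 1]

w_ltp = stdp_update(0.5, t_pre=10.0, t_post=15.0)   # causal pairing
w_ltd = stdp_update(0.5, t_pre=15.0, t_post=10.0)   # anti-causal pairing
assert w_ltp > 0.5 > w_ltd
```

Because each update depends only on locally available spike times, rules of this family map naturally onto on-chip learning hardware, which is part of their appeal in this review.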

Language: English

Citations

599

Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures
Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda

et al.

Frontiers in Neuroscience, Journal Year: 2020, Volume and Issue: 14

Published: Feb. 28, 2020

Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-line trained Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, it is still a difficult problem to directly train deep SNNs using input spike events due to the discontinuous, non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of LIF neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments show the effectiveness of the proposed spike-based learning strategy on deep networks (VGG and Residual architectures) by achieving the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets compared to other SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference operation in the spiking domain.
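
The paper's approximate derivative additionally models the leak of the LIF neuron; the sketch below shows only the generic idea behind such methods, replacing the step function's zero-almost-everywhere derivative with a bounded surrogate in the backward pass. It is a simplified stand-in, not the authors' exact formulation, and the window width is an assumed parameter.

```python
def spike(v, threshold=1.0):
    """Forward pass: non-differentiable spike generation (Heaviside step)."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, width=0.5):
    """Backward pass: a piecewise-constant surrogate for d(spike)/dv.
    Gradient flows only when the membrane potential is near threshold;
    elsewhere it is blocked, just like the true derivative."""
    return 1.0 / (2.0 * width) if abs(v - threshold) < width else 0.0

# Near threshold the surrogate passes gradient; far away it blocks it.
assert spike(1.2) == 1.0
assert surrogate_grad(0.9) > 0.0
assert surrogate_grad(0.0) == 0.0
```

In a full implementation this surrogate is applied at every timestep and the credit assignment is unrolled through time, which is what lets backpropagation train the network despite the discontinuous spiking nonlinearity.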

Language: English

Citations

359

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks
Wei Fang, Zhaofei Yu, Yanqi Chen

et al.

2021 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2021, Volume and Issue: unknown, P. 2641 - 2651

Published: Oct. 1, 2021

Spiking Neural Networks (SNNs) have attracted enormous research interest due to their temporal information processing capability, low power consumption, and high biological plausibility. However, the formulation of efficient, high-performance learning algorithms for SNNs is still challenging. Most existing learning methods learn weights only, and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron. These parameters are typically chosen to be the same for all neurons, which limits the diversity of neurons and thus the expressiveness of the resulting SNNs. In this paper, we take inspiration from the observation that membrane-related parameters differ across brain regions, and propose a training algorithm capable of learning not only the synaptic weights but also the membrane time constants of SNNs. We show that incorporating learnable membrane time constants can make the network less sensitive to initial values and can speed up learning. In addition, we reevaluate pooling methods in SNNs and find that max-pooling will not lead to significant information loss and has the advantage of low computation cost and binary compatibility. We evaluate the proposed method on image classification tasks on both traditional static datasets (MNIST, Fashion-MNIST, CIFAR-10) and neuromorphic datasets (N-MNIST, CIFAR10-DVS, DVS128 Gesture). The experiment results show that the proposed method outperforms the state-of-the-art accuracy on nearly all datasets, using fewer time-steps. Our codes are available at https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron.
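
A common way to make the membrane time constant trainable is to parameterize the decay factor through a sigmoid of an unconstrained scalar, so gradient descent can adjust it freely while the dynamics stay stable. The sketch below is an illustrative simplification in that spirit (function names, the input drive, and the reset rule are assumptions, not the authors' exact implementation):

```python
import math

def plif_step(v, x, a, v_th=1.0):
    """One step of a parametric LIF neuron. The leak factor is
    sigmoid(a), kept in (0, 1), so the effective membrane time
    constant is learned via the unconstrained scalar `a`.
    Returns (new membrane potential, spike)."""
    decay = 1.0 / (1.0 + math.exp(-a))   # sigmoid keeps the leak in (0, 1)
    v = decay * v + x                    # leak, then integrate the input
    if v >= v_th:                        # fire and reset
        return 0.0, 1
    return v, 0

def steps_to_spike(a, x=0.3, max_steps=50):
    """How many steps a neuron with parameter `a` needs to fire
    under a constant sub-threshold drive."""
    v = 0.0
    for t in range(1, max_steps + 1):
        v, s = plif_step(v, x, a)
        if s:
            return t
    return max_steps + 1   # never fired within the horizon

# A slowly leaking neuron (large a) integrates weak input to a spike;
# a fast-leaking one (small a) may never reach threshold.
assert steps_to_spike(a=3.0) < steps_to_spike(a=-3.0)
```

Because `a` enters the update differentiably, the same surrogate-gradient machinery that trains the weights can also train the time constant, which is the core idea the abstract describes.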

Language: English

Citations

345

A review of learning in biologically plausible spiking neural networks
Aboozar Taherkhani, Ammar Belatreche, Yuhua Li

et al.

Neural Networks, Journal Year: 2019, Volume and Issue: 122, P. 253 - 272

Published: Oct. 12, 2019

Language: English

Citations

328

Spiking Neural Networks and Their Applications: A Review
Kashu Yamazaki, Viet-Khoa Vo-Ho, Darshan Bulsara

et al.

Brain Sciences, Journal Year: 2022, Volume and Issue: 12(7), P. 863 - 863

Published: June 30, 2022

The past decade has witnessed the great success of deep neural networks in various domains. However, deep neural networks are very resource-intensive in terms of energy consumption, data requirements, and computational costs. With the recent increasing need for the autonomy of machines in the real world, e.g., self-driving vehicles, drones, and collaborative robots, the exploitation of deep neural networks in those applications has been actively investigated. In those applications, energy and computational efficiencies are especially important because of the need for real-time responses and the limited energy supply. A promising solution to these previously infeasible applications has recently been given by biologically plausible spiking neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out the computation. Due to their functional similarity to the biological neural network, spiking neural networks can embrace the sparsity found in biology and are highly compatible with temporal code. Our contributions in this work are: (i) we give a comprehensive review of theories of biological neurons; (ii) we present various existing spike-based neuron models, which have been studied in neuroscience; (iii) we detail synapse models; (iv) we provide a review of artificial neural networks; (v) we provide detailed guidance on how to train spike-based neuron models; (vi) we revise available frameworks that have been developed to support implementing spiking neural networks; (vii) finally, we cover spiking neural network applications in the computer vision and robotics domains. The paper concludes with discussions of future perspectives.

Language: English

Citations

295

RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network
Bing Han, Gopalakrishnan Srinivasan, Kaushik Roy

et al.

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2020, Volume and Issue: unknown

Published: June 1, 2020

Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. The best performing SNNs for image recognition tasks are obtained by converting a trained Analog Neural Network (ANN), consisting of Rectified Linear Units (ReLU), to an SNN composed of integrate-and-fire neurons with "proper" firing thresholds. The converted SNNs typically incur loss in accuracy compared to that provided by the original ANN, and require a sizable number of inference time-steps to achieve the best accuracy. We find that the performance degradation stems from using a "hard reset" spiking neuron that is driven to a fixed reset potential once its membrane potential exceeds the firing threshold, leading to information loss during inference. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above threshold at the firing instants. We demonstrate near loss-less ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10 (93.63% top-1), CIFAR-100 (70.93% top-1), and ImageNet (73.09% top-1 accuracy). Our results also show that RMP-SNN surpasses the best inference accuracy provided by the converted SNN with "hard reset" spiking neurons using 2-8 times fewer inference time-steps across network architectures and datasets.
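
The difference between the "hard reset" and the "soft reset" (residual membrane potential) neuron is easy to see in a few lines: a hard reset discards whatever charge accumulated above threshold, while the soft reset subtracts the threshold and preserves the residual, so the output spike rate tracks the input drive more faithfully. The sketch below is illustrative (not the authors' code; the drive value and horizon are assumptions):

```python
def if_step(v, x, v_th=1.0, soft_reset=True):
    """One integrate-and-fire step. On firing, a hard reset drives the
    membrane to 0 and loses the surplus above threshold; the soft
    (RMP-style) reset subtracts the threshold and keeps the residual."""
    v += x
    if v < v_th:
        return v, 0
    return (v - v_th if soft_reset else 0.0), 1

def run(inputs, soft_reset):
    """Count output spikes for a given input drive sequence."""
    v, spikes = 0.0, 0
    for x in inputs:
        v, s = if_step(v, x, soft_reset=soft_reset)
        spikes += s
    return spikes

# A constant drive of 0.75 injects 6.0 units of charge over 8 steps,
# so an ideal rate code would emit 6 spikes.
inputs = [0.75] * 8
soft = run(inputs, soft_reset=True)    # 6 spikes: no charge is lost
hard = run(inputs, soft_reset=False)   # fewer spikes: surplus discarded
assert soft == 6 and hard < soft
```

This under-counting by the hard-reset neuron is exactly the information loss the abstract identifies as the source of the accuracy gap and the extra time-steps needed after conversion.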

Language: English

Citations

243

BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python
Hananel Hazan, Daniel J. Saunders, Hassaan Furqan Khan

et al.

Frontiers in Neuroinformatics, Journal Year: 2018, Volume and Issue: 12

Published: Dec. 12, 2018

The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared towards machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep learning library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing backends, e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples of using BindsNET in practice. BindsNET code is available at https://github.com/Hananel-Hazan/bindsnet. To install the version used in this paper, run pip install bindsnet==0.2.1.

Language: English

Citations

242