Data-knowledge co-driven innovations in engineering and management
Yingji Xia, Xiqun Chen,

Sudan Sun

et al.

Patterns, Year: 2024, Volume 5(12), pp. 101114 - 101114

Published: Dec. 1, 2024

Language: English

The neurobench framework for benchmarking neuromorphic computing algorithms and systems
Jason Yik,

Korneel Van den Berghe,

Douwe den Blanken

et al.

Nature Communications, Year: 2025, Volume 16(1)

Published: Feb. 11, 2025

Language: English

Cited by

3

Toward Cognitive Machines: Evaluating Single Device Based Spiking Neural Networks for Brain-Inspired Computing
Faisal Bashir, Ali Alzahrani, Haider Abbas

et al.

ACS Applied Electronic Materials, Year: 2025, Volume unknown

Published: Feb. 14, 2025

A brain-inspired computing paradigm known as "neuromorphic computing" seeks to replicate the information-processing mechanisms of biological neural systems in order to create systems that are effective, low-power, and adaptable. Spiking neural networks (SNNs) based on a single device are at the forefront of this paradigm, which aims to mimic the powers of the human brain. Neuromorphic devices, which enable hardware implementation of artificial neural networks (ANNs), are at the heart of neuromorphic computing; they emulate the dynamics and functions of neurons and synapses. This mini-review assesses the latest advancements, with an emphasis on small, energy-efficient synapses and neurons. Key behaviors like spike-timing-dependent plasticity, multistate storage, and dynamic filtering are demonstrated by a variety of single-device models, such as memristors, transistors, and magnetic and ferroelectric devices. The integrate-and-fire (IF) neuron is a key model among these because it allows for mathematical analysis while successfully capturing essential aspects of neural processing. The review examines the potential of SNNs for scalable, low-power applications, highlighting both the benefits and constraints of implementing them in hardware architectures, and underlines their increasing importance for the creation of flexible cognitive machines.

Language: English

Cited by

2
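The mini-review above treats the integrate-and-fire (IF) neuron as the mathematically tractable building block of single-device SNNs. Purely as an illustration (not code from the paper), a minimal leaky integrate-and-fire neuron can be written in a few lines of Python; the time constant, threshold, and reset values below are arbitrary choices for the sketch.

import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over an input current trace."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: the membrane relaxes toward rest while driven by input.
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset        # reset the membrane after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold drive produces a regular spike train.
rate = lif_neuron(np.full(200, 1.5)).mean()
print(f"mean firing probability per step: {rate:.2f}")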

Exploiting deep learning accelerators for neuromorphic workloads
Pao-Sheng Vincent Sun, Alexander Titterton,

Anjlee Gopiani

et al.

Neuromorphic Computing and Engineering, Year: 2024, Volume 4(1), pp. 014004 - 014004

Published: Jan. 29, 2024

Abstract Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency when performing inference with deep learning workloads. Error backpropagation is presently regarded as the most effective method for training SNNs, but in a twist of irony, on modern graphics processing units it becomes more expensive than training non-spiking networks. The emergence of Graphcore's intelligence processing units (IPUs) balances the parallelized nature of deep learning workloads with the sequential, reusable, and sparsified operations prevalent in SNNs. IPUs adopt multi-instruction multi-data parallelism by running individual processing threads on smaller data blocks, which is a natural fit for the sequential, non-vectorized steps required to solve spiking neuron dynamical state equations. We present an IPU-optimized release of our custom SNN Python package, snnTorch, which exploits fine-grained parallelism by utilizing low-level, pre-compiled custom operations to accelerate the irregular and sparse data access patterns that are characteristic of SNN workloads. We provide a rigorous performance assessment across a suite of commonly used models and propose methods to further reduce run-time via half-precision training. By amortizing the cost of sequential processing into vectorizable population codes, we ultimately demonstrate the potential of integrating domain-specific accelerators with the next generation of SNNs.

Language: English

Cited by

3
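The paper above builds on snnTorch, whose neuron layers are stepped sequentially through time, which is exactly the access pattern its IPU port accelerates. For orientation, here is a minimal loop assuming snnTorch's standard snn.Leaky API on CPU/GPU; it does not reflect the IPU-specific release described in the paper, and the decay factor, step count, and tensor shapes are arbitrary.

import torch
import snntorch as snn
from snntorch import surrogate

# One leaky integrate-and-fire layer with a fast-sigmoid surrogate gradient.
lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())

num_steps, batch, features = 25, 8, 100
inputs = torch.rand(num_steps, batch, features)   # synthetic input currents

mem = lif.init_leaky()            # initialise the membrane state
spk_rec = []
for step in range(num_steps):     # sequential state update at each time step
    spk, mem = lif(inputs[step], mem)
    spk_rec.append(spk)

spikes = torch.stack(spk_rec)     # (time, batch, features) binary spike tensor
print(spikes.mean().item())       # average firing rate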

A photonics perspective on computing with physical substrates
Stella Abreu, И. В. Бойков, Michel Goldmann

et al.

Reviews in Physics, Year: 2024, Volume 12, pp. 100093 - 100093

Published: June 15, 2024

We provide a perspective on the fundamental relationship between physics and computation, exploring the conditions under which a physical system can be harnessed for computation and the practical means to achieve this. Unlike traditional digital computers, which impose a discrete nature on continuous substrates, unconventional computing embraces the inherent properties of physical systems. Exploring simultaneously the intricacies of physical implementations and applied computational paradigms, we discuss interdisciplinary developments in unconventional computing. Here, we focus on the potential of photonic substrates for unconventional computing, implementing artificial neural networks to solve data-driven machine learning tasks. Several network architectures are discussed, highlighting their advantages over electronic counterparts in terms of speed and energy efficiency. Finally, we address the challenges of achieving programmability within physical substrates, outlining key strategies for future research.

Language: English

Cited by

3
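The perspective above is about harnessing a physical system for machine learning without digitising or retraining the substrate itself. A common way to make that concrete is reservoir computing, where the substrate is treated as a fixed nonlinear dynamical system and only a linear readout is trained. The sketch below simulates a random stand-in reservoir in Python; it is a generic illustration, not a method from the paper, and every size and constant is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a fixed physical substrate: a random recurrent network whose
# internal parameters are never trained, only driven and observed.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale down to keep the dynamics stable

def run_reservoir(u):
    """Drive the fixed nonlinear system and record its state trajectory."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Task: one-step-ahead prediction of a sine wave.
t = np.arange(1000)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])
y = u[1:]

# Only the linear readout is trained (ridge regression); the "substrate" is untouched.
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ w_out - y) ** 2))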

Slax: a composable JAX library for rapid and flexible prototyping of spiking neural networks

Thomas M. Summe,

Siddharth Joshi

Neuromorphic Computing and Engineering, Year: 2025, Volume 5(1), pp. 014007 - 014007

Published: Jan. 17, 2025

Abstract Spiking neural networks (SNNs) offer rich temporal dynamics and unique capabilities, but their training presents challenges. While backpropagation through time with surrogate gradients is the de facto standard for training SNNs, it scales poorly to long sequences. Alternative learning rules and algorithms could help further develop models and systems across the spectrum of performance, bio-plausibility, and complexity. However, these alternatives are not consistently implemented in the same, if any, SNN framework, often complicating comparison and use. To address this, we introduce Slax, a JAX-based library designed to accelerate SNN algorithm design and evaluation. Slax is compatible with the broader JAX and Flax ecosystem and provides optimized implementations of diverse training algorithms, enabling direct performance comparisons. Its toolkit includes methods to visualize and debug algorithms through loss landscapes, gradient similarities, and other metrics of model behavior during training. By streamlining the implementation and evaluation of novel algorithms, Slax aims to facilitate research and development in this promising field.

Language: English

Cited by

0
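Slax targets surrogate-gradient training of SNNs in JAX. Since Slax's own API is not reproduced here, the sketch below shows the underlying idea in plain JAX: a Heaviside spike function given a custom surrogate gradient via jax.custom_vjp, wrapped in a leaky integrate-and-fire update and scanned over time. All function names and constants are illustrative only.

import jax
import jax.numpy as jnp

@jax.custom_vjp
def spike(v):
    # Forward pass: hard threshold (Heaviside step) on the membrane potential.
    return (v > 0.0).astype(jnp.float32)

def spike_fwd(v):
    return spike(v), v

def spike_bwd(v, g):
    # Backward pass: a smooth fast-sigmoid surrogate replaces the zero gradient.
    return (g / (1.0 + 10.0 * jnp.abs(v)) ** 2,)

spike.defvjp(spike_fwd, spike_bwd)

def lif_step(v, x, beta=0.9, threshold=1.0):
    v = beta * v + x               # leaky integration of the input current
    s = spike(v - threshold)       # spike via the surrogate-gradient threshold
    v = v - s * threshold          # soft reset after a spike
    return v, s

# Scan a single neuron over a random input sequence.
inputs = jax.random.uniform(jax.random.PRNGKey(0), (100,))
v_final, spikes = jax.lax.scan(lif_step, jnp.float32(0.0), inputs)
print(spikes.sum())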

All-Josephson junction logic cells and bio-inspired neuron based on 00π junction inductorless blocks

A. A. Maksimovskaya,

V. I. Ruzhickiy, N. V. Klenov

et al.

Chaos, Solitons & Fractals, Year: 2025, Volume 193, pp. 116074 - 116074

Published: Feb. 6, 2025

Cited by

0

Spiking neural networks on FPGA: A survey of methodologies and recent advancements
Mehrzad Karamimanesh, Ebrahim Abiri, Mahyar Shahsavari

et al.

Neural Networks, Year: 2025, Volume 186, pp. 107256 - 107256

Published: Feb. 14, 2025

Language: English

Cited by

0

Antifragile control systems in neuronal processing: a sensorimotor perspective
Cristian Axenie

Biological Cybernetics, Year: 2025, Volume 119(2-3)

Published: Feb. 15, 2025

Language: English

Cited by

0

ChemComp: A Compilation Framework for Computing with Chemical Reaction Networks
Nicolas Bohm Agostini, Connah Johnson, William R. Cannon

et al.

Proceedings of the 28th Asia and South Pacific Design Automation Conference, Year: 2025, Volume unknown, pp. 872 - 878

Published: Jan. 20, 2025

Language: English

Cited by

0

The road to commercial success for neuromorphic technologies
Dylan R. Muir, Sadique Sheik

Nature Communications, Year: 2025, Volume 16(1)

Published: April 15, 2025

Neuromorphic technologies adapt biological neural principles to synthesise high-efficiency computational devices, characterised by continuous real-time operation and sparse event-based communication. After several false starts, a confluence of advances now promises widespread commercial adoption. Gradient-based training of deep spiking networks is now an off-the-shelf technique for building general-purpose applications, with open-source tools underwritten by theoretical results. Analog and mixed-signal circuit designs are being replaced by digital equivalents in newer devices, simplifying application deployment while maintaining computational benefits. Designs for in-memory computing are also approaching maturity. Solving two key problems, namely how to program general applications and how to deploy them at scale, clears the way to commercial success for neuromorphic processors. Ultra-low-power neuromorphic technology will find a home in battery-powered systems, local compute for the internet-of-things, and consumer wearables. Inspiration from the uptake of tensor processors and GPUs can help the field overcome the remaining hurdles.

Language: English

Cited by

0