Spike-HAR++: an energy-efficient and lightweight parallel spiking transformer for event-based human action recognition
Xinxu Lin, Mingxuan Liu, Hong Chen et al.

Frontiers in Computational Neuroscience, Year: 2024, Volume and Issue: 18

Published: Nov. 26, 2024

Event-based cameras are well suited to human action recognition (HAR), providing movement perception with high dynamic range, high temporal resolution, power efficiency, and low latency. Spiking Neural Networks (SNNs) are naturally suited to processing the asynchronous, sparse data from event cameras due to their spike-based, event-driven paradigm and lower energy consumption compared with artificial neural networks. In this paper, we propose two end-to-end SNNs, namely Spike-HAR and Spike-HAR++, to introduce the spiking transformer into event-based HAR. Spike-HAR includes two novel blocks: a spike attention branch, which enables the model to focus on regions with high spike rates, reducing the impact of noise and improving accuracy; and a parallel transformer block with a simplified self-attention mechanism, increasing computational efficiency. To better extract crucial information from high-level features, we modify the architecture of the attention branch and extend it to a higher dimension, proposing Spike-HAR++ to further enhance classification performance. Comprehensive experiments were conducted on four HAR datasets (SL-Animals-DVS, N-LSA64, DVS128 Gesture, and DailyAction-DVS) and demonstrate the superior performance of our proposed models. Additionally, Spike-HAR and Spike-HAR++ require only 0.03 and 0.06 mJ, respectively, to process a sequence of event frames, with model sizes of only 0.7 M and 1.8 M parameters. This positions them as a promising new SNN baseline for the HAR community. Code is available at Spike-HAR++.
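To make the efficiency claim concrete, here is a minimal PyTorch sketch of a softmax-free spiking self-attention block in the spirit of the abstract above. All names (SpikeFn, SimplifiedSpikingSelfAttention, the rectangular surrogate gradient, the scale constant) are our own assumptions for illustration, not the authors' released code; the point is that with binary spike Q, K, and V, the attention product reduces to accumulate operations and needs no softmax.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike activation with a rectangular surrogate gradient
    (one common choice for training binary units end to end)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold (window of width 1).
        return grad_out * (v.abs() < 0.5).float()


class SimplifiedSpikingSelfAttention(nn.Module):
    """Softmax-free self-attention over binary spike features.

    With binary Q, K, V, the product Q @ K^T @ V involves only additions,
    which is the source of the computational savings claimed for
    spiking-transformer blocks of this kind.
    """

    def __init__(self, dim: int, scale: float = 0.125):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) real-valued membrane-like features
        q = SpikeFn.apply(self.q_proj(x))  # binary spike queries
        k = SpikeFn.apply(self.k_proj(x))
        v = SpikeFn.apply(self.v_proj(x))
        attn = q @ k.transpose(-2, -1) * self.scale  # no softmax required
        return SpikeFn.apply(attn @ v)  # binary spiking output


# Usage: module = SimplifiedSpikingSelfAttention(dim=64)
#        out = module(torch.randn(2, 16, 64))  # -> (2, 16, 64), values in {0, 1}
```

In the full model described by the abstract, a spike attention branch emphasizing high-firing-rate regions would run alongside such a block; the sketch covers only the simplified self-attention part.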

Language: English

Towards parameter-free attentional spiking neural networks
Pengfei Sun, Jibin Wu, Paul Devos et al.

Neural Networks, Year: 2025, Volume and Issue: 185, P. 107154

Published: Jan. 16, 2025

Language: English

Citations: 1

Leakage Fault Diagnosis of Oil and Gas Pipelines Based on Improved Spiking Residual Network
Dongmei Wang, Dan Zhang, Yang Wu et al.

Flow Measurement and Instrumentation, Year: 2025, Volume and Issue: unknown, P. 102865

Published: Feb. 1, 2025

Language: English

Citations: 0

TR-SNN: a lightweight spiking neural network based on tensor ring decomposition
Shifeng Mao, Baoxin Yang, Hongze Sun et al.

Brain-Apparatus Communication: A Journal of Bacomics, Year: 2025, Volume and Issue: 4(1)

Published: Feb. 27, 2025

Language: English

Citations: 0

Biomotion-SNN: Spiking Neural Network Modeling for Visual Motion Processing
Ying Liu, Jiajun Mei, Tingting Feng et al.

Published: Jan. 1, 2025

Language: English

Citations: 0

BISNN: bio-information-fused spiking neural networks for enhanced EEG-based emotion recognition
Hongze Sun, Shifeng Mao, Wuque Cai et al.

Cognitive Neurodynamics, Year: 2025, Volume and Issue: 19(1)

Published: March 22, 2025

Language: English

Citations: 0

Tensor decomposition based attention module for spiking neural networks
Haoyu Deng, Ruijie Zhu, Xuerui Qiu et al.

Knowledge-Based Systems, Year: 2024, Volume and Issue: 295, P. 111780

Published: April 15, 2024

Language: English

Citations: 2

Reliable object tracking by multimodal hybrid feature extraction and transformer-based fusion
Hongze Sun, Rui Liu, Wuque Cai et al.

Neural Networks, Year: 2024, Volume and Issue: 178, P. 106493

Published: June 28, 2024

Language: English

Citations: 2

STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks
Xiyan Wu, Yong Song, Ya Zhou et al.

Frontiers in Neuroscience, Year: 2023, Volume and Issue: 17

Published: Nov. 10, 2023

Spiking Neural Networks (SNNs) have shown great promise in processing spatio-temporal information compared with Artificial Neural Networks (ANNs). However, a performance gap remains between SNNs and ANNs, which impedes the practical application of SNNs. With their intrinsic event-triggered property and temporal dynamics, SNNs have the potential to effectively extract features from event streams. To better leverage this potential, we propose a self-attention-based temporal-channel joint attention SNN (STCA-SNN) with end-to-end training, which infers attention weights along both the temporal and channel dimensions concurrently. It models global temporal-channel correlations via self-attention, enabling the network to learn 'what' and 'when' to attend to simultaneously. Our experimental results show that STCA-SNNs achieve better accuracy on N-MNIST (99.67%), CIFAR10-DVS (81.6%), and N-Caltech 101 (80.88%) than state-of-the-art SNNs. Meanwhile, our ablation study demonstrates that temporal-channel joint attention improves accuracy on event-stream classification tasks.
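As a concrete illustration of inferring 'what' and 'when' concurrently, below is a minimal PyTorch sketch of a temporal-channel joint attention gate. The module name, the rate-based token summary, and the sigmoid gating are our assumptions for illustration, not the STCA-SNN implementation; the idea sketched is that one self-attention pass over per-time-step channel firing rates yields weights that rescale the spike features along both the temporal and channel dimensions.

```python
import torch
import torch.nn as nn


class TemporalChannelJointAttention(nn.Module):
    """Gates spike features along time ('when') and channels ('what')
    with a single self-attention pass over channel firing rates."""

    def __init__(self, channels: int, embed: int = 32):
        super().__init__()
        self.q = nn.Linear(channels, embed, bias=False)
        self.k = nn.Linear(channels, embed, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, B, C, H, W) spike feature maps over T time steps
        t, b, c, h, w = x.shape
        # Summarize each time step by its per-channel firing rate.
        rates = x.float().mean(dim=(3, 4)).permute(1, 0, 2)   # (B, T, C)
        q, k = self.q(rates), self.k(rates)                   # (B, T, E)
        scores = torch.softmax(
            q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1  # (B, T, T)
        )
        joint = scores @ rates                                # (B, T, C)
        gate = torch.sigmoid(joint).permute(1, 0, 2)          # (T, B, C)
        # Broadcast the joint temporal-channel gate over the spatial dims.
        return x * gate[..., None, None]


# Usage: attn = TemporalChannelJointAttention(channels=64)
#        out = attn(torch.rand(4, 2, 64, 16, 16))  # -> same shape, rescaled
```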

Language: English

Citations: 5

Spike-VisNet: A Novel Framework for Visual Recognition with FocusLayer-STDP Learning
Ying Liu, Xiaoling Luo, Ya Zhang et al.

Published: Jan. 1, 2024

Language: English

Citations: 1

DSAN: Exploring the Relationship between Deformable Convolution and Spatial Attention
Zewen Yu, Xiaoqin Zhang, Zhao Li et al.

Published: April 16, 2024

Language: English

Citations: 0