YOLO-SAD for fire detection and localization in real-world images
Ruixin Yang, Jun Jiang, Feiyang Liu et al.

Digital Signal Processing, Journal Year: 2025, Volume and Issue: unknown, P. 105320 - 105320

Published: May 1, 2025

Language: English

Improving Fire and Smoke Detection with You Only Look Once 11 and Multi-Scale Convolutional Attention
Yuxuan Li, Lisha Nie, Fangrong Zhou et al.

Fire, Journal Year: 2025, Volume and Issue: 8(5), P. 165 - 165

Published: April 22, 2025

Fires pose significant threats to human safety, health, and property. Traditional methods, with their inefficient use of features, struggle to meet the demands of fire detection. You Only Look Once (YOLO), as an efficient deep learning object detection framework, can rapidly locate and identify fire and smoke objects in visual images. However, research utilizing the latest YOLO11 for fire detection remains sparse, and addressing scale variability as well as the practicality of models continues to be a focus. This study first compares the classic YOLO series to analyze its advantages in fire detection tasks. Then, to tackle the challenges of scale variability and model practicality, we propose a Multi-Scale Convolutional Attention (MSCA) mechanism, integrating it into YOLO11 to create YOLO11s-MSCA. Experimental results show that the model outperforms other approaches by balancing accuracy, speed, and practicality. YOLO11s-MSCA performs exceptionally well on the D-Fire dataset, improving overall accuracy by 2.6% and smoke recognition by 2.8%. It also demonstrates a stronger ability to detect small objects. Although challenges remain in handling occluded targets and complex backgrounds, the model exhibits strong robustness and generalization capabilities, maintaining performance in complicated environments.
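The core idea the abstract describes — aggregating convolutional responses at several kernel sizes into an attention gate that reweights the features — can be sketched in a minimal, illustrative form. This is not the paper's MSCA implementation; it is a 1-D NumPy analogue under assumed details (moving-average filters standing in for learned depthwise convolutions, a sigmoid gate, kernel sizes 3/5/7):

```python
import numpy as np

def avg_filter_1d(x, k):
    # Depthwise moving-average filter of width k with 'same' (edge) padding,
    # standing in for a learned depthwise convolution at one scale.
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)), mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[1]):
        out[:, i] = xp[:, i:i + k].mean(axis=1)
    return out

def msca_1d(x, scales=(3, 5, 7)):
    # Multi-scale convolutional attention (illustrative sketch):
    # aggregate responses at several kernel sizes, squash the aggregate
    # into per-position gates, and reweight the input features.
    agg = sum(avg_filter_1d(x, k) for k in scales) / len(scales)
    gate = 1.0 / (1.0 + np.exp(-agg))  # sigmoid attention weights in (0, 1)
    return x * gate
```

Because the gate lies in (0, 1), the module attenuates features rather than amplifying them; in the real detector the multi-scale branches and the gating projection would be learned jointly with the YOLO11 backbone.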

Language: English

Citations: 0

An efficient fire detection algorithm based on Mamba space state linear attention

Yuming Li, Yongjie Wang, Xiaorui Shao et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: April 2, 2025

As an emerging State Space Model (SSM), the Mamba model draws inspiration from the architecture of Recurrent Neural Networks (RNNs), significantly enhancing the global receptive field and feature extraction capabilities of object detection models. Compared to traditional Convolutional Neural Networks (CNNs) and Transformers, Mamba demonstrates superior performance in handling complex scale variations and multi-view interference, making it particularly suitable for tasks in dynamic environments such as fire scenarios. To enhance visual fire detection technologies and provide a novel approach, this paper proposes an efficient fire detection algorithm based on YOLOv9 and introduces multiple key techniques to design a high-performance model leveraging an attention mechanism. First, it presents an efficient attention mechanism, the Efficient Mamba Attention (EMA) module. Unlike existing self-attention mechanisms, EMA integrates adaptive average pooling with an SSM module, eliminating the need for full-scale association computations across all positions. Instead, it performs dimensionality reduction on input features through pooling and utilizes the state update mechanism of the SSM module to optimize representation and information flow. Second, to address the limitations of Mamba models in local modeling, the study incorporates a ConvNeXtV2 backbone network, improving the model's ability to capture fine-grained details and thereby strengthening its overall capability. Additionally, a non-monotonic focusing mechanism and a distance penalty strategy are employed to refine the loss function, leading to a substantial improvement in bounding box accuracy. Experimental results demonstrate that the proposed method performs well in fire detection tasks. The model achieves an FPS of 71, with [Formula: see text] of 91.0% on a large-scale dataset and 87.2% on a small-scale dataset. Compared with existing methods, the approach maintains high accuracy while exhibiting significant computational efficiency advantages.
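The efficiency argument in this abstract — replace all-pairs self-attention with adaptive average pooling plus a recurrent state update over the pooled summary — can be illustrated with a small sketch. This is not the paper's EMA module; it is a hypothetical NumPy analogue under assumed details (four pooling bins, a fixed scalar decay standing in for learned SSM parameters, per-channel sigmoid gates):

```python
import numpy as np

def ema_like_attention(x, pooled_len=4, decay=0.5):
    # Pooling-plus-state-update attention (illustrative sketch): instead of
    # full pairwise attention over all n positions (O(n^2)), reduce the
    # sequence to pooled_len bins, run a recurrent state update over the
    # summary (SSM-flavoured), and gate the input per channel.
    c, n = x.shape
    # Adaptive average pooling: split positions into pooled_len bins.
    bins = np.array_split(np.arange(n), pooled_len)
    pooled = np.stack([x[:, b].mean(axis=1) for b in bins], axis=1)  # (c, pooled_len)
    # Recurrent state update over the pooled summary; 'decay' is assumed
    # fixed here but would be learned in a real SSM block.
    state = np.zeros(c)
    for t in range(pooled_len):
        state = decay * state + (1.0 - decay) * pooled[:, t]
    gate = 1.0 / (1.0 + np.exp(-state))  # per-channel attention gate in (0, 1)
    return x * gate[:, None]
```

The cost is linear in the sequence length (one pass for pooling, a fixed-length recurrence), which matches the abstract's claim of avoiding full-scale association computations across all positions.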

Language: English

Citations: 0
