Using computer vision to classify, locate and segment fire behavior in UAS-captured images
Brett Lawrence, Emerson de Lemmus

Science of Remote Sensing, Journal Year: 2024, Volume and Issue: unknown, P. 100167 - 100167

Published: Sept. 1, 2024

Language: English

TFNet: Transformer-Based Multi-Scale Feature Fusion Forest Fire Image Detection Network
Hongying Liu, Fuquan Zhang, Yiqing Xu, et al.

Fire, Journal Year: 2025, Volume and Issue: 8(2), P. 59 - 59

Published: Jan. 30, 2025

Forest fires pose a severe threat to ecological environments and to the safety of human lives and property, making real-time forest fire monitoring crucial. This study addresses challenges in forest fire image object detection, including small targets, sparse smoke, and difficult feature extraction, by proposing TFNet, a Transformer-based multi-scale feature fusion detection network. TFNet integrates several components: an SRModule, a CG-MSFF Encoder, a Decoder and Head, and the WIoU Loss. The SRModule employs a multi-branch structure to learn diverse representations of forest fire images, utilizing 1 × 1 convolutions to generate redundant feature maps that enhance feature diversity. The CG-MSFF Encoder introduces a context-guided attention mechanism combined with adaptive feature fusion (AFF), enabling effective reweighting of features across layers and extracting both local and global representations. The Decoder and Head refine the output by iteratively optimizing target queries with self- and cross-attention, improving detection accuracy. Additionally, the WIoU Loss assigns varying weights to the IoU metric for predicted versus ground-truth boxes, thereby balancing positive and negative samples and improving localization. Experimental results on two publicly available datasets, D-Fire and M4SFWD, demonstrate that TFNet outperforms comparative models in terms of precision, recall, F1-score, mAP50, and mAP50–95. Specifically, on the D-Fire dataset, TFNet achieved a precision of 81.6%, a recall of 74.8%, an F1-score of 78.1%, an mAP50 of 81.2%, and an mAP50–95 of 46.8%. On M4SFWD, these metrics improved to 86.6%, 83.3%, 84.9%, 89.2%, and 52.2%, respectively. The proposed TFNet offers technical support for developing efficient and practical forest fire monitoring systems.
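
The abstract names the WIoU loss but not its exact form. Below is a minimal PyTorch sketch of a distance-weighted IoU loss in the spirit of Wise-IoU; the focusing weight and the helper names (`box_iou_xyxy`, `weighted_iou_loss`) are illustrative assumptions, not the paper's formulation.

```python
import torch

def box_iou_xyxy(pred, gt):
    # pred, gt: (N, 4) boxes in (x1, y1, x2, y2) format
    lt = torch.max(pred[:, :2], gt[:, :2])        # intersection top-left
    rb = torch.min(pred[:, 2:], gt[:, 2:])        # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    return inter / (area_p + area_g - inter + 1e-7)

def weighted_iou_loss(pred, gt):
    # Illustrative Wise-IoU-style focusing: boxes whose centres sit far from
    # the target (relative to the enclosing-box diagonal) get a larger penalty.
    iou = box_iou_xyxy(pred, gt)
    cp = (pred[:, :2] + pred[:, 2:]) / 2          # predicted centres
    cg = (gt[:, :2] + gt[:, 2:]) / 2              # ground-truth centres
    enc_lt = torch.min(pred[:, :2], gt[:, :2])
    enc_rb = torch.max(pred[:, 2:], gt[:, 2:])
    diag2 = ((enc_rb - enc_lt) ** 2).sum(dim=1).detach() + 1e-7
    weight = torch.exp(((cp - cg) ** 2).sum(dim=1) / diag2)
    return (weight * (1.0 - iou)).mean()
```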

Language: English

Citations: 5

Enhancing Human Detection in Occlusion-Heavy Disaster Scenarios: A Visibility-Enhanced DINO (VE-DINO) Model with Reassembled Occlusion Dataset

Zonghang Zhao, Shidan Wang, Minxin Chen, et al.

Smart Cities, Journal Year: 2025, Volume and Issue: 8(1), P. 12 - 12

Published: Jan. 16, 2025

Natural disasters create complex environments where effective human detection is both critical and challenging, especially when individuals are partially occluded. While recent advancements in computer vision have improved detection capabilities, there remains a significant need for efficient solutions that can enhance search-and-rescue (SAR) operations in resource-constrained disaster scenarios. This study modified the original DINO (Detection Transformer with Improved Denoising Anchor Boxes) model and introduced the visibility-enhanced DINO (VE-DINO) model, designed for robust human detection in occlusion-heavy environments, with potential integration into SAR systems. VE-DINO enhances detection accuracy by incorporating body-part keypoint information and employing a specialized loss function. The model was trained and validated using the COCO2017 dataset, and additional external testing was conducted on the Disaster Occlusion Detection Dataset (DODD), which we developed by meticulously compiling relevant images from existing public datasets to represent occlusion scenarios in disaster contexts. VE-DINO achieved an average precision of 0.615 at IoU 0.50:0.90 for all bounding boxes on the test set, outperforming the original DINO (0.491), and an average precision of 0.500 in external testing. An ablation study demonstrated the robustness of the model when subjects are confronted with varying degrees of occlusion. Furthermore, to illustrate its practicality, we present a case study demonstrating the usability of the model when integrated into an unmanned aerial vehicle (UAV)-based SAR system, showcasing its real-world applicability.
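
The abstract says VE-DINO folds body-part keypoint information into a specialized loss but gives no formula. One plausible construction, sketched below, down-weights the box loss of heavily occluded instances using COCO keypoint visibility flags; the weighting scheme and the `min_weight` floor are assumptions for illustration, not the paper's loss.

```python
import torch

def visibility_weights(kpt_vis, min_weight=0.2):
    # kpt_vis: (N, K) COCO flags — 0 = unlabeled, 1 = labeled but occluded, 2 = visible.
    # Heavily occluded instances still contribute to training, just with less force.
    visible_frac = (kpt_vis == 2).float().mean(dim=1)          # (N,)
    return min_weight + (1.0 - min_weight) * visible_frac

# Hypothetical usage: rescale an unreduced per-instance box loss.
kpt_vis = torch.tensor([[2, 2, 2, 0], [1, 0, 0, 0]])           # one clear, one occluded person
box_loss = torch.tensor([0.8, 1.4])                            # per-instance regression loss
loss = (visibility_weights(kpt_vis) * box_loss).mean()
```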

Language: English

Citations: 2

YOLOv7scb: A Small-Target Object Detection Method for Fire Smoke Inspection
Dan Shao, Yü Liu, Guoxing Liu, et al.

Fire, Journal Year: 2025, Volume and Issue: 8(2), P. 62 - 62

Published: Feb. 4, 2025

Fire detection presents considerable challenges due to the destructive and unpredictable characteristics of fires. These difficulties are amplified by the small size and low-resolution nature of fire and smoke targets in images captured from a distance, making it hard for models to extract relevant features. To address this, we introduce a novel method for small-target fire and smoke detection named YOLOv7scb. This approach incorporates two key improvements to the YOLOv7 framework: the use of space-to-depth convolution (SPD-Conv) and C3 modules, enhancing the model's ability to extract features effectively. Additionally, a weighted bidirectional feature pyramid network (BiFPN) is integrated into the feature-extraction network to merge features across scales efficiently without increasing model complexity. We also replace the conventional complete intersection over union (CIoU) loss function with Focal-CIoU, which reduces the loss function's degrees of freedom and improves robustness. Given the limited initial dataset, a transfer-learning strategy is applied during training. Experimental results demonstrate that our proposed model surpasses others in metrics such as precision and recall, notably achieving 98.8% for flame detection and 90.6% for smoke detection. These findings underscore the effectiveness of the model and its broad potential for fire detection and mitigation applications.
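
Unlike the paper-specific pieces, SPD-Conv itself is a published building block (a space-to-depth rearrangement followed by a non-strided convolution), so a compact PyTorch sketch can show the mechanics; the module name, channel sizes, and activation here are illustrative choices rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided conv (sketch of SPD-Conv).

    PixelUnshuffle folds each 2x2 spatial block into channels, so a stride-2
    downsample keeps the fine detail that small fire/smoke targets depend on.
    """
    def __init__(self, c_in, c_out, scale=2):
        super().__init__()
        self.spd = nn.PixelUnshuffle(scale)   # (C, H, W) -> (C*s*s, H/s, W/s)
        self.conv = nn.Conv2d(c_in * scale * scale, c_out, 3, stride=1, padding=1)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.conv(self.spd(x)))

print(SPDConv(64, 128)(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 40, 40])
```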

Language: English

Citations: 1

Lightweight Deep Learning Model for Fire Classification in Tunnels
Shakhnoza Muksimova, Sabina Umirzakova, Jushkin Baltayev, et al.

Fire, Journal Year: 2025, Volume and Issue: 8(3), P. 85 - 85

Published: Feb. 20, 2025

Tunnel fires pose a severe threat to human safety and infrastructure, necessitating the development of advanced and efficient fire detection systems. This paper presents a novel lightweight deep learning (DL) model specifically designed for real-time fire classification in tunnel environments. The model integrates MobileNetV3 for spatial feature extraction, Temporal Convolutional Networks (TCNs) for temporal sequence analysis, and attention mechanisms, including Convolutional Block Attention Modules (CBAMs) and Squeeze-and-Excitation (SE) blocks, to prioritize critical features such as flames and smoke patterns while suppressing irrelevant noise. The model is trained on a custom dataset containing real fire incidents and scenarios generated using a newly prepared dataset. This approach enhances generalization, enabling the model to handle diverse scenarios, including those with low visibility, high smoke density, and variable ventilation conditions. Deployment optimizations such as quantization and layer fusion ensure computational efficiency, achieving an average inference time of 12 ms/frame and making the model suitable for resource-constrained environments like IoT and edge devices. The experimental results demonstrate that the proposed model achieves an accuracy of 96.5%, a precision of 95.7%, and a recall of 97.2%, significantly outperforming state-of-the-art (SOTA) models such as ResNet50 and YOLOv5 in both accuracy and computational performance. Robustness tests under challenging conditions validate its reliability and adaptability, marking an advancement in tunnel fire detection. The study provides valuable insights into the design and deployment of DL systems for safety-critical applications, offers a scalable, high-performance solution for real-time tunnel monitoring, and establishes a benchmark for future research on video-based fire classification in complex environmental settings.
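
Among the listed components, the SE block is a standard design whose shape is unambiguous; a minimal PyTorch version follows for reference (the reduction ratio of 16 is the common default, not a value taken from the paper).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: gate each channel by pooled global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pool -> (B, C)
        return x * w.view(b, c, 1, 1)        # excite: per-channel reweighting
```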

Language: English

Citations: 1

Fire and Smoke Detection in Complex Environments
Furkat Safarov, Shakhnoza Muksimova, Misirov Kamoliddin, et al.

Fire, Journal Year: 2024, Volume and Issue: 7(11), P. 389 - 389

Published: Oct. 29, 2024

Fire detection is a critical task in environmental monitoring and disaster prevention, and traditional methods are often limited in their ability to detect fire and smoke in real time over large areas. The rapid identification of fire in both indoor and outdoor environments is essential for minimizing damage and ensuring timely intervention. In this paper, we propose a novel approach that integrates a vision transformer (ViT) with the YOLOv5s object detection model. Our modified model leverages the attention-based feature extraction capabilities of ViTs to improve detection accuracy, particularly in complex environments where fires may be occluded or distributed across large regions. By replacing the CSPDarknet53 backbone with a ViT, the model is able to capture both local and global dependencies in images, resulting in more accurate detection under challenging conditions. We evaluate the performance of the proposed model using a comprehensive Fire and Smoke Detection Dataset, which includes diverse real-world scenarios. The results demonstrate that our model outperforms baseline YOLOv5 variants in terms of precision, recall, and mean average precision (mAP), achieving an mAP@0.5 of 0.664 and a recall of 0.657. The ViT backbone shows significant improvements in detecting fire and smoke, particularly in scenes with complex backgrounds and objects at varying scales. These findings suggest that integrating a ViT as the backbone offers a promising direction for real-time fire detection in urban and natural environments.
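
The abstract states the backbone swap but not its wiring. The sketch below shows the generic mechanics with torchvision's ViT-B/16: run the patch embedding and encoder, drop the class token, and reshape the token sequence back into a stride-16 feature map a YOLO-style neck could consume. The fixed 224 × 224 input (torchvision's positional embeddings assume it) and all module names here are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class ViTBackbone(nn.Module):
    """Expose a plain ViT as a 2D feature extractor for a detection neck."""
    def __init__(self):
        super().__init__()
        vit = vit_b_16(weights=None)         # load pretrained weights in practice
        self.patch = vit.conv_proj           # 16x16 patch embedding
        self.encoder = vit.encoder           # adds positional embeddings internally
        self.cls_token = vit.class_token
        self.dim = vit.hidden_dim            # 768 for ViT-B/16

    def forward(self, x):                    # x: (B, 3, 224, 224)
        b, _, h, w = x.shape
        tokens = self.patch(x).flatten(2).transpose(1, 2)             # (B, 196, 768)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), tokens], dim=1)
        out = self.encoder(tokens)[:, 1:]                             # drop class token
        return out.transpose(1, 2).reshape(b, self.dim, h // 16, w // 16)

print(ViTBackbone()(torch.randn(1, 3, 224, 224)).shape)              # (1, 768, 14, 14)
```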

Language: English

Citations: 6

Advanced Object Detection for Maritime Fire Safety

Fazliddin Makhmudov, Sabina Umirzakova, Alpamis Kutlimuratov, et al.

Fire, Journal Year: 2024, Volume and Issue: 7(12), P. 430 - 430

Published: Nov. 25, 2024

In this study, we propose an advanced object detection model for fire and smoke detection in maritime environments, leveraging the DETR (Detection with Transformers) framework. To address the specific challenges of shipboard fire and smoke detection, such as varying lighting conditions, occlusions, and the complex structure of ships, we enhance the baseline model by integrating an EfficientNet-B0 backbone. This modification aims to improve detection accuracy while maintaining computational efficiency. We utilize a custom dataset of fire and smoke images captured from diverse maritime scenarios, incorporating a range of data augmentation techniques to increase model robustness. The proposed model is evaluated against YOLOv5 variants and shows significant improvements in Average Precision (AP), especially in detecting small and medium-sized objects. Our model achieves a superior AP score of 38.7 and outperforms alternative models across multiple IoU thresholds (AP50, AP75), particularly in scenarios requiring high precision for occluded objects. The experimental results highlight the model's efficacy in early fire and smoke detection, demonstrating its potential for deployment in real-time maritime safety monitoring systems. These findings provide a foundation for future research aimed at enhancing object detection in challenging maritime environments.
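
The abstract names the backbone swap without the wiring; one straightforward way to put EfficientNet-B0 under a DETR-style transformer is to take its stride-32 feature map and project it to the transformer's hidden size with a 1 × 1 conv, as DETR does with ResNet. The sketch below (torchvision, with a 256-dim projection matching DETR's default) is an assumed wiring, not the paper's.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class EffB0Backbone(nn.Module):
    """EfficientNet-B0 features projected to a DETR-style hidden dimension."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.body = efficientnet_b0(weights=None).features   # stride-32 map, 1280 channels
        self.proj = nn.Conv2d(1280, hidden_dim, kernel_size=1)

    def forward(self, x):
        # The output map would be flattened into tokens for the transformer encoder.
        return self.proj(self.body(x))

print(EffB0Backbone()(torch.randn(1, 3, 512, 512)).shape)    # (1, 256, 16, 16)
```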

Language: English

Citations: 3

Intelligent video-based fire detection: A novel dataset and real-time multi-stage classification approach
Himani Sharma, Navdeep Kanwal

Expert Systems with Applications, Journal Year: 2025, Volume and Issue: unknown, P. 126655 - 126655

Published: Jan. 1, 2025

Citations: 0

Intelligent Firefighting Technology for Drone Swarms with Multi-Sensor Integrated Path Planning: YOLOv8 Algorithm-Driven Fire Source Identification and Precision Deployment Strategy

Bingxin Yu, Shengze Yu, Yuandi Zhao, et al.

Drones, Journal Year: 2025, Volume and Issue: 9(5), P. 348 - 348

Published: May 3, 2025

This study aims to improve the accuracy of fire source detection, the efficiency of path planning, and the precision of firefighting operations by drone swarms during fire emergencies. It proposes an intelligent firefighting technology for drone swarms based on multi-sensor integrated path planning. The technology integrates the You Only Look Once version 8 (YOLOv8) algorithm and its optimization strategies to enhance real-time fire source detection capabilities. Additionally, it employs multi-sensor data fusion and swarm cooperative path-planning techniques to optimize the deployment of firefighting materials and flight paths, thereby improving firefighting precision. First, a deformable convolution module is introduced into the backbone network of YOLOv8, enabling the model to flexibly adjust its receptive field when processing fire targets and enhancing detection accuracy. Second, an attention mechanism is incorporated into the neck portion of YOLOv8 that focuses on fire-related feature regions, significantly reducing interference from background noise and further improving recognition in complex environments. Finally, a new High Intersection over Union (HIoU) loss function is proposed to address the challenge of computing localization and classification losses for fire targets; it dynamically adjusts the weight of each loss component during training, achieving more precise localization and classification. For sensing, visual, infrared, and LiDAR sensors are adopted, and the Information Acquisition Optimizer (IAO) and the Catch Fish Optimization Algorithm (CFOA) are used to plan flight paths for the coordinated swarm. By dynamically adjusting path planning and deployment locations, the drone swarm can reach fire sources in the shortest possible time and carry out firefighting operations. Experimental results demonstrate that the proposed technology improves fire detection accuracy by optimizing the YOLOv8 algorithm, the path-planning algorithms, and the deployment strategies. The optimized model achieved 94.6% accuracy in detecting small fires, with the false detection rate reduced to 5.4%. The wind speed compensation strategy effectively mitigated the impact of wind on material deployment. This study not only enhances fire detection accuracy but also enables rapid response in fire scenarios, offering broad application prospects, particularly in urban firefighting and forest fire disaster rescue.
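
Deformable convolution is a standard primitive available in torchvision, so the pattern the abstract alludes to can be sketched directly: a small conv predicts per-location sampling offsets that let the 3 × 3 kernel bend toward irregular flame shapes. Where such blocks sit in YOLOv8's backbone is not specified, so treat this as a generic illustration.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """3x3 deformable conv with offsets predicted from the input feature map."""
    def __init__(self, c_in, c_out):
        super().__init__()
        # Two offsets (dx, dy) per kernel position: 2 * 3 * 3 = 18 channels.
        self.offset = nn.Conv2d(c_in, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(c_in, c_out, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))

print(DeformableBlock(64, 64)(torch.randn(1, 64, 40, 40)).shape)  # (1, 64, 40, 40)
```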

Language: English

Citations: 0

Optimizing fire detection in remote sensing imagery for edge devices: A quantization-enhanced hybrid deep learning model

Syed Muhammad Salman Bukhari, Nadia Dahmani, Sujan Gyawali, et al.

Displays, Journal Year: 2025, Volume and Issue: unknown, P. 103070 - 103070

Published: May 1, 2025

Language: English

Citations: 0

FUR-DETR: A Lightweight Detection Model for Fixed-Wing UAV Recovery
Yu Yao, Jun Wu, Yongchang Hao, et al.

Drones, Journal Year: 2025, Volume and Issue: 9(5), P. 365 - 365

Published: May 13, 2025

Because traditional recovery systems lack visual perception, it is difficult to monitor UAVs' real-time status in communication-constrained or GPS-denied environments, which leads to insufficient decision-making and parameter-adjustment capability and increases the uncertainty and risk of recovery. Visual inspection technology can make up for the limitations of GPS and communication and improve the autonomy and adaptability of the recovery system. However, the existing RT-DETR algorithm is limited by single-path feature extraction, a simplified fusion mechanism, and high-frequency information loss, which make it difficult to balance detection accuracy and computational efficiency. Therefore, this paper proposes a lightweight detection model based on the transformer architecture to further optimize RT-DETR. First, to address the performance bottleneck of lightweight models, a Parallel Backbone is proposed, which captures local features and global semantics in a double-branch structure sharing an initial feature extraction module, and uses a progressive fusion mechanism to realize adaptive integration of multiscale features, balancing lightness with detection accuracy. Second, an adaptive multi-scale feature pyramid network (AMFPN) is designed, which effectively integrates features at different scales through multi-level transmission, alleviates the information loss that hampers small-target detection, and improves detection in complex backgrounds. Finally, a wavelet frequency-domain-optimized reverse module (WT-FORM) is proposed: by using the wavelet transform to decompose shallow features into multi-frequency bands and combining weighted calculation with a compensation strategy, it reduces computational complexity and enhances the representation of context. The experimental results show that the improved model reduces the parameter size and computational load by 43.2% and 58%, respectively, while maintaining detection accuracy comparable to the original model on three datasets. Even in environments with low light, occlusion, or small targets, it can provide more accurate detection results.
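
WT-FORM's internals are only described at a high level (decompose shallow features into frequency bands, then recombine with weighted compensation). The decomposition step itself is standard and can be sketched with PyWavelets on a single-channel map; the Haar wavelet and the 0.5 band weights are placeholder assumptions, not the paper's choices.

```python
import numpy as np
import pywt

# A Haar DWT splits a feature map into a low-frequency approximation and three
# high-frequency detail bands; the detail bands can be re-weighted and folded
# back in as a cheap high-frequency compensation.
fmap = np.random.rand(80, 80).astype(np.float32)
ll, (lh, hl, hh) = pywt.dwt2(fmap, "haar")
print(ll.shape)                                   # (40, 40) low-frequency band
recon = pywt.idwt2((ll, (0.5 * lh, 0.5 * hl, 0.5 * hh)), "haar")
print(recon.shape)                                # (80, 80) recombined map
```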

Language: English

Citations: 0