A generative AI model for forest fire prediction and detection

M. Nallusamy,

S. Karthick,

B. Vetrivel

et al.

i-manager's Journal on Data Science & Big Data Analytics (JDS), Journal year: 2024, No. 2(2), pp. 40–40

Published: Jan. 1, 2024

Forest fires pose significant threats to forest ecosystems, impacting the humans, animals, and plants reliant on these environments. Traditional detection methods rely on handcrafted features such as color, motion, and texture, yet achieving high accuracy remains challenging. This study introduces a novel lightweight fire detection method employing Deep Convolutional Neural Networks (DCNN) that considers temporal aspects for enhanced accuracy. By leveraging DCNN, the approach aims to improve detection capabilities and mitigate the devastating effects of wildfires on both natural habitats and communities. It represents a promising advancement in the field, offering a potential solution to the ongoing challenge of timely and accurate fire detection.
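
The abstract above only outlines the approach; as a rough illustration of the idea (not the authors' model), the sketch below pairs a small convolutional frame classifier with a simple majority vote over recent frames, one common way to bring temporal context into per-frame fire predictions. All class names and layer sizes are illustrative.

```python
# Hedged sketch: lightweight per-frame fire classifier plus temporal smoothing.
import collections
import torch.nn as nn

class TinyFireCNN(nn.Module):
    """Small CNN that classifies a single RGB frame as fire / no-fire."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class TemporalVote:
    """Keep the last k frame-level predictions and report the majority label."""
    def __init__(self, k=5):
        self.buf = collections.deque(maxlen=k)

    def update(self, label):
        self.buf.append(label)
        return max(set(self.buf), key=list(self.buf).count)
```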

Language: English

An Anomaly Detection Method for UAV Based on Wavelet Decomposition and Stacked Denoising Autoencoder
Shenghan Zhou, Zhao He, Xu Chen

et al.

Aerospace, Journal year: 2024, No. 11(5), pp. 393–393

Published: May 14, 2024

The paper proposes an anomaly detection method for UAVs based on wavelet decomposition and a stacked denoising autoencoder. The method takes into account the negative impact of noisy data on the feature extraction capabilities of deep learning models and aims to improve the detection accuracy achieved by autoencoder-based methods. Anomaly detection in UAV flight data is an important part of condition monitoring and the mining of potential abnormal states, and it helps reduce the risk of flight accidents. However, the diversity of mission scenarios leads to complex and harsh operating environments, so the acquired data are affected by noise, which brings challenges to accurate anomaly detection. Firstly, wavelet decomposition is used to denoise the original data; then, a stacked denoising autoencoder is used to achieve feature extraction. Finally, a softmax classifier is used to realize anomaly detection for the UAV. The experimental results demonstrate that the method still performs well in the presence of noise: Accuracy reaches 97.53%, Precision 97.50%, Recall 91.81%, and the F1-score 94.57%. Furthermore, the method outperforms the four comparison methods with more outstanding performance. Therefore, it is significant for reducing UAV flight accidents and enhancing operational safety.
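
A minimal sketch of the pipeline described above, assuming one-dimensional sensor windows, PyWavelets for the wavelet step, and PyTorch for the stacked denoising autoencoder; it illustrates the general idea rather than the authors' implementation, and the layer sizes are arbitrary.

```python
# Hedged sketch: wavelet denoising -> stacked denoising autoencoder -> softmax head.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest band
    thr = sigma * np.sqrt(2 * np.log(len(signal)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

class SDAE(nn.Module):
    """Stacked denoising autoencoder with a softmax (logit) head for anomaly classes."""
    def __init__(self, in_dim, hidden=(128, 64, 32), n_classes=2, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        dims = (in_dim, *hidden)
        enc = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        dec = []
        for d_in, d_out in zip(dims[::-1][:-1], dims[::-1][1:]):
            dec += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.decoder = nn.Sequential(*dec[:-1])              # no ReLU on the output layer
        self.classifier = nn.Linear(hidden[-1], n_classes)

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)  # denoising objective
        z = self.encoder(corrupted)
        # training would combine MSE(reconstruction, x) with cross-entropy on the logits
        return self.decoder(z), self.classifier(z)
```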

Language: English

Cited by

14

Visual fire detection using deep learning: A survey
Guangtao Cheng, Xue Chen, Chenyi Wang

et al.

Neurocomputing, Journal year: 2024, No. 596, pp. 127975–127975

Published: June 1, 2024

Language: English

Cited by

11

Real-Time Fire Detection: Integrating Lightweight Deep Learning Models on Drones with Edge Computing
Md Fahim Shahoriar Titu,

Mahir Afser Pavel,

Michael Kah Ong Goh

et al.

Drones, Journal year: 2024, No. 8(9), pp. 483–483

Published: Sep. 13, 2024

Fire accidents are life-threatening catastrophes leading to loss of life, financial damage, climate change, and ecological destruction. Promptly and efficiently detecting and extinguishing fires is essential to reduce the loss of lives and damage. This study uses drone, edge computing, and artificial intelligence (AI) techniques, presenting novel methods for real-time fire detection. The proposed work utilizes a comprehensive dataset of 7187 images and advanced deep learning models, e.g., the Detection Transformer (DETR), Detectron2, You Only Look Once (YOLOv8), and Autodistill-based knowledge distillation techniques to improve model performance. The distillation approach has been implemented with YOLOv8m (medium) as the teacher (base) model, and the distilled (student) frameworks are developed employing YOLOv8n (Nano) and DETR techniques. The best-performing model attains 95.21% detection accuracy and a 0.985 F1 score. A powerful hardware setup, including a Raspberry Pi 5 microcontroller, a Camera Module 3, and a custom-built DJI F450 drone, is constructed. The model deployed in this setup achieves 89.23% accuracy at an approximate frame rate of 8 fps for real-time fire identification in the conducted live experiments. Integrating drones and edge devices demonstrates the system's effectiveness and its potential for practical applications in fire hazard mitigation.
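
A hedged sketch of the teacher-student idea (not the paper's exact pipeline): a larger YOLOv8 model auto-labels unlabeled imagery and a lightweight YOLOv8n student is trained on the result, assuming an Ultralytics installation; the checkpoint, folder, and dataset-file names are hypothetical.

```python
# Hedged sketch: distillation via teacher auto-labeling with Ultralytics YOLOv8.
from ultralytics import YOLO

teacher = YOLO("yolov8m_fire.pt")          # fine-tuned teacher (assumed checkpoint)
# Teacher pseudo-labels the unlabeled pool; the saved labels form the student's dataset.
teacher.predict(source="unlabeled_fire_images/", save_txt=True, conf=0.5)

student = YOLO("yolov8n.pt")               # lightweight student for edge deployment
student.train(data="distilled_fire.yaml", epochs=100, imgsz=640)
student.export(format="onnx")              # export for Raspberry Pi-class devices
```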

Language: English

Cited by

11

Real-Time Detection of Smoke and Fire in the Wild Using Unmanned Aerial Vehicle Remote Sensing Imagery
Xijian Fan,

Fan Lei,

Kun Yang

et al.

Forests, Journal year: 2025, No. 16(2), pp. 201–201

Published: Jan. 22, 2025

Detecting wildfires and smoke is essential for safeguarding forest ecosystems and offers critical information for the early evaluation and prevention of such incidents. The advancement of unmanned aerial vehicle (UAV) remote sensing has further enhanced the detection of fire and smoke, enabling rapid and accurate identification. This paper presents an integrated one-stage object detection framework designed for the simultaneous identification of fire and smoke in UAV imagery. By leveraging mixed data augmentation techniques, the framework enriches the dataset with small targets to enhance its detection performance on small targets. A novel backbone enhancement strategy, integrating region convolution and feature refinement modules, is developed to facilitate the model's ability to localize smoke features of high transparency within complex backgrounds. A shape-aware loss function is proposed to effectively capture irregularly shaped fire and smoke edges, facilitating their accurate localization. Experiments conducted on a UAV imagery dataset demonstrate that the framework achieves promising performance in terms of both accuracy and speed: it attains a mean Average Precision (mAP) of 79.28%, an F1 score of 76.14%, and a processing speed of 8.98 frames per second (FPS), reflecting increases of 4.27%, 1.96%, and 0.16 FPS compared with the YOLOv10 baseline model. Ablation studies validate that the incorporation of the mixed data augmentation, the backbone enhancement modules, and the shape-aware loss yields substantial improvements over the baseline. These findings highlight the framework's capability to rapidly and effectively identify fire and smoke using UAV imagery, thereby providing a valuable foundation for proactive fire prevention measures.
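
As one generic example of the mixed data augmentation mentioned above (not the paper's specific scheme), the snippet below applies mixup-style blending to a training batch; the classification-style formulation shown here returns both label sets with the mixing weight, while for detection the box lists of both blended images are typically kept.

```python
# Hedged sketch: mixup-style blending of two training images in a batch.
import numpy as np
import torch

def mixup(images, labels, alpha=0.2):
    """Blend each image/label with a randomly permuted partner from the batch."""
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[idx]
    return mixed_images, labels, labels[idx], lam  # train against both label sets
```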

Language: English

Cited by

0

An Improved Unmanned Aerial Vehicle Forest Fire Detection Model Based on YOLOv8
Bensheng Yun, Xiaohan Xu, Jie Zeng

et al.

Fire, Journal year: 2025, No. 8(4), pp. 138–138

Published: Mar. 31, 2025

Forest fires have a great destructive impact on the Earth's ecosystem; therefore, a top priority of current research is how to monitor forest fires accurately and quickly. Taking efficiency and cost-effectiveness into account, deep-learning-driven UAV remote sensing fire detection algorithms have emerged as a favored trend and seen extensive application. However, in the process of drone monitoring, fires often appear very small and are easily obstructed by trees, which greatly limits the amount of effective information the model can extract. Meanwhile, considering the limitations of unmanned aerial vehicles, the algorithm also needs lightweight characteristics. To address challenges such as small targets, occlusions, and image blurriness in UAV-captured wildfire images, this paper proposes an improved model based on YOLOv8. Firstly, we incorporate SPDConv modules, enhancing the YOLOv8 architecture and boosting its efficacy in dealing with minor objects and images of low resolution. Secondly, we introduce the C2f-PConv module, which effectively improves computational efficiency by reducing redundant calculations and memory access. Thirdly, the model boosts classification precision through the integration of a Mixed Local Channel Attention (MLCA) strategy preceding the three outputs. Finally, the W-IoU loss function is utilized, which adaptively modifies the weights of different target boxes within the loss computation, efficiently addressing the difficulties associated with detecting small targets. The experimental results show that the accuracy of our model increased by 2.17%, recall by 5.5%, and mAP@0.5 by 1.9%. In addition, the number of parameters decreased by 43.8%, to only 5.96M parameters, while the model size and GFLOPs decreased by 43.3% and 36.7%, respectively. Our model not only reduces complexity but also exhibits superior effectiveness in fire recognition tasks, thereby offering a robust and reliable solution for forest fire monitoring.
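
The SPDConv idea referenced above is, in the space-to-depth formulation commonly used in the literature, a downsampling step that moves spatial detail into channels instead of discarding it, which is why it helps with small targets; the sketch below is a generic PyTorch rendering under that assumption, not the paper's exact module.

```python
# Hedged sketch: space-to-depth convolution (SPD-Conv style) for lossless downsampling.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth followed by a non-strided conv: halves resolution,
    quadruples channels first, so fine detail is preserved for small objects."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4 * c_in, c_out, 3, 1, 1, bias=False),
            nn.BatchNorm2d(c_out), nn.SiLU())

    def forward(self, x):
        # gather every 2x2 spatial neighbourhood into the channel dimension
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)
```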

Language: English

Cited by

0

Fusion of Deep Features of Wavelet Transform for Wildfire Detection
Akbar Asgharzadeh-Bonab,

Salar Ghamati,

Farid Ahmadi

et al.

Advances in Multimedia, Journal year: 2025, No. 2025(1)

Published: Jan. 1, 2025

Forests uniquely deliver different vital resources, particularly oxygen and carbon dioxide purification. Wildfire is the leading cause of deforestation, where massive forest areas are lost annually due to the failure to identify and predict fires. Accordingly, early detection of wildfires is crucial to inform operational firefighting teams and prevent fires from advancing. This study analyzes images taken by unmanned aerial vehicles for wildfire detection. For this purpose, a two-dimensional discrete wavelet transform was first performed on the images. Next, owing to its superior ability, a convolutional neural network was utilized to extract deep features from the sub-bands. Then, the features obtained from each sub-band were merged to create the final feature vector. Afterward, multidimensional scaling was employed to reduce the extracted non-useful features. Ultimately, the presence or absence of fire and its locations in the images were detected using proper classifiers. The proposed method reaches an accuracy and F1 score of 0.9684 and 0.9672, respectively, on the FLAME dataset, indicating its efficiency in detecting wildfires and their locations. Thus, it can significantly contribute to on-time and prompt firefighting operations and prevent extensive damage to forests.
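
A compact sketch of the described flow (DWT, per-sub-band deep features, fusion, MDS, classifier), under the assumption that frames are grayscale arrays, a pretrained ResNet-18 stands in for the CNN feature extractor, and scikit-learn provides MDS and the final classifier; it illustrates the pipeline rather than reproducing the authors' exact models.

```python
# Hedged sketch: 2-D DWT -> CNN features per sub-band -> fusion -> MDS -> SVM.
import numpy as np
import pywt
import torch
from torchvision.models import resnet18
from sklearn.manifold import MDS
from sklearn.svm import SVC

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()            # use the 512-d penultimate features
backbone.eval()

def subband_features(gray_img):
    """DWT into LL/LH/HL/HH, then deep features of each sub-band, concatenated."""
    ll, (lh, hl, hh) = pywt.dwt2(gray_img, "haar")
    feats = []
    with torch.no_grad():
        for band in (ll, lh, hl, hh):
            t = torch.tensor(band, dtype=torch.float32)[None, None]
            t = t.repeat(1, 3, 1, 1)          # backbone expects 3 channels
            feats.append(backbone(t).squeeze(0).numpy())
    return np.concatenate(feats)

# Assumed inputs: X_imgs is a list of grayscale frames, y the fire / no-fire labels.
# X = np.stack([subband_features(img) for img in X_imgs])
# X_low = MDS(n_components=32).fit_transform(X)   # drop non-useful feature dimensions
# clf = SVC().fit(X_low, y)
```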

Language: English

Cited by

0

Intelligent Firefighting Technology for Drone Swarms with Multi-Sensor Integrated Path Planning: YOLOv8 Algorithm-Driven Fire Source Identification and Precision Deployment Strategy

Bingxin Yu,

Shengze Yu,

Yuandi Zhao

et al.

Drones, Journal year: 2025, No. 9(5), pp. 348–348

Published: May 3, 2025

This study aims to improve the accuracy of fire source detection, the efficiency of path planning, and the precision of firefighting operations in drone swarms during emergencies. It proposes an intelligent firefighting technology for drone swarms based on multi-sensor integrated path planning. The technology integrates the You Only Look Once version 8 (YOLOv8) algorithm and its optimization strategies to enhance real-time fire source detection capabilities. Additionally, the study employs multi-sensor data fusion and swarm cooperative path-planning techniques to optimize the deployment of firefighting materials and flight paths, thereby improving firefighting precision. First, a deformable convolution module is introduced into the backbone network of YOLOv8 to enable the model to flexibly adjust its receptive field when processing fire targets, enhancing detection accuracy. Second, an attention mechanism is incorporated into the neck portion of YOLOv8, which focuses on fire feature regions, significantly reducing interference from background noise and further improving recognition in complex environments. Finally, a new High Intersection over Union (HIoU) loss function is proposed to address the challenge of computing localization and classification losses for fire targets. It dynamically adjusts the weights of the various loss components during training, achieving more precise localization and classification. In terms of sensing, visual, infrared, and LiDAR sensors are adopted, and the Information Acquisition Optimizer (IAO) and Catch Fish Optimization Algorithm (CFOA) are used to plan paths for the coordinated swarms. By adjusting path planning and deployment locations, the drones can reach fire sources in the shortest possible time and carry out precise firefighting operations. Experimental results demonstrate that the study improves fire detection by optimizing the YOLOv8 algorithm, the path-planning algorithms, and the deployment strategies. The optimized model achieved 94.6% accuracy for small fires, with the false detection rate reduced to 5.4%. The wind speed compensation strategy effectively mitigated the impact of wind on material deployment. This approach not only enhances firefighting precision but also enables rapid response in complex scenarios, offering broad application prospects, particularly in urban and forest fire disaster rescue.
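
As a small illustration of the deformable convolution step mentioned for the YOLOv8 backbone (the HIoU loss and the IAO/CFOA planners are paper-specific and not sketched here), the block below uses torchvision's DeformConv2d with a per-pixel offset predictor; the layer sizes are arbitrary.

```python
# Hedged sketch: deformable convolution block with a learned per-pixel sampling grid.
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Conv block whose sampling offsets are predicted from the input itself,
    letting the receptive field adapt to irregular fire shapes."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel element, per output location
        self.offset = nn.Conv2d(c_in, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(c_in, c_out, kernel_size=k, padding=k // 2)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))
```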

Language: English

Cited by

0

Enhancing Road Safety: Real-Time Distracted Driver Detection Using Nvidia Jetson Nano and YOLOv8
Osamah N. Neamah, Tarik Adnan Almohamad, Raif Bayır

et al.

Published: May 22, 2024

Language: English

Cited by

3

FGYOLO: An Integrated Feature Enhancement Lightweight Unmanned Aerial Vehicle Forest Fire Detection Framework Based on YOLOv8n

Yangyang Zheng,

Fazhan Tao, Zhengyang Gao

et al.

Forests, Journal year: 2024, No. 15(10), pp. 1823–1823

Published: Oct. 18, 2024

To address the challenges of complex backgrounds and small, easily confused fire and smoke targets in Unmanned Aerial Vehicle (UAV)-based forest fire detection, we propose an improved detection algorithm based on YOLOv8. Considering the limited computational resources of UAVs and the lightweight property of YOLOv8n, the original YOLOv8n model is improved: the Bottleneck module is reconstructed using Group Shuffle Convolution (GSConv) within a residual structure, thereby enhancing the model's capability while reducing network parameters. The GBFPN is proposed to optimize the feature fusion method of the neck layer, enabling more effective extraction of pyrotechnic features. Recognizing the difficulty of capturing prominent fire and smoke characteristics in a complex, tree-heavy environment, we implement the BiFormer attention mechanism to boost the model's ability to acquire multi-scale properties while retaining fine-grained detail. Additionally, the Inner-MPDIoU loss function replaces the CIoU loss function, improving the capacity for detecting small targets. The experimental results on the customized G-Fire dataset reveal that FGYOLO achieves a 3.3% improvement in mean Average Precision (mAP), reaching 98.8%, while reducing the number of parameters by 26.4% compared with YOLOv8n.
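
A rough sketch of a GSConv-style block as it is usually described (a dense half, a cheap depthwise half, and a channel shuffle), assuming PyTorch; it is meant to convey the parameter-saving idea rather than match FGYOLO's exact layer configuration.

```python
# Hedged sketch: Group Shuffle Convolution (GSConv-style) block.
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """Half the output channels come from a dense conv, half from a cheap
    depthwise conv on that result; a channel shuffle then mixes both branches.
    Assumes an even number of output channels."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        y1 = self.dense(x)
        y2 = self.cheap(y1)
        y = torch.cat([y1, y2], dim=1)
        b, c, h, w = y.shape
        # channel shuffle: interleave the dense and cheap halves
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)
```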

Language: English

Cited by

3

A Lightweight Neural Network for the Real-Time Dehazing of Tidal Flat UAV Images Using a Contrastive Learning Strategy

Denghao Yang,

Zhiyu Zhu,

Huilin Ge

et al.

Drones, Journal year: 2024, No. 8(7), pp. 314–314

Published: July 10, 2024

In the maritime environment, particularly within tidal flats, the frequent occurrence of sea fog significantly impairs the quality of images captured by unmanned aerial vehicles (UAVs). This degradation manifests as a loss of detail, diminished contrast, and altered color profiles, which directly impact the accuracy and effectiveness of monitoring data and result in delays in the execution and response speed of monitoring tasks. Traditional physics-based dehazing algorithms have limitations in terms of detail recovery and color restoration, while neural network algorithms are limited in their real-time application on devices with constrained resources due to model size. To address the above challenges, in the following study, an advanced dehazing algorithm specifically designed for UAVs over tidal flats is introduced. The algorithm integrates dense convolutional blocks to enhance feature propagation while reducing the number of parameters, thereby improving the timeliness of the dehazing process. Additionally, an attention mechanism is introduced to assign variable weights to individual channels and pixels, enhancing the network's ability to perform detail processing. Furthermore, inspired by contrastive learning, the algorithm employs a hybrid loss function that combines mean squared error with contrastive regularization. This loss plays a crucial role in enhancing the contrast and saturation of the dehazed images. Our experimental results indicate that, compared with existing methods, the proposed network has a parameter size of only 0.005 M and a latency of 0.523 ms. When applied to a real tidal flat image dataset, the algorithm achieved a peak signal-to-noise ratio (PSNR) improvement of 2.75 and a mean squared error (MSE) reduction of 9.72. In the qualitative analysis, the algorithm generated high-quality dehazed results, characterized by natural color enhancement and improved contrast. These findings confirm that the proposed algorithm performs exceptionally well in removing sea fog from UAV-captured images, enabling effective and timely monitoring in these environments.
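
A hedged sketch of a hybrid objective in the spirit described above: pixel-wise MSE plus a contrastive regularizer computed in frozen VGG-16 feature space, pulling the dehazed output towards the clear reference and away from the hazy input. This is a common formulation of contrastive regularization, not necessarily the paper's exact loss or weighting.

```python
# Hedged sketch: MSE + contrastive regularization for image dehazing.
import torch.nn as nn
from torchvision.models import vgg16

class HybridDehazeLoss(nn.Module):
    def __init__(self, contrast_weight=0.1):
        super().__init__()
        vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()  # frozen feature extractor
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.w = contrast_weight
        self.mse = nn.MSELoss()
        self.l1 = nn.L1Loss()

    def forward(self, dehazed, clear, hazy):
        pixel = self.mse(dehazed, clear)
        f_out, f_pos, f_neg = self.vgg(dehazed), self.vgg(clear), self.vgg(hazy)
        # close to the clear anchor, far from the hazy negative, in feature space
        contrast = self.l1(f_out, f_pos) / (self.l1(f_out, f_neg) + 1e-7)
        return pixel + self.w * contrast
```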

Language: English

Cited by

2