WeedAI-UAV: Weed detection using YOLOv8 and geolocation for treatment with UAVs
Paula Catala-Roman, Esther Dura, Miguel García et al.

Published: July 3, 2024

Language: English

Image acquisition and processing techniques
H. Mann, M. Naeimi, Vishvam Porwal et al.

Elsevier eBooks, Journal Year: 2025, Volume and Issue: unknown, P. 181 - 201

Published: Jan. 1, 2025

Citations: 0

Intelligent weed management using aerial image processing and precision herbicide spraying: An overview
Armin Ehrampoosh, Pushpika Hettiarachchi, Anand Koirala et al.

Crop Protection, Journal Year: 2025, Volume and Issue: unknown, P. 107206 - 107206

Published: March 1, 2025

Language: English

Citations: 0

MSEA-Net: Multi-Scale and Edge-Aware Network for Weed Segmentation
Azhar Hussain Quadri Syed, Baifan Chen, Adeel Abbasi et al.

AgriEngineering, Journal Year: 2025, Volume and Issue: 7(4), P. 103 - 103

Published: April 3, 2025

Accurate weed segmentation in Unmanned Aerial Vehicle (UAV) imagery remains a significant challenge in precision agriculture due to environmental variability, weak contextual representation, and inaccurate boundary detection. To address these limitations, we propose the Multi-Scale Edge-Aware Network (MSEA-Net), a lightweight and efficient deep learning framework designed to enhance segmentation accuracy while maintaining computational efficiency. Specifically, we introduce the Multi-Scale Spatial-Channel Attention (MSCA) module to recalibrate spatial and channel dependencies, improving local–global feature fusion and reducing redundant computations. Additionally, the Edge-Enhanced Bottleneck Attention (EEBA) module integrates Sobel-based edge detection to refine boundary delineation, ensuring sharper object separation in dense vegetation environments. Extensive evaluations on publicly available datasets demonstrate the effectiveness of MSEA-Net, which achieves a mean Intersection over Union (IoU) of 87.42% on the Motion-Blurred UAV Images of Sorghum Fields dataset and 71.35% on the CoFly-WeedDB dataset, outperforming benchmark models. MSEA-Net also maintains a compact architecture with only 6.74 M parameters and a model size of 25.74 MB, making it suitable for real-time UAV-based segmentation. These results highlight its potential for accurate and efficient automated weed segmentation in UAV deployments.

Language: English

Citations: 0
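
To make the edge-aware idea in the MSEA-Net abstract above more concrete, the following minimal PyTorch sketch shows a fixed-kernel Sobel edge extractor, the kind of operator an Edge-Enhanced Bottleneck module could build on. It is an illustrative assumption, not the authors' EEBA implementation; the tensor sizes and layer name are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdges(nn.Module):
    """Per-channel Sobel gradient magnitude via a fixed depthwise convolution."""
    def __init__(self, channels: int):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # Two non-trainable 3x3 kernels (x and y gradients) per input channel.
        kernel = torch.stack([gx, gy]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("kernel", kernel)   # shape: (2*C, 1, 3, 3)
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        gx, gy = g[:, 0::2], g[:, 1::2]          # split x/y gradients per channel
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

features = torch.randn(1, 16, 64, 64)   # stand-in for a bottleneck feature map
edges = SobelEdges(16)(features)         # same shape, edge magnitudes
print(edges.shape)                       # torch.Size([1, 16, 64, 64])
```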

Weed Instance Segmentation from UAV Orthomosaic Images based on Deep Learning
Chenghao Lu, Klaus Gehring, Stefan Kopfinger et al.

Smart Agricultural Technology, Journal Year: 2025, Volume and Issue: unknown, P. 100966 - 100966

Published: April 1, 2025

Language: English

Citations: 0

Detection and Multi-Class Classification of Invasive Knotweeds with Drones and Deep Learning Models
Sruthi Keerthi Valicharla, Roghaiyeh Karimzadeh, Kushal Naharki et al.

Drones, Journal Year: 2024, Volume and Issue: 8(7), P. 293 - 293

Published: June 28, 2024

Invasive knotweeds are rhizomatous, herbaceous perennial plants that pose significant ecological threats due to their aggressive growth and ability to outcompete native plants. Although detecting and identifying knotweeds is crucial for effective management, current ground-based survey methods are labor-intensive and limited in their ability to cover large or hard-to-access areas. This study was conducted to determine the optimum flight height of drones for aerial detection of knotweeds at different phenological stages and to develop automated detection on drone-acquired images using the state-of-the-art Swin Transformer. The results of this study found that, at the vegetative stage, Japanese knotweed and giant knotweed were detectable at ≤35 m and ≤25 m, respectively, above the canopy using an RGB sensor. The flowers were detectable at ≤20 m. Thermal and multispectral sensors were not able to detect any of the species. The Swin Transformer achieved higher precision, recall, and accuracy on the acquired images than conventional convolutional neural networks (CNNs). This study demonstrated the use of drones, sensors, and deep learning for revolutionizing invasive knotweed detection.

Language: English

Citations: 2
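
As a rough illustration of the multi-class classification setup described in the knotweed study above, the sketch below fine-tunes a pretrained Swin Transformer from torchvision on drone image tiles. The number of classes, tile size, and single training step are placeholder assumptions, not the study's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # hypothetical: Japanese knotweed, giant knotweed, background

# Pretrained Swin-T backbone with its classification head replaced.
model = models.swin_t(weights=models.Swin_T_Weights.DEFAULT)
model.head = nn.Linear(model.head.in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB tiles.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```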

TCSNet: A New Individual Tree Crown Segmentation Network from Unmanned Aerial Vehicle Images
Yue Chi, Chenxi Wang, Zhulin Chen et al.

Forests, Journal Year: 2024, Volume and Issue: 15(10), P. 1814 - 1814

Published: Oct. 17, 2024

As the main area for photosynthesis in trees, the canopy absorbs a large amount of carbon dioxide and plays an irreplaceable role in regulating the carbon cycle of the atmosphere and mitigating climate change. Therefore, monitoring canopy growth is crucial. However, traditional field investigation methods are often limited by time-consuming and labor-intensive procedures, as well as limited coverage, which may result in incomplete or inaccurate assessments. In response to challenges encountered in the application of tree crown segmentation algorithms, such as adhesion between individual crowns and insufficient generalization ability of the algorithm, this study proposes an improved algorithm based on Mask R-CNN (Mask Region-based Convolutional Neural Network) that identifies irregular crown edges in RGB images obtained from drones. Firstly, it optimizes the backbone network by improving ResNeXt and embedding the SENet (Squeeze-and-Excitation Networks) module to enhance the model’s feature extraction capability. Secondly, a BiFPN-CBAM module is introduced to enable the model to learn and utilize features more effectively. Finally, the Boundary-Dice mask loss function is adopted to further improve the segmentation effect. In this study, TCSNet also incorporated the concept of panoptic segmentation, achieving coherent and consistent segmentation throughout the entire scene through fine boundary recognition and integration. TCSNet was tested on three datasets with different geographical environments and forest types, namely artificial, natural, and urban forests. On the best-performing dataset, precision increased by 6.6%, recall by 1.8%, and F1-score by 4.2% compared with the original algorithm, highlighting its potential and robustness in tree crown detection and segmentation.

Language: English

Citations: 2
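
The TCSNet abstract above embeds SENet modules in the Mask R-CNN backbone; the following minimal sketch shows a generic Squeeze-and-Excitation block in PyTorch to illustrate that channel-attention mechanism. It is a textbook implementation under assumed sizes, not the paper's code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: per-channel average
        self.fc = nn.Sequential(                 # excitation: channel weights in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # recalibrated feature map

feat = torch.randn(2, 256, 32, 32)   # stand-in for a backbone feature map
out = SEBlock(256)(feat)              # same shape, channel-reweighted
print(out.shape)                      # torch.Size([2, 256, 32, 32])
```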

Morning Glory Flower Detection in Aerial Images Using Semi-Supervised Segmentation with Gaussian Mixture Models
Sruthi Keerthi Valicharla, Jinge Wang, Xin Li et al.

AgriEngineering, Journal Year: 2024, Volume and Issue: 6(1), P. 555 - 573

Published: March 1, 2024

The invasive morning glory, Ipomoea purpurea (Convolvulaceae), poses a mounting challenge in vineyards by hindering grape harvest and serving as a secondary host of disease pathogens, necessitating advanced detection and control strategies. This study introduces a novel automated image analysis framework using aerial images obtained from a small fixed-wing unmanned aircraft system (UAS) with an RGB camera for the large-scale detection of I. purpurea flowers. The study aimed to assess the sampling fidelity in comparison with the actual infestation measured in ground validation surveys. The UAS was systematically operated over 16 vineyard plots infested with I. purpurea and others without infestation. We used a semi-supervised segmentation model incorporating a Gaussian Mixture Model (GMM) with the Expectation-Maximization algorithm to detect and count flowers, and the flower detectability of the GMM was compared with that of conventional K-means methods. The results of this study showed that the GMM detected the presence of flowers in all infested plots with 0% type I and type II errors, while the K-means method had error rates of up to 6.3%. The GMM and K-means methods detected 76% and 65% of the flowers, respectively. These results underscore the effectiveness of the GMM-based approach in accurately detecting and quantifying I. purpurea flowers. The study demonstrated the efficiency of UAS imagery coupled with semi-supervised segmentation in vineyards, achieving success without relying on data-driven deep-learning models.

Language: English

Citations: 1
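
The morning glory study above detects flowers with a Gaussian Mixture Model fitted by Expectation-Maximization and compares it against K-means. The scikit-learn sketch below illustrates that comparison on a synthetic RGB tile; the reference flower color, number of components, and component-selection rule are assumptions made for the example, not the study's parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))            # stand-in for an RGB UAS tile
pixels = image.reshape(-1, 3)

flower_rgb = np.array([0.55, 0.35, 0.75])    # hypothetical morning-glory purple

# GMM fitted by EM; keep pixels assigned to the component whose mean color
# is closest to the reference flower color.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(pixels)
flower_comp = np.argmin(np.linalg.norm(gmm.means_ - flower_rgb, axis=1))
gmm_mask = (gmm.predict(pixels) == flower_comp).reshape(image.shape[:2])

# Same selection rule with plain K-means for comparison.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
km_comp = np.argmin(np.linalg.norm(km.cluster_centers_ - flower_rgb, axis=1))
km_mask = (km.labels_ == km_comp).reshape(image.shape[:2])

print("GMM flower pixels:", int(gmm_mask.sum()),
      "| K-means flower pixels:", int(km_mask.sum()))
```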

Weed detection in precision agriculture: leveraging encoder-decoder models for semantic segmentation
S. Thiagarajan, A Vijayalakshmi, G. Hannah Grace et al.

Journal of Ambient Intelligence and Humanized Computing, Journal Year: 2024, Volume and Issue: 15(9), P. 3547 - 3561

Published: July 12, 2024

Language: English

Citations: 1

Comparative approach on crop detection using machine learning and deep learning techniques
V. Nithya, M. S. Josephine, V. Jeyabalaraja et al.

International Journal of Systems Assurance Engineering and Management, Journal Year: 2024, Volume and Issue: 15(9), P. 4636 - 4648

Published: Aug. 23, 2024

Language: English

Citations: 1

Advanced Image Preprocessing and Integrated Modeling for UAV Plant Image Classification
Girma Tariku, Isabella Ghiglieno, Anna Simonetto et al.

Drones, Journal Year: 2024, Volume and Issue: 8(11), P. 645 - 645

Published: Nov. 6, 2024

The automatic identification of plant species using unmanned aerial vehicles (UAVs) is a valuable tool for ecological research. However, challenges such as reduced spatial resolution due to high-altitude operations, image degradation from camera optics and sensor limitations, and information loss caused by terrain shadows hinder the accurate classification of plant species from UAV imagery. This study addresses these issues by proposing a novel image preprocessing pipeline and evaluating its impact on model performance. Our approach improves image quality through a multi-step pipeline that includes Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) for resolution enhancement, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for contrast improvement, and white balance adjustments for accurate color representation. These steps ensure high-quality input data, leading to better model performance. For feature extraction and classification, we employ a pre-trained VGG-16 deep convolutional neural network, followed by machine learning classifiers, including Support Vector Machine (SVM), random forest (RF), and Extreme Gradient Boosting (XGBoost). This hybrid approach, combining deep feature extraction with classical classifiers, not only enhances classification accuracy but also reduces computational resource requirements compared with relying solely on deep learning models. Notably, VGG-16 + SVM achieved an outstanding 97.88% accuracy on the dataset preprocessed with ESRGAN and white balance adjustments, with a precision of 97.9%, a recall of 97.8%, and an F1 score of 0.978. Through a comprehensive comparative study, we demonstrate that the proposed framework, utilizing VGG-16 feature extraction on preprocessed images, achieves superior performance in UAV plant image classification.

Language: English

Citations: 1
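
To illustrate the hybrid pipeline described in the abstract above, the sketch below chains CLAHE and a simple gray-world white balance (OpenCV) with frozen VGG-16 features (torchvision) feeding a linear SVM (scikit-learn). The CLAHE settings, placeholder tiles, and labels are assumptions, and the ESRGAN super-resolution step is omitted for brevity; this is not the authors' implementation.

```python
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

def preprocess(bgr: np.ndarray) -> np.ndarray:
    # CLAHE on the L channel of LAB space improves local contrast.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    bgr = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Gray-world white balance: scale each channel toward the global mean.
    means = bgr.reshape(-1, 3).mean(axis=0)
    return np.clip(bgr * (means.mean() / means), 0, 255).astype(np.uint8)

# Frozen VGG-16 as a feature extractor (4096-D penultimate activations).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
vgg.classifier = vgg.classifier[:-1]           # drop the final 1000-way layer
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def extract_features(bgr: np.ndarray) -> np.ndarray:
    rgb = cv2.cvtColor(preprocess(bgr), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        return vgg(to_tensor(rgb).unsqueeze(0)).squeeze(0).numpy()

# Hypothetical usage: features from labelled tiles train a linear SVM.
tiles = [np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8) for _ in range(8)]
labels = [0, 1, 0, 1, 0, 1, 0, 1]              # placeholder class labels
X = np.stack([extract_features(t) for t in tiles])
clf = SVC(kernel="linear").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```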