PMDNet: An Improved Object Detection Model for Wheat Field Weed

Zhengyuan Qi, Jun Wang

Agronomy, Journal Year: 2024, Volume and Issue: 15(1), P. 55 - 55

Published: Dec. 28, 2024

Efficient and accurate weed detection in wheat fields is critical for precision agriculture, optimizing crop yield while minimizing herbicide usage. A dataset was created encompassing 5967 images across eight well-balanced categories; it comprehensively covers the entire growth cycle of spring wheat as well as the associated weed species observed throughout this period. Based on this dataset, PMDNet, an improved object detection model built upon the YOLOv8 architecture, was introduced and optimized for wheat field weed detection tasks. PMDNet incorporates the Poly Kernel Inception Network (PKINet) as the backbone, a self-designed Multi-Scale Feature Pyramid Network (MSFPN) for multi-scale feature fusion, and the Dynamic Head (DyHead) as the detection head, resulting in significant performance improvements. Compared to the baseline YOLOv8n model, PMDNet increased mAP@0.5 from 83.6% to 85.8% (+2.2%) and mAP@0.5:0.95 from 65.7% to 69.6% (+5.9%). Furthermore, PMDNet outperformed several classical single-stage and two-stage models, achieving the highest precision (94.5%, 14.1% higher than Faster-RCNN) and mAP@0.5 (85.8%, 5.4% higher than RT-DETR-L). Under the stricter mAP@0.5:0.95 metric, PMDNet reached 69.6%, surpassing Faster-RCNN by 16.7% and RetinaNet by 13.1%. Real-world video tests further validated PMDNet's practicality, with inference at 87.7 FPS demonstrating high accuracy in detecting weeds against complex backgrounds and on small targets. These advancements highlight PMDNet's potential for practical applications in precision agriculture, providing a robust solution for weed management and contributing to the development of sustainable farming practices.
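
An aside on the two reported metrics: mAP@0.5 averages per-class average precision at a single IoU threshold of 0.5, whereas mAP@0.5:0.95 additionally averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two. A minimal sketch of the aggregation, using placeholder per-class AP values rather than the paper's data:

```python
import numpy as np

# Placeholder per-class AP table: rows = 8 weed classes, columns = IoU thresholds
# 0.50, 0.55, ..., 0.95 (10 thresholds). Values are illustrative only.
iou_thresholds = np.arange(0.50, 1.00, 0.05)
ap = np.random.rand(8, len(iou_thresholds))

map_50 = ap[:, 0].mean()   # mAP@0.5: mean over classes at IoU = 0.5
map_50_95 = ap.mean()      # mAP@0.5:0.95: mean over classes and all 10 thresholds

print(f"mAP@0.5      = {map_50:.3f}")
print(f"mAP@0.5:0.95 = {map_50_95:.3f}")
```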

Language: English

Neural networks as a support element of phytosanitary monitoring of fruit crops on the example of apple trees
Alexey Kutyrev, Igor Smirnov, M. S. Pryakhina et al.

Horticulture and viticulture, Journal Year: 2025, Volume and Issue: 6, P. 51 - 59

Published: Jan. 22, 2025

The paper presents the results of developing a convolutional neural network model for detecting and classifying diseases based on images of apple tree leaves and fruits. The study involves transfer learning of YOLOv10-X (You Only Look Once, version 10, Extra-large), pre-trained on the public COCO dataset (Common Objects in Context), which includes over 200,000 images and millions of annotated objects. The training dataset was compiled at the Research and Production Department of the Federal Horticultural Center for Breeding, Agrotechnology and Nursery (Russia). Artificial augmentation by rotating images, adding noise, and changing tints and shades increased the dataset to 2200 images. Precision and Recall metrics, as well as the mean Average Precision (mAP) metric, were used to evaluate the performance of the model. The results demonstrated that the model effectively recognizes leaf lesions caused by scab, powdery mildew, rust, and various types of spots, achieving an mAP of 0.6. The "spot" class appeared to be the most difficult to recognize (mAP50 = 0.411; Recall = 0.324), while the "rust" class revealed the least difficulty (mAP = 0.868; Recall = 0.803). The study contributed to optimizing training parameters, including the confidence threshold (0.48), learning rate (0.01), number of epochs (313), and batch size (8). Testing on a robotic platform equipped with RGB cameras indicated that automatic data collection at high frequency enables effective real-time monitoring of lesion dynamics.
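
As a rough illustration of the transfer-learning setup summarized above (YOLOv10-X weights pre-trained on COCO, 313 epochs, batch size 8, learning rate 0.01, confidence threshold 0.48), here is a minimal sketch assuming the Ultralytics Python API; the dataset file and image path are placeholders, and the authors' exact pipeline is not specified in the abstract:

```python
from ultralytics import YOLO

# Start from YOLOv10-X weights pre-trained on COCO (transfer learning).
model = YOLO("yolov10x.pt")

# Hyperparameters taken from the abstract; "apple_diseases.yaml" is a placeholder
# dataset description file pointing at the augmented 2200-image training set.
model.train(
    data="apple_diseases.yaml",
    epochs=313,
    batch=8,
    lr0=0.01,
)

# The reported confidence threshold (0.48) applies at inference time.
results = model.predict("leaf_sample.jpg", conf=0.48)
```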

Language: English

Citations: 0

CEFW-YOLO: A High-Precision Model for Plant Leaf Disease Detection in Natural Environments
J. Tao, Xiaoli Li, Yong He et al.

Agriculture, Journal Year: 2025, Volume and Issue: 15(8), P. 833 - 833

Published: April 12, 2025

The accurate and rapid detection of apple leaf diseases is a critical component of precision management in orchards. Existing deep-learning-based detection algorithms typically demand high computational resources, which limits their practical applicability in orchard environments. Furthermore, detection in natural settings faces significant challenges due to the diversity of disease types, the varied morphology of affected areas, and the influence of factors such as lighting variations, occlusions, and differences in disease severity. To address the above challenges, we constructed an apple leaf disease (ALD) dataset collected from real-world scenarios and applied data augmentation techniques, resulting in a total of 9808 images. Based on the ALD dataset, we proposed a lightweight YOLO11n-based network, named CEFW-YOLO, designed to tackle current issues in apple leaf disease identification. First, a novel channel-wise squeeze convolution (CWSConv) employs channel compression and standard convolution to reduce computational resource consumption, enhance the detection of small objects, and improve the model's adaptability to morphological diversity and complex backgrounds. Second, we developed an enhanced cross-channel attention (ECCAttention) module and integrated it into the C2PSA module to form C2PSA_ECCAttention. By extracting global information, combining horizontal and vertical convolutions, and strengthening cross-channel interactions, this module enables the model to more accurately capture disease features on apple leaves, thereby enhancing detection accuracy and robustness. Additionally, we introduced a new fine-grained multi-level linear attention (FMLAttention) module, which utilizes asymmetric convolutions and multi-level attention mechanisms to improve the model's ability to capture local details critical for detection. Finally, we incorporated the Wise-IoU (WIoU) loss function, which enhances the model's ability to differentiate overlapping targets across multiple scales. A comprehensive evaluation of CEFW-YOLO was conducted, comparing its performance against state-of-the-art (SOTA) models. CEFW-YOLO achieved a 20.6% reduction in computational complexity. Compared with the original YOLO11n, its precision improved by 3.7%, with mAP@0.5 and mAP@0.5:0.95 increasing by 7.6% and 5.2%, respectively. Notably, CEFW-YOLO outperformed advanced SOTA detection models, underscoring its application potential in real-world orchard scenarios.
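
The CWSConv implementation itself is not reproduced here; the following is only a generic PyTorch sketch of the idea it builds on, compressing channels with a 1x1 convolution before a standard 3x3 convolution to cut computational cost:

```python
import torch
import torch.nn as nn

class ChannelSqueezeConv(nn.Module):
    """Generic channel-compression + convolution block (illustrative, not the
    paper's CWSConv): a 1x1 conv reduces channels, then a 3x3 conv operates on
    the compressed representation, lowering FLOPs versus a full-width 3x3 conv."""
    def __init__(self, in_ch, out_ch, squeeze_ratio=4):
        super().__init__()
        mid_ch = max(in_ch // squeeze_ratio, 1)
        self.squeeze = nn.Conv2d(in_ch, mid_ch, kernel_size=1)
        self.conv = nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.conv(self.squeeze(x)))

x = torch.randn(1, 64, 80, 80)               # dummy feature map
print(ChannelSqueezeConv(64, 128)(x).shape)   # torch.Size([1, 128, 80, 80])
```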

Language: English

Citations: 0

LCDDN-YOLO: Lightweight Cotton Disease Detection in Natural Environment, Based on Improved YOLOv8
Haoran Feng, Xiqu Chen, Zhaoyan Duan et al.

Agriculture, Journal Year: 2025, Volume and Issue: 15(4), P. 421 - 421

Published: Feb. 17, 2025

To address the challenges of detecting cotton pests and diseases in natural environments, as well as the similarities in features exhibited by different diseases, a Lightweight Cotton Disease Detection in Natural Environment (LCDDN-YOLO) algorithm is proposed. LCDDN-YOLO is based on YOLOv8n and replaces part of the convolutional layers in the backbone network with Distributed Shift Convolution (DSConv). A BiFPN is incorporated into the original architecture, adding learnable weights to evaluate the significance of various input features, thereby enhancing detection accuracy. Furthermore, it integrates Partial Convolution (PConv) and DSConv into the C2f module, forming a module called PDS-C2f. Additionally, a CBAM attention mechanism is added to the neck to improve model performance. A Focal-EIoU loss function is also integrated to optimize the model's training process. Experimental results show that, compared to YOLOv8, LCDDN-YOLO reduces the number of parameters by 12.9% and floating-point operations (FLOPs) by 9.9%, while precision, mAP@50, and recall improve by 4.6%, 6.5%, and 7.8%, respectively, reaching 89.5%, 85.4%, and 80.2%. In summary, LCDDN-YOLO offers excellent detection accuracy and speed, making it effective for pest and disease control in cotton fields, particularly in lightweight computing scenarios.
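
The learnable fusion weights mentioned above follow the general BiFPN notion of fast normalized fusion, where each input feature map receives a non-negative learnable weight that is normalized before the weighted sum. A minimal PyTorch sketch of this generic mechanism (not the paper's exact module):

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion of n same-shaped feature maps:
    out = sum(w_i * x_i) / (sum(w_j) + eps), with w_i kept non-negative."""
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, features):
        w = torch.relu(self.weights)        # enforce w_i >= 0
        w = w / (w.sum() + self.eps)        # normalize the weights
        return sum(wi * f for wi, f in zip(w, features))

p3, p4_up = torch.randn(1, 128, 40, 40), torch.randn(1, 128, 40, 40)
fused = WeightedFusion(n_inputs=2)((p3, p4_up))
print(fused.shape)  # torch.Size([1, 128, 40, 40])
```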

Language: English

Citations: 0

RDRM-YOLO: A High-Accuracy and Lightweight Rice Disease Detection Model for Complex Field Environments Based on Improved YOLOv5
Pan Li, J Zhou, Huihui Sun et al.

Agriculture, Journal Year: 2025, Volume and Issue: 15(5), P. 479 - 479

Published: Feb. 23, 2025

Rice leaf diseases critically threaten global rice production by reducing crop yield and quality. Efficient disease detection in complex field environments remains a persistent challenge for sustainable agriculture. Existing deep learning-based methods struggle with inadequate sensitivity to subtle disease features, high computational complexity, and degraded accuracy under complex conditions, such as background interference and fine-grained disease variations. To address these limitations, this research aims to develop a lightweight yet high-accuracy detection model tailored to complex field environments that balances efficiency and robust performance. We propose RDRM-YOLO, an enhanced YOLOv5-based network, integrating four key improvements: (i) a cross-stage partial network fusion module (Hor-BNFA) is integrated within the backbone network's feature extraction stage to enhance the model's ability to capture disease-specific features; (ii) a spatial depth conversion convolution (SPDConv) is introduced to expand the receptive field, enhancing the extraction of fine-grained features, particularly from small disease spots; (iii) SPDConv is also incorporated into the neck network, where the standard convolution is replaced with GsConv to increase the accuracy of localization and category prediction and to improve inference speed; and (iv) the WIoU loss function is adopted in place of CIoU to accelerate convergence and improve detection accuracy. The model is trained and evaluated utilizing a comprehensive dataset of 5930 field-collected and augmented sample images comprising prevalent rice diseases: bacterial blight, blast, brown spot, and tungro. Experimental results demonstrate that our proposed RDRM-YOLO achieves state-of-the-art performance with a precision of 94.3% and a recall of 89.6%. Furthermore, it achieves a mean Average Precision (mAP) of 93.5%, while maintaining a compact model size of merely 7.9 MB. Compared with Faster R-CNN, YOLOv6, YOLOv7, and YOLOv8 models, it demonstrates faster convergence and optimal result values in terms of Precision, Recall, mAP, model size, and inference speed. This work provides a practical solution for real-time rice disease monitoring in agricultural fields, offering a very effective balance between model simplicity and performance. The enhancements are readily adaptable to other crop disease detection tasks, thereby contributing to the advancement of precision agriculture technologies.
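
SPDConv is built on the space-to-depth idea: spatial blocks are rearranged into channels instead of being discarded by strided convolution, which helps preserve fine detail from small disease spots. A generic PyTorch sketch of that idea (illustrative only, not the authors' code):

```python
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    """Generic SPD-Conv-style block: a space-to-depth rearrangement (scale 2)
    halves spatial resolution without dropping pixels, then a conv mixes the
    expanded channels. Illustrative of the idea, not RDRM-YOLO's module."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch * 4, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # C -> 4C, (H, W) -> (H/2, W/2), no information discarded
        x = nn.functional.pixel_unshuffle(x, downscale_factor=2)
        return self.conv(x)

x = torch.randn(1, 64, 80, 80)
print(SpaceToDepthConv(64, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```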

Language: English

Citations: 0

YOLOv11-RCDWD: A New Efficient Model for Detecting Maize Leaf Diseases Based on the Improved YOLOv11
Jie He, Yi Ren, Weibin Li et al.

Applied Sciences, Journal Year: 2025, Volume and Issue: 15(8), P. 4535 - 4535

Published: April 20, 2025

Detecting pests and diseases on maize leaves is challenging, especially under complex conditions such as variable lighting and occlusion. Current methods suffer from low detection accuracy and lack sufficient real-time performance. Hence, this study introduces YOLOv11-RCDWD, a lightweight detection method based on an improved YOLOv11 model. The proposed approach enhances the model by incorporating a RepLKNet module into the backbone, which significantly improves the model's capacity to capture the characteristics of maize leaf diseases. Additionally, a CBAM is embedded within the neck feature extraction network to further refine feature representation and augment the ability to identify and select essential features by introducing attention mechanisms in both channel and spatial dimensions, thereby improving detection accuracy and feature expression. We also incorporate a DynamicHead module, the WIoU loss function, and a DynamicATSS label assignment strategy, which collectively enhance detection accuracy, efficiency, and robustness through optimized attention mechanisms, better handling of low-quality samples, and dynamic sample selection during training. The experimental findings indicate that YOLOv11-RCDWD effectively detected pests and diseases on maize leaves. Precision reached 92.6%, while recall was 85.4%. The F1 score was 88.9%, and mAP@0.5 and mAP@0.5:0.95 demonstrated improvements of 4.9% and 9.0%, respectively, over the baseline YOLOv11s. Notably, YOLOv11-RCDWD outperformed other architectures such as Faster R-CNN, SSD, and various models of the YOLO series, demonstrating superior capabilities in terms of detection speed, parameter count, and computational memory utilization, and achieving an optimal balance between performance and resource efficiency. Overall, YOLOv11-RCDWD reduces inference time and memory usage while maintaining high accuracy, supporting the automated detection of maize leaf diseases and offering a robust solution for the intelligent monitoring of agricultural pests.
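
The reported F1 score is consistent with the stated precision and recall, since F1 is their harmonic mean; a quick check:

```python
precision, recall = 0.926, 0.854
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")  # ~0.889, matching the reported 88.9%
```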

Language: English

Citations: 0
