An Efficient Group Convolution and Feature Fusion Method for Weed Detection

Chaowen Chen,

Ying Zang,

Jinkang Jiao

et al.

Agriculture, Year: 2024, No. 15(1), pp. 37–37

Published: Dec. 27, 2024

Weed detection is a crucial step in achieving intelligent weeding for vegetables. Currently, research on vegetable weed detection technology is relatively limited, and existing methods still face challenges due to complex natural conditions, resulting in low accuracy and efficiency. This paper proposes the YOLOv8-EGC-Fusion (YEF) model, an enhancement based on YOLOv8, to address these challenges. The model introduces plug-and-play modules: (1) The Efficient Group Convolution (EGC) module leverages convolution kernels of various sizes combined with group convolution techniques to significantly reduce computational cost. Integrating this EGC module with the C2f module creates the C2f-EGC module, strengthening the model's capacity to grasp local contextual information. (2) The Context Anchor Attention (GCAA) module strengthens the capture of long-range contextual information, contributing to improved feature comprehension. (3) The GCAA-Fusion module effectively merges multi-scale features, addressing shallow feature loss while preserving critical information. Leveraging GCAA-Fusion and PAFPN, we developed an Adaptive Feature Fusion (AFF) pyramid structure that amplifies the model's feature extraction capabilities. To ensure effective evaluation, we collected a diverse dataset of weed images from vegetable fields. A series of comparative experiments was conducted to verify the effectiveness of the YEF model. The results show that it outperforms the original model, as well as Faster R-CNN, RetinaNet, TOOD, RTMDet, and YOLOv5, in detection performance. The metrics achieved by the YEF model are as follows: precision of 0.904, recall of 0.88, F1 score of 0.891, and mAP0.5 of 0.929. In conclusion, the YEF model demonstrates high accuracy in weed identification, meeting the requirements for precise vegetable weed detection.
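
The abstract only sketches the EGC idea in words. As a rough, hypothetical illustration of combining several kernel sizes with grouped convolution to cut computation, the PyTorch snippet below splits the channels into groups, applies a different depthwise kernel to each, and fuses the result with a 1x1 convolution; the class name, kernel sizes, and channel split are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EfficientGroupConv(nn.Module):
    """Sketch of a mixed-kernel group convolution: channels are split into
    groups, each processed with a different kernel size, then concatenated."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0, "channels must split evenly"
        group_ch = channels // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(group_ch, group_ch, k, padding=k // 2, groups=group_ch)
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # pointwise channel mixing

    def forward(self, x):
        chunks = torch.chunk(x, len(self.branches), dim=1)
        out = torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)
        return self.fuse(out)

x = torch.randn(1, 64, 80, 80)
print(EfficientGroupConv(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```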

Language: English

Multi-Crop Navigation Line Extraction Based on Improved YOLO-v8 and Threshold-DBSCAN under Complex Agricultural Environments

Jiayou Shi,

Yuhao Bai, Jun Zhou

et al.

Agriculture, Year: 2023, No. 14(1), pp. 45–45

Published: Dec. 26, 2023

Field crops are usually planted in rows, and accurate identification and extraction of the crop row centerline is the key to realizing autonomous navigation and safe operation of agricultural machinery. However, the diversity of crop species and morphology, as well as field noise such as weeds and light, often leads to poor crop row detection in complex farming environments. In addition, the curvature of crop rows also poses a challenge to the safety of farm machinery during travel. In this study, a combined multi-crop row detection algorithm is proposed based on an improved YOLOv8 (You Only Look Once-v8) model, threshold DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering, the least squares method, and B-spline curves. For the detection of multiple crops, a DCGA-YOLOv8 model was developed by introducing deformable convolution and a global attention mechanism (GAM) into the original model. The introduction of deformable convolution can obtain more fine-grained spatial information and adapt to crops of different sizes and shapes, while the combination with GAM can pay more attention to the important feature areas of crops. The experimental results show that the F1-score and mAP value for Cabbage, Kohlrabi, and Rice are 96.4%, 97.1%, 95.9% and 98.9%, 99.2%, 99.1%, respectively, indicating that the model has good generalization and robustness. A threshold-DBSCAN algorithm was proposed to implement clustering of each crop row; the correct clustering rate for Cabbage and Kohlrabi reaches 97.9% and 100%, respectively. The LSM and cubic B-spline curve methods were then applied to fit straight and curved crop rows, respectively. This study also constructed a risk optimization function for the wheel to further improve the safety of agricultural machines operating between rows. This indicates that the proposed method can effectively realize the recognition and extraction of navigation lines in a complex farmland environment, improving the stability of visual navigation for agricultural machines.
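
As a minimal sketch of the clustering-plus-fitting stage described above (not the authors' code), the snippet below groups hypothetical detected plant centers into rows with DBSCAN applied to their horizontal coordinate and then fits a least-squares line per row; the eps threshold, min_samples, and coordinate layout are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical detected crop-plant centers (x, y) in image coordinates.
centers = np.array([[100, 50], [105, 150], [98, 250],
                    [300, 60], [305, 160], [298, 260]], dtype=float)

# Cluster detections into crop rows; eps is an assumed threshold on the
# horizontal spacing, not a value from the paper.
labels = DBSCAN(eps=30, min_samples=2).fit_predict(centers[:, :1])

# Fit a straight navigation line per row with least squares (x = a*y + b),
# which stays well-defined for near-vertical rows.
for row_id in set(labels) - {-1}:
    pts = centers[labels == row_id]
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    print(f"row {row_id}: x = {a:.3f}*y + {b:.3f}")
```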

Language: English

Cited by

13

Improved Tomato Leaf Disease Recognition Based on the YOLOv5m with Various Soft Attention Module Combinations
Yong-Suk Lee, Maheshkumar Prakash Patil,

Jeong Gyu Kim

et al.

Agriculture, Year: 2024, No. 14(9), pp. 1472–1472

Published: Aug. 29, 2024

To reduce production costs, environmental effects, and crop losses, tomato leaf disease recognition must be accurate and fast. Early diagnosis and treatment are necessary to cure and control illnesses and ensure output quality. The YOLOv5m model was improved by using C3NN modules and a Bidirectional Feature Pyramid Network (BiFPN) architecture. The C3NN modules were designed by integrating several soft attention modules into the C3 module: the Convolutional Block Attention Module (CBAM), Squeeze and Excitation (SE), Efficient Channel Attention (ECA), and Coordinate Attention (CA). The C3 modules in the Backbone and Head of the YOLOv5 model were replaced with C3NN modules to improve feature representation and object detection accuracy. The BiFPN architecture was implemented in the Neck to effectively merge multi-scale features and improve detection accuracy. Among the various combinations tested for the model, the C3ECA-BiFPN-C3ECA-YOLOv5m achieved a precision (P) of 87.764%, a recall (R) of 87.201%, an F1 score of 87.482, an mAP@.5 of 90.401%, and an mAP@.5:.95 of 68.803%. In comparison with the Faster-RCNN and baseline YOLOv5m models, the improved model showed improvements in P of 1.36% and 7.80%, in R of 4.99% and 5.51%, in F1 score of 3.18% and 6.86%, in mAP@.5 of 1.74% and 2.90%, and in mAP@.5:.95 of 3.26% and 4.84%, respectively. These results demonstrate that the improved models have effective disease recognition capabilities and are expected to contribute significantly to the development of plant disease detection technology.
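
Of the four soft attention modules listed, ECA is the simplest to illustrate. The sketch below is a generic Efficient Channel Attention block (global average pooling, a 1D convolution across channels, and a sigmoid gate); in the paper's design such a block would sit inside the C3 module, and the kernel size here is an assumed default.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling followed by a 1D
    convolution over the channel dimension and a sigmoid gate."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        # (B, C, H, W) -> (B, C, 1, 1) -> (B, 1, C) for the 1D conv over channels
        w = self.pool(x).squeeze(-1).transpose(-1, -2)
        w = self.gate(self.conv(w)).transpose(-1, -2).unsqueeze(-1)
        return x * w  # channel-wise reweighting

x = torch.randn(2, 128, 40, 40)
print(ECA()(x).shape)  # torch.Size([2, 128, 40, 40])
```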

Language: English

Cited by

5

Lightweight Pig Face Feature Learning Evaluation and Application Based on Attention Mechanism and Two-Stage Transfer Learning
Zhe Yin,

Mingkang Peng,

Zhaodong Guo

et al.

Agriculture, Year: 2024, No. 14(1), pp. 156–156

Published: Jan. 21, 2024

With the advancement of machine vision technology, pig face recognition has garnered significant attention as a key component in the establishment of precision breeding models. In order to explore non-contact individual pig recognition, this study proposes a lightweight pig face feature learning method based on an attention mechanism and two-stage transfer learning. Using a combined approach of online and offline data augmentation, both a self-collected dataset from Shanxi Agricultural University's grazing station and public datasets underwent enhancements in terms of quantity and quality. YOLOv8 was employed for the feature extraction and fusion of pig face images. The Coordinate Attention (CA) module was integrated into the model to enhance the learning of critical features. Fine-tuning of the network was conducted to establish the recognition model. The model achieved a mean average precision (mAP) of 97.73% through two-stage transfer learning, surpassing models such as EfficientDet, SSD, YOLOv5, YOLOv7-tiny, and swin_transformer by 0.32, 1.23, 1.56, 0.43, and 0.14 percentage points, respectively. The YOLOv8-CA model's mAP reached 98.03%, a 0.3 percentage point improvement over the model before its addition. Furthermore, the two-stage transfer learning-based model reached 95.73%, exceeding the models trained without backbone pre-trained weights and without the two-stage method by 10.92 and 3.13 percentage points, respectively. The proposed method effectively captures the unique features of pig faces. This study serves as a valuable reference for achieving non-contact individual recognition in precision breeding.
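
The two-stage transfer learning described above can be illustrated generically: first train only a new head on top of frozen pre-trained weights, then unfreeze and fine-tune the whole network on the smaller target dataset at a lower learning rate. The sketch below uses a torchvision ResNet-18 as a stand-in for the detector and hypothetical data loaders; it is not the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic two-stage transfer learning sketch (resnet18 stands in for the
# detector backbone; the class count and data loaders are hypothetical).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)  # hypothetical 10 pig identities

def run_stage(model, params, lr, epochs, loader):
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

# Stage 1: freeze the pre-trained backbone, train only the new head on a
# larger public dataset.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
# run_stage(model, model.fc.parameters(), lr=1e-2, epochs=10, loader=public_loader)

# Stage 2: unfreeze everything and fine-tune at a lower learning rate on the
# smaller self-collected dataset.
for p in model.parameters():
    p.requires_grad = True
# run_stage(model, model.parameters(), lr=1e-4, epochs=20, loader=target_loader)
```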

Language: English

Cited by

4

Accurate cotton verticillium wilt segmentation in field background based on the two-stage lightweight DeepLabV3+ model
Ying Xu, Benxue Ma, G. Yu

et al.

Computers and Electronics in Agriculture, Year: 2024, No. 229, pp. 109814–109814

Published: Dec. 18, 2024

Language: English

Cited by

2

YOLO-RCS: A method for detecting phenological period of 'Yuluxiang' pear in unstructured environment
Rui Ren, Shujuan Zhang, Haixia Sun

et al.

Computers and Electronics in Agriculture, Year: 2024, No. 229, pp. 109819–109819

Published: Dec. 20, 2024

Language: English

Cited by

2

Object Detection Based on an Improved YOLOv7 Model for Unmanned Aerial-Vehicle Patrol Tasks in Controlled Areas
Dewei Zhao, Faming Shao, Yang Li

et al.

Electronics, Year: 2023, No. 12(23), pp. 4887–4887

Published: Dec. 4, 2023

When working with objects on a smaller scale, higher detection accuracy and faster detection speed are desirable features. Researchers aim to endow drones with these attributes in order to improve performance when patrolling controlled areas for object detection. In this paper, we propose an improved YOLOv7 model. By incorporating the variability attention module into the backbone network of the original model, the association between distant pixels is increased, resulting in more effective feature extraction and, thus, improved model accuracy. By improving the model with deformable convolution modules and depthwise separable convolution modules, we enhance the semantic information of small objects and reduce the number of parameters to a certain extent. Pretraining and fine-tuning techniques were used for training, and the model was retrained on the VisDrone2019 dataset. Using this dataset, the improved model achieves an mAP50 of 52.3% on the validation set. Through visual comparative analysis of the detection results on our dataset, we find that the improved model shows a significant improvement in detecting small objects compared with previous iterations.
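
Depthwise separable convolution, mentioned above as a way to cut parameters, factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution. A generic PyTorch sketch (the activation and normalization choices are assumptions) shows the parameter savings:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) spatial
    convolution followed by a 1x1 (pointwise) convolution."""
    def __init__(self, in_ch, out_ch, k=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, stride, k // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

std = nn.Conv2d(128, 256, 3, padding=1, bias=False)
sep = DepthwiseSeparableConv(128, 256)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # 294912 vs. 34432 parameters
```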

Language: English

Cited by

5

Improved Architecture and Training Strategies of YOLOv7 for Remote Sensing Image Object Detection
Dewei Zhao, Faming Shao, Qiang Liu

et al.

Remote Sensing, Year: 2024, No. 16(17), pp. 3321–3321

Published: Sep. 7, 2024

The technology for object detection in remote sensing images finds extensive applications in production and people's lives, and improving the accuracy of remote sensing image detection is a pressing need. With that goal, this paper proposes a range of improvements, rooted in the widely used YOLOv7 algorithm, after analyzing the requirements and difficulties of remote sensing images. Specifically, we strategically remove some standard convolution and pooling modules from the bottom of the network, adopting stride-free convolution to minimize the loss of information for small objects during transmission. Simultaneously, we introduce a new, more efficient attention mechanism module for feature extraction, significantly enhancing the network's semantic extraction capabilities. Furthermore, by adding multiple cross-layer connections, we effectively utilize the features of each layer of the backbone, thereby improving the overall detection capability. During the training phase, we use an auxiliary network to intensify training of the underlying layers and adopt a new activation function to ensure effective gradient feedback, elevating detection performance. In the experimental results, our improved model achieves impressive mAP scores of 91.2% and 80.8% on the DIOR and DOTA version 1.0 datasets, respectively. These represent notable improvements of 4.5% and 7.0% over the original model, demonstrating improved efficiency in detecting small objects in particular.
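
The "stride-free" downsampling idea, replacing strided convolution and pooling so small-object information is not discarded, can be sketched as a space-to-depth rearrangement followed by a non-strided convolution; this is a generic illustration under that assumption, not necessarily the exact module used in the paper.

```python
import torch
import torch.nn as nn

class StrideFreeDownsample(nn.Module):
    """Sketch of stride-free downsampling: rearrange each 2x2 spatial block
    into channels (space-to-depth), then mix with a non-strided convolution,
    so no pixels are discarded as they would be by a strided conv or pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch * 4, out_ch, 3, stride=1, padding=1)

    def forward(self, x):
        x = nn.functional.pixel_unshuffle(x, downscale_factor=2)  # (B, 4C, H/2, W/2)
        return self.conv(x)

x = torch.randn(1, 64, 160, 160)
print(StrideFreeDownsample(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```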

Language: English

Cited by

1

Embedded YOLO v8: Real-time detection of sugarcane nodes in complex natural environments by rapid structural pruning method
Shanshan Hu,

Guoxin Tang,

Kang Yu

et al.

Measurement, Year: 2024, No. unknown, pp. 116291–116291

Published: Nov. 1, 2024

Language: English

Cited by

1

Monitoring Dairy Cow Rumination Behavior Based on Upper and Lower Jaw Tracking

Ning Wang,

Xincheng Li,

Shuqi Shang

et al.

Agriculture, Year: 2024, No. 14(11), pp. 2006–2006

Published: Nov. 8, 2024

To address behavioral interferences such as head turning and lowering during rumination in group-housed dairy cows, an enhanced network algorithm combining the YOLOv5s and DeepSort algorithms was developed. Initially, improvements were made to YOLOv5s by incorporating the C3_CA module into the backbone to enhance feature interaction and representation at different levels. The Slim_Neck paradigm was employed to strengthen feature extraction and fusion, and the CIoU loss function was replaced with WIoU to improve the model's robustness and generalization, establishing it as a detector of the upper and lower jaws of dairy cows. Subsequently, the DeepSort tracking algorithm was utilized to track the jaws and plot their movement trajectories. By calculating the difference between the centroid coordinates of the bounding boxes for the upper and lower jaws during rumination, a rumination curve was obtained. Finally, the number of chews and the false detection rate were calculated. The system successfully monitored the frequency of the cows' chewing actions during rumination. The experimental results indicate that the model achieved a mean average precision (mAP@0.5) of 97.5% and 97.9% for the upper and lower jaws, respectively, with precision (P) of 95.4% and 97.4% and recall (R) of 97.6% and 98.4%, respectively. Two methods for determining the number of chews were proposed, which showed false detection rates of 8.34% and 3.08% after validation. The research findings validate the feasibility of the jaw tracking method, providing a reference for real-time monitoring of rumination behavior in dairy cows in group housing environments.
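
The chew-counting step, deriving a rumination curve from the gap between the upper- and lower-jaw box centroids and then counting its oscillations, can be sketched with simple peak detection. The signal below is synthetic and the peak-detection thresholds are assumptions, not the two methods evaluated in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical per-frame vertical distance between the tracked upper-jaw and
# lower-jaw box centroids (pixels); a synthetic chewing-like signal here.
t = np.arange(0, 10, 1 / 30)                      # 10 s at 30 fps
jaw_gap = 20 + 8 * np.sin(2 * np.pi * 1.2 * t)    # ~1.2 chews per second
jaw_gap += np.random.normal(0, 0.5, t.size)       # tracking noise

# Count chews as peaks of the gap curve; distance/prominence thresholds are
# assumed values, not taken from the paper.
peaks, _ = find_peaks(jaw_gap, distance=15, prominence=3)
print(f"estimated chews: {len(peaks)} in {t[-1]:.1f} s")
```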

Language: English

Cited by

0

Enhanced Aquaculture Monitoring: Real-time Fish Detection Underwater with YOLOv7
John Paul Q. Tomas,

Jimuel A. Cacayan,

Jeremy G. Francisco

et al.

Published: Aug. 2, 2024

Language: English

Cited by

0