Classification, Localization and Quantification of Eddy Current Detection Defects in CFRP Based on EDC-YOLO

Robert K. Wen, Chongcong Tao, Hongli Ji et al.

Sensors, Journal Year: 2024, Volume and Issue: 24(20), P. 6753 - 6753

Published: Oct. 21, 2024

The accurate detection and quantification of defects is vital for the effectiveness of eddy current nondestructive testing (ECNDT) of carbon fiber-reinforced plastic (CFRP) materials. This study investigates the identification and measurement of three common CFRP defects, namely cracks, delamination, and low-velocity impact damage, by employing the You Only Look Once (YOLO) model and an improved Eddy Current YOLO (EDC-YOLO) model. YOLO's limitations in detecting multi-scale features are addressed through the integration of Transformer-based self-attention mechanisms and deformable convolutional sub-modules, with additional global feature extraction via CBAM. By leveraging the Wise-IoU loss function, performance is further enhanced, leading to a 4.4% increase in mAP50 for defect detection. EDC-YOLO proves to be effective in industrial inspections, providing detailed defect insights, such as the correlation between damage size and energy levels.
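
The CBAM and Wise-IoU components named in this abstract are published, general-purpose building blocks rather than details unique to the paper. For orientation, below is a minimal PyTorch sketch of a CBAM-style block (channel attention followed by spatial attention); the class name, reduction ratio, and tensor sizes are illustrative assumptions, not the authors' EDC-YOLO code.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM-style block: channel attention, then spatial attention.
    Illustrative sketch only; not the EDC-YOLO implementation."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Spatial attention over the [avg, max] channel-pooled maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: pool over H and W, gate the channels.
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: pool over channels, gate the locations.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 40, 40)   # dummy feature map
print(CBAM(64)(feat).shape)         # torch.Size([1, 64, 40, 40])
```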

Language: English

YOLOv8-PoseBoost: Advancements in Multimodal Robot Pose Keypoint Detection
Feng Wang, Gang Wang, Baoli Lu et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(6), P. 1046 - 1046

Published: March 11, 2024

In the field of multimodal robotics, achieving comprehensive and accurate perception of the surrounding environment is a highly sought-after objective. However, current methods still have limitations in motion keypoint detection, especially in scenarios involving small target detection and complex scenes. To address these challenges, we propose an innovative approach known as YOLOv8-PoseBoost. This method introduces the Channel Attention Module (CBAM) to enhance the network's focus on small targets, thereby increasing sensitivity to small individuals. Additionally, we employ multiple scale detection heads, enabling the algorithm to comprehensively detect individuals of varying sizes in images. The incorporation of cross-level connectivity channels further enhances the fusion of features between shallow and deep networks, reducing the rate of missed detections for small individuals. We also introduce the Scale Invariant Intersection over Union (SIoU) redefined bounding box regression localization loss function, which accelerates model training convergence and improves detection accuracy. Through a series of experiments, we validate YOLOv8-PoseBoost's outstanding performance on small targets, providing an effective solution for enhancing the execution capabilities of robots. It has the potential to drive the development of robots across various application domains, holding both theoretical and practical significance.
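
The "cross-level connectivity channels" described here amount to fusing an upsampled deep, semantically strong feature map with a shallow, high-resolution one before prediction. A generic PyTorch sketch of such a fusion step follows; the layer names and channel sizes are assumptions for illustration, not the YOLOv8-PoseBoost code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLevelFusion(nn.Module):
    """Fuse a deep, low-resolution feature map with a shallow, high-resolution one.
    Generic shallow/deep fusion sketch; not the YOLOv8-PoseBoost implementation."""
    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, shallow_ch, kernel_size=1)  # align channels
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * shallow_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the deep map to the shallow map's spatial size, concatenate, then mix.
        deep = F.interpolate(self.reduce(deep), size=shallow.shape[-2:], mode="nearest")
        return self.fuse(torch.cat([shallow, deep], dim=1))

p3 = torch.randn(1, 128, 80, 80)   # shallow, high-resolution level
p5 = torch.randn(1, 512, 20, 20)   # deep, low-resolution level
print(CrossLevelFusion(128, 512, 128)(p3, p5).shape)  # torch.Size([1, 128, 80, 80])
```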

Language: English

Citations

6

The application prospects of robot pose estimation technology: exploring new directions based on YOLOv8-ApexNet
Xianfeng Tang, Shuwei Zhao

Frontiers in Neurorobotics, Journal Year: 2024, Volume and Issue: 18

Published: April 5, 2024

Introduction: Service robot technology is increasingly gaining prominence in the field of artificial intelligence. However, persistent limitations continue to impede its widespread implementation. In this regard, human motion pose estimation emerges as a crucial challenge for enhancing the perceptual and decision-making capacities of service robots. Method: This paper introduces a groundbreaking model, YOLOv8-ApexNet, which integrates advanced technologies, including Bidirectional Routing Attention (BRA) and a Generalized Feature Pyramid Network (GFPN). BRA facilitates the capture of inter-keypoint correlations within dynamic environments by introducing a bidirectional information propagation mechanism. Furthermore, GFPN adeptly extracts features across different scales, enabling the model to make more precise predictions for targets of various sizes and shapes. Results: Empirical research findings reveal significant performance enhancements of YOLOv8-ApexNet on the COCO and MPII datasets. Compared with existing methodologies, the model demonstrates pronounced advantages in keypoint localization accuracy and robustness. Discussion: The significance of this work lies in providing an efficient and accurate solution tailored to the realm of service robotics, effectively mitigating the deficiencies inherent in current approaches. By bolstering perception and decision-making, our endeavors unequivocally endorse the integration of service robots into practical applications.
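
GFPN-style necks differ from a plain FPN mainly in that information flows both top-down and bottom-up across scales with extra skip links. The toy sketch below shows only that basic bidirectional pass over three pyramid levels; it is a generic illustration under that assumption, not the GFPN or BRA modules used in YOLOv8-ApexNet, and all layer names are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiDirectionalNeck(nn.Module):
    """Toy bidirectional multi-scale fusion over three levels (P3, P4, P5).
    Generic illustration of two-way cross-scale flow; not YOLOv8-ApexNet."""
    def __init__(self, ch: int = 128):
        super().__init__()
        self.td4 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # top-down fuse at P4
        self.td3 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # top-down fuse at P3
        self.bu4 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # bottom-up fuse at P4
        self.bu5 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # bottom-up fuse at P5

    def forward(self, p3, p4, p5):
        # Top-down pass: propagate coarse semantics to finer levels.
        t4 = self.td4(torch.cat([p4, F.interpolate(p5, size=p4.shape[-2:])], dim=1))
        t3 = self.td3(torch.cat([p3, F.interpolate(t4, size=p3.shape[-2:])], dim=1))
        # Bottom-up pass: propagate fine detail back to coarser levels.
        b4 = self.bu4(torch.cat([t4, F.max_pool2d(t3, kernel_size=2)], dim=1))
        b5 = self.bu5(torch.cat([p5, F.max_pool2d(b4, kernel_size=2)], dim=1))
        return t3, b4, b5

p3, p4, p5 = (torch.randn(1, 128, s, s) for s in (80, 40, 20))
print([t.shape[-1] for t in BiDirectionalNeck()(p3, p4, p5)])  # [80, 40, 20]
```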

Language: English

Citations

5

YOLO-Fusion and Internet of Things: Advancing object detection in smart transportation
Jun Tang, Caixian Ye, Xianlai Zhou et al.

Alexandria Engineering Journal, Journal Year: 2024, Volume and Issue: 107, P. 1 - 12

Published: Sept. 17, 2024

Language: English

Citations

5

OD-YOLO: Robust Small Object Detection Model in Remote Sensing Image with a Novel Multi-Scale Feature Fusion

Yangcheng Bu, Hairong Ye, Zhixin Tie et al.

Sensors, Journal Year: 2024, Volume and Issue: 24(11), P. 3596 - 3596

Published: June 3, 2024

As remote sensing technology has advanced, the use of satellites and similar technologies has become increasingly prevalent in daily life, and it now plays a crucial role in hydrology, agriculture, and geography. Nevertheless, because of the distinct qualities of remote sensing imagery, including expansive scenes and small, densely packed targets, there are many challenges in detecting objects, which lead to insufficient detection accuracy. Consequently, developing a new model is essential to enhance identification capabilities for small objects in remote sensing imagery. To address these constraints, we have designed the OD-YOLO approach, which uses multi-scale feature fusion to improve the small-target detection performance of YOLOv8n. Firstly, traditional convolutions have poor recognition of certain geometric shapes. Therefore, this paper introduces a Detection Refinement Module (DRmodule) into the backbone architecture. This module utilizes Deformable Convolutional Networks and a Hybrid Attention Transformer to strengthen the model's capability for feature extraction from geometric shapes and blurred targets. Meanwhile, building on the Feature Pyramid Network of YOLO, this paper enhances detection at the head of the framework by introducing a Dynamic Head that operates on features at different scales of the pyramid. Additionally, to address the issue of small objects in remote sensing images, this paper specifically designs an OIoU loss function to more finely describe the difference between the detection box and the true box, further enhancing performance. Experiments on the VisDrone dataset show that OD-YOLO surpasses the compared models by at least 5.2% in mAP50 and 4.4% in mAP75, and experiments on Foggy Cityscapes demonstrate an mAP improvement of 6.5%, showing outstanding results in tasks related to remote sensing images and adverse weather. This work not only advances research in remote sensing image analysis but also provides effective technical support for the practical deployment of future applications.
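
Deformable convolution, one of the two ingredients of the DRmodule above, is available off the shelf in torchvision. The sketch below shows the standard usage pattern, where a small regular convolution predicts the per-location sampling offsets that the deformable operator consumes; it illustrates the operator only, not the DRmodule from OD-YOLO, and the block name and sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Standard deformable-convolution usage: a regular conv predicts the
    sampling offsets that DeformConv2d consumes. Illustrative only."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Two offsets (dy, dx) per kernel position.
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deform(x, self.offset(x))

x = torch.randn(1, 64, 40, 40)
print(DeformableBlock(64, 64)(x).shape)   # torch.Size([1, 64, 40, 40])
```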

Language: English

Citations

4

YOLOv7-SFWC: A detection algorithm for illegal manned trucks
Xuan Hui Wu, Yanan Wang, Tengtao Nie et al.

IET Image Processing, Journal Year: 2025, Volume and Issue: 19(1)

Published: Jan. 1, 2025

Automatic analysis and evidence collection of obvious traffic violations, such as illegal manned trucks, is one of the critical operational challenges in the police department's business. Given the enormous volume of road surveillance images generated daily, traditional manual screening is highly time-intensive and resource-draining. Therefore, this article proposes an improved detection model, YOLOv7-SFWC, for illegally manned trucks. First of all, pictures of vehicles obtained by the relevant departments are expanded and labeled, and a dataset is created. Building upon the foundational YOLOv7 model, the study replaces the convolution module with the FasterNet and SCConv modules, and introduces the Wise-IoU (WIoU) loss function and the Coordinate Attention (CA) mechanism. The results show that the mAP value improves by 4.15% and the FPS by 7.6 compared with the original model, while the computational complexity is reduced to suit deployment. Moreover, the model's effectiveness is validated through extensive comparison experiments. Finally, visual and quantitative performance analyses verify the progress of YOLOv7-SFWC. This advancement has the potential to transform traffic violation enforcement by reducing reliance on manual screening, effectively combating violations and purifying traffic order.
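
Coordinate Attention, cited above, factorizes global pooling into two one-dimensional poolings (along height and along width) so the resulting attention maps keep positional information. A compact PyTorch sketch of such a block follows; the reduction ratio and activation choice are assumptions, and this is not the YOLOv7-SFWC code.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention sketch: pool along H and W separately, encode jointly,
    then gate the input with direction-aware attention maps. Illustrative only."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.encode = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.Hardswish(inplace=True),
        )
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        xh = x.mean(dim=3, keepdim=True)                        # (N, C, H, 1): pool along width
        xw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (N, C, W, 1): pool along height
        y = self.encode(torch.cat([xh, xw], dim=2))             # joint 1x1 encoding
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                      # (N, C, H, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * ah * aw

x = torch.randn(1, 64, 32, 48)
print(CoordinateAttention(64)(x).shape)   # torch.Size([1, 64, 32, 48])
```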

Language: English

Citations

0

Waste drilling fluid flocculation identification method based on improved YOLOv8n
Min Wan, Xin Yang, Huaibang Zhang et al.

Review of Scientific Instruments, Journal Year: 2025, Volume and Issue: 96(1)

Published: Jan. 1, 2025

Efficient identification of the flocculation state of waste drilling fluid remains a significant challenge. This study proposes an improved You Only Look Once version 8 nano-algorithm (YOLOv8n), specifically optimized for real-time monitoring under field conditions. The algorithm employs MobileNetV3 as the backbone network to minimize memory usage, improve detection speed, and reduce computational requirements. The integration of an efficient multi-scale attention mechanism into the cross-stage partial fusion module effectively mitigates detail loss, resulting in better performance on images with high similarity. The wise intersection over union loss function is employed to accelerate bounding box convergence and improve inference accuracy. Experimental results show that the enhanced YOLOv8n achieves an average recognition accuracy of 98.6% on the experimental dataset, a 4.8% improvement over the original model. In addition, the model size and computational load are reduced to 2.9 MB and 2.8 Giga Floating-Point Operations Per Second (GFLOPS), respectively, compared with the original model, reflecting reductions of 3.2 MB and 5.3 GFLOPS. As a result, the proposed model is highly deployable and reliably predicts flocculation changes across varying working conditions.
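
The "wise intersection over union" loss mentioned here belongs to the Wise-IoU family of bounding-box regression losses. Below is a minimal sketch of the v1 form, where a distance-based focusing factor (with the enclosing-box term detached from the gradient) multiplies the plain IoU loss; it assumes xyxy boxes and the v1 formulation, and the paper may use a different variant.

```python
import torch

def wiou_v1_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Wise-IoU v1 sketch for (N, 4) boxes in (x1, y1, x2, y2) format. Illustrative only."""
    # Intersection and union for the plain IoU term.
    lt = torch.maximum(pred[:, :2], target[:, :2])
    rb = torch.minimum(pred[:, 2:], target[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area_p = (pred[:, 2:] - pred[:, :2]).clamp(min=0).prod(dim=1)
    area_t = (target[:, 2:] - target[:, :2]).clamp(min=0).prod(dim=1)
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box (detached, as in WIoU v1) and squared center distance.
    enc_lt = torch.minimum(pred[:, :2], target[:, :2])
    enc_rb = torch.maximum(pred[:, 2:], target[:, 2:])
    enc_wh = (enc_rb - enc_lt).clamp(min=0)
    c_pred = (pred[:, :2] + pred[:, 2:]) / 2
    c_tgt = (target[:, :2] + target[:, 2:]) / 2
    dist2 = ((c_pred - c_tgt) ** 2).sum(dim=1)
    r_wiou = torch.exp(dist2 / (enc_wh.pow(2).sum(dim=1).detach() + eps))

    return (r_wiou * (1.0 - iou)).mean()

pred = torch.tensor([[10., 10., 50., 50.]], requires_grad=True)
gt = torch.tensor([[12., 8., 48., 52.]])
print(wiou_v1_loss(pred, gt))   # scalar loss, differentiable w.r.t. pred
```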

Language: English

Citations

0

A multi-objective dynamic detection model in autonomous driving based on an improved YOLOv8
Chaoran Li, Yinghui Zhu, Min Zheng et al.

Alexandria Engineering Journal, Journal Year: 2025, Volume and Issue: 122, P. 453 - 464

Published: March 18, 2025

Language: English

Citations

0

Adapting AI-based speed violation detection systems for Africa: A case study with Nigeria

Emmanuel Tobechukwu Ugboko, Sung Bae Jo

African Transport Studies, Journal Year: 2025, Volume and Issue: 3, P. 100035 - 100035

Published: Jan. 1, 2025

Language: English

Citations

0

Small object detection in remote sensing images through multi-scale feature fusion
Sumin Li, Jinhua Lin, Yijin Gang et al.

The Computer Journal, Journal Year: 2025, Volume and Issue: unknown

Published: April 14, 2025

Due to the challenges posed by background noise and the limited information available for small targets in remote sensing images, detection performance for such targets remains unsatisfactory. To address these issues and enhance accuracy, we propose an improved algorithm based on RTDETR, named Adaptive Selective Transformer. Firstly, in the feature extraction network, we introduce an adaptive convolutional enhancement module to improve the multi-scale feature extraction capability for low-resolution images. Secondly, we design a structure to extract detailed features from small-target images through enhanced representation learning, thereby generating features with stronger discriminative power. Finally, a hierarchical frequency attention mechanism achieves localized fusion of contextual awareness, effectively capturing the high-frequency local features of small targets. Experimental results demonstrate that the Adaptive Selective Transformer achieves superior detection performance, validating the effectiveness of our modifications to the original RTDETR model.
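
The hierarchical frequency attention module is specific to that paper, but the underlying idea of separating and re-weighting the high-frequency content of a feature map can be illustrated generically: the toy sketch below low-passes the features with average pooling, treats the residual as high-frequency detail, and learns a gate for it. It is an illustration under those assumptions, not the authors' mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighFreqGate(nn.Module):
    """Toy frequency-aware re-weighting: low-pass via average pooling, treat the
    residual as high-frequency detail, and learn a gate for it. Generic sketch only."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)  # crude low-pass
        high = x - low                                             # high-frequency residual
        return low + self.gate(high) * high                        # re-weighted recombination

x = torch.randn(1, 64, 40, 40)
print(HighFreqGate(64)(x).shape)   # torch.Size([1, 64, 40, 40])
```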

Language: English

Citations

0

YOLO-TSR: A Novel YOLOv8-Based Network for Robust Traffic Sign Recognition
Wajdi Farhat, Olfa Ben Rhaiem, Hassene Faiedh et al.

Transportation Research Record Journal of the Transportation Research Board, Journal Year: 2025, Volume and Issue: unknown

Published: April 24, 2025

Self-driving cars have recently gained in popularity because of rapid advances in vehicle and artificial intelligence technology. Autonomous cars' ability to drive effectively and safely depends heavily on their capacity to recognize traffic signs. Traditional visual recognition, conversely, relies on the extraction of features such as color and edges. Despite these efforts, the varying appearance of road signs across geographical areas, lighting changes, and complex background situations continue to prevent the development of accurate traffic sign recognition platforms. In this paper, we present YOLO-TSR, a novel network based on YOLOv8 that innovatively tackles the challenges encountered in traffic sign recognition (TSR). Our intention is to provide a method to detect signs under varied weather conditions. The proposed network was validated against three separate datasets: our privately curated dataset, the widely recognized German Traffic Sign Recognition Benchmark (GTSRB), and the Belgium dataset. We conducted numerous experiments to validate the algorithm's effectiveness. The algorithm achieves 98.79% accuracy, 92.18% recall, 96.21% mAP@0.5, and 84.32% mAP@0.5:0.95 on the GTSRB dataset. For the private dataset, it achieved an accuracy of 96.62%, a recall of 90.81%, an mAP@0.5 of 94.83%, and an mAP@0.5:0.95 of 81.70%. Furthermore, it maintains a consistent frame rate of 73 frames per second, which meets real-time detection requirements.
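
For context on the 73 frames-per-second figure, detector throughput is typically measured by timing repeated forward passes after a warm-up phase. A generic PyTorch timing sketch is given below; the stand-in model, input size, and iteration counts are placeholders, not the benchmark protocol used in the paper.

```python
import time
import torch
import torch.nn as nn

def measure_fps(model: nn.Module, input_size=(1, 3, 640, 640),
                warmup: int = 10, iters: int = 100) -> float:
    """Generic throughput measurement: warm up, then time repeated forward passes.
    Placeholder model and input; not the YOLO-TSR benchmark protocol."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return iters / (time.perf_counter() - start)

# Example with a stand-in model (replace with the detector under test).
print(f"{measure_fps(nn.Conv2d(3, 16, 3, padding=1)):.1f} FPS")
```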

Language: English

Citations

0