Advancing Object Discovery: Unveiling the Power of YOLO in Computer Vision Applications

A. Vijayalakshmi, B. Ebenezer Abishek, C. Arul Stephen et al.

Lecture Notes in Networks and Systems, Journal Year: 2025, Volume and Issue: unknown, P. 267 - 277

Published: Jan. 1, 2025

Language: English

Smart solutions for capsicum Harvesting: Unleashing the power of YOLO for Detection, Segmentation, growth stage Classification, Counting, and real-time mobile identification
Ayan Paul, Rajendra Machavaram, Ambuj et al.

Computers and Electronics in Agriculture, Journal Year: 2024, Volume and Issue: 219, P. 108832 - 108832

Published: March 15, 2024

Language: English

Citations: 26

An Interpretable Approach with Explainable AI for Heart Stroke Prediction
Parvathaneni Naga Srinivasu, Uddagiri Sirisha, Kotte Sandeep et al.

Diagnostics, Journal Year: 2024, Volume and Issue: 14(2), P. 128 - 128

Published: Jan. 5, 2024

Heart strokes are a significant global health concern, profoundly affecting the wellbeing of the population. Many research endeavors have focused on developing predictive models for heart stroke using ML and DL techniques. Nevertheless, prior studies often failed to bridge the gap between complex models and their interpretability in clinical contexts, leaving healthcare professionals hesitant to embrace them for critical decision-making. This study introduces a meticulously designed, effective, and easily interpretable approach to heart stroke prediction, empowered by explainable AI. Our contributions include a carefully designed model incorporating pivotal techniques such as resampling, data leakage prevention, and feature selection, emphasizing the model's comprehensibility for practitioners. This multifaceted approach holds the potential to significantly impact the field by offering a reliable and understandable tool for stroke prediction. In our research, we harnessed the Stroke Prediction Dataset, a valuable resource containing 11 distinct attributes. Applying these techniques, together with permutation importance and explainability methods like LIME, the model has achieved impressive results. While permutation importance provides insights into feature relevance globally, LIME complements this with local, instance-specific explanations. Together, they contribute to a comprehensive understanding of the Artificial Neural Network (ANN) model. The combination not only aids in identifying the features that drive overall model performance but also helps in interpreting and validating individual predictions. The ANN model achieved an outstanding accuracy rate of 95%.
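As a rough illustration of how the global and local explanation techniques mentioned above are typically combined, the following Python sketch pairs scikit-learn's permutation importance with a LIME tabular explainer around a simple neural-network classifier; the file name, column names, and network settings are hypothetical placeholders rather than the authors' actual pipeline.

# Minimal sketch: global (permutation importance) plus local (LIME) explanations
# for a tabular stroke classifier. The CSV path, "stroke" column, and ANN
# settings are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier      # simple ANN stand-in
from sklearn.inspection import permutation_importance
from lime.lime_tabular import LimeTabularExplainer    # pip install lime

df = pd.read_csv("stroke_data.csv")                   # hypothetical file
X, y = df.drop(columns=["stroke"]), df["stroke"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
ann.fit(X_train, y_train)

# Global view: how much shuffling each feature degrades test accuracy.
global_imp = permutation_importance(ann, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, global_imp.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.4f}")

# Local view: LIME explanation for a single prediction.
explainer = LimeTabularExplainer(X_train.values, feature_names=list(X.columns),
                                 class_names=["no stroke", "stroke"],
                                 mode="classification")
exp = explainer.explain_instance(X_test.values[0], ann.predict_proba, num_features=5)
print(exp.as_list())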

Language: English

Citations: 22

A Comprehensive Survey of Machine Learning Techniques and Models for Object Detection
Μαρία Τρίγκα, Ηλίας Δρίτσας

Sensors, Journal Year: 2025, Volume and Issue: 25(1), P. 214 - 214

Published: Jan. 2, 2025

Object detection is a pivotal research domain within computer vision, with applications spanning from autonomous vehicles to medical diagnostics. This comprehensive survey presents an in-depth analysis of the evolution and significant advancements in object detection, emphasizing the critical role of machine learning (ML) and deep learning (DL) techniques. We explore a wide spectrum of methodologies, ranging from traditional approaches to the latest DL models, thoroughly evaluating their performance, strengths, and limitations. Additionally, the survey delves into the various metrics for assessing model effectiveness, including precision, recall, and intersection over union (IoU), while addressing ongoing challenges in the field, such as managing occlusions, handling varying scales, and improving real-time processing capabilities. Furthermore, we critically examine recent breakthroughs, including advanced architectures like Transformers, and discuss future directions aimed at overcoming existing barriers. By synthesizing current advancements, this review provides valuable insights into enhancing the robustness, accuracy, and efficiency of object detection systems across diverse and challenging applications.
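To make the IoU metric mentioned above concrete, here is a minimal Python sketch (box format and example values are my own, not taken from the survey) that computes intersection over union for two axis-aligned boxes given as (x1, y1, x2, y2):

def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box vs. a ground-truth box; detections are typically counted as
# true positives when IoU exceeds a threshold such as 0.5.
print(iou((10, 10, 60, 60), (30, 30, 80, 80)))  # ~0.22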

Language: English

Citations: 5

Enhanced-YOLOv8: A new small target detection model
Lai Wei, Tong Yifei

Digital Signal Processing, Journal Year: 2024, Volume and Issue: 153, P. 104611 - 104611

Published: May 31, 2024

Language: English

Citations: 12

LUD-YOLO: A novel lightweight object detection network for unmanned aerial vehicle
Qingsong Fan, Yiting Li, Muhammet Deveci et al.

Information Sciences, Journal Year: 2024, Volume and Issue: 686, P. 121366 - 121366

Published: Aug. 16, 2024

Language: English

Citations: 11

Individual Tree Species Identification for Complex Coniferous and Broad-Leaved Mixed Forests Based on Deep Learning Combined with UAV LiDAR Data and RGB Images
Hao Zhong, Zheyu Zhang, Haoran Liu et al.

Forests, Journal Year: 2024, Volume and Issue: 15(2), P. 293 - 293

Published: Feb. 3, 2024

Automatic and accurate individual tree species identification is essential for the realization of smart forestry. Although existing studies have used unmanned aerial vehicle (UAV) remote sensing data for identification, the effects of different spatial resolutions and of combining multi-source data on automatic identification using deep learning methods still require further exploration, especially under complex forest stand conditions. Therefore, this study proposed an improved YOLOv8 model for identification from multisource data under complex stand conditions. Firstly, RGB and LiDAR data of natural coniferous and broad-leaved mixed forests under complex stand conditions in Northeast China were acquired via a UAV. Then, different spatial resolutions, scales, and band combinations were explored for identification based on the YOLOv8 model. Subsequently, an Attention Multi-level Fusion (AMF) Gather-and-Distribute (GD) YOLOv8 model was proposed according to the characteristics of the multisource data, in which the two branches of the AMF Net backbone are able to extract and fuse features from the different data sources separately. Meanwhile, the GD mechanism was introduced into the neck of the model in order to fully utilize the features extracted by the main trunk and complete the identification of the eight tree species in the study area. The results showed that, compared with current mainstream object detection algorithms, YOLOv8x on the RGB images achieved the highest mAP of 75.3%. When the spatial resolution was within 8 cm, the identification accuracy exhibited only slight variation; however, it decreased significantly when the resolution was greater than 15 cm. The model scales x, l, and m exhibited higher accuracy compared with the other scales. The band combinations DGB and PCA-D were superior, with mAP values of 75.5% and 76.2%, respectively. The proposed AMF GD YOLOv8 model showed a more significant improvement over single-source data, reaching an mAP of 81.0%. This study clarified the impact of the data characteristics, demonstrated the excellent performance of the proposed model, and provides a new solution and technical reference for forestry resource investigation based on multisource remote sensing data.
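For readers who want to reproduce the YOLOv8x baseline rather than the authors' AMF/GD variant (which is not part of any public package), a minimal sketch of running a pretrained detector with the ultralytics library; the weights file and image path below are placeholders:

# Run the baseline YOLOv8x detector on a single image. The AMF/GD modifications
# described above are NOT included here; "uav_rgb_tile.jpg" is a placeholder.
from ultralytics import YOLO   # pip install ultralytics

model = YOLO("yolov8x.pt")                          # pretrained YOLOv8x weights
results = model.predict("uav_rgb_tile.jpg", conf=0.25, imgsz=640)

for box in results[0].boxes:                        # one Results object per image
    cls_name = model.names[int(box.cls)]
    print(cls_name, float(box.conf), box.xyxy.tolist())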

Language: English

Citations: 9

Improved YOLOv7 Algorithm for Small Object Detection in Unmanned Aerial Vehicle Image Scenarios
Xinmin Li, Yingkun Wei, Jiahui Li et al.

Applied Sciences, Journal Year: 2024, Volume and Issue: 14(4), P. 1664 - 1664

Published: Feb. 19, 2024

Object detection in unmanned aerial vehicle (UAV) images has become a popular research topic in recent years. However, UAV images are captured from high altitudes with a large proportion of small objects and dense object regions, posing a significant challenge to detection. To solve this issue, we propose an efficient YOLOv7-UAV algorithm in which a low-level prediction head (P2) is added to detect objects from the shallow feature map, and the deep-level prediction head (P5) is removed to reduce the effect of excessive down-sampling. Furthermore, we modify the bidirectional feature pyramid network (BiFPN) structure with a weighted cross-level connection to enhance the fusion effectiveness of multi-scale feature maps in UAV images. To mitigate the mismatch between the prediction box and the ground-truth box, the SCYLLA-IoU (SIoU) function is employed as the regression loss to accelerate the training convergence process. Moreover, the proposed algorithm has been quantified and compiled in the Vitis-AI development environment and validated in terms of power consumption and hardware resources on an FPGA platform. The experiments show that hardware resource consumption is reduced by 28%, the mAP is improved by 3.9% compared with YOLOv7, and the FPGA implementation improves energy efficiency by 12 times compared with a GPU.
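The weighted cross-level fusion mentioned above follows the general BiFPN idea of giving each input feature map a learnable non-negative weight, normalized by the sum of all weights ("fast normalized fusion"). The PyTorch sketch below illustrates only that generic mechanism, not the exact YOLOv7-UAV connections; the channel and spatial sizes are assumptions:

# BiFPN-style weighted fusion: learnable non-negative weights, normalized so
# they sum to ~1, then a weighted sum of same-shaped feature maps.
import torch
import torch.nn as nn


class WeightedFusion(nn.Module):
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        w = torch.relu(self.weights)          # keep weights non-negative
        w = w / (w.sum() + self.eps)          # normalize to roughly sum to 1
        return sum(wi * fi for wi, fi in zip(w, features))


# Fuse two same-shaped maps (e.g., an upsampled deep map and a lateral map).
fuse = WeightedFusion(num_inputs=2)
p_shallow = torch.randn(1, 256, 80, 80)
p_deep_up = torch.randn(1, 256, 80, 80)
print(fuse([p_shallow, p_deep_up]).shape)     # torch.Size([1, 256, 80, 80])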

Language: English

Citations: 9

Synchronizing Object Detection: Applications, Advancements and Existing Challenges
Md. Tanzib Hosain, Asif Zaman, Mushfiqur Rahman Abir et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 54129 - 54167

Published: Jan. 1, 2024

From pivotal roles in autonomous vehicles, healthcare diagnostics, and surveillance systems to seamless integration with augmented reality, object detection algorithms stand as the cornerstone for unraveling the complexities of the visual world. Tracing the trajectory from conventional region-based methods to the latest neural network architectures reveals a technological renaissance in which algorithms metamorphose into digital artisans. However, this journey is not without hurdles, prompting researchers to grapple with real-time detection, robustness in varied environments, and interpretability amidst the intricacies of deep learning. The allure of addressing issues such as occlusions, scale variations, and fine-grained categorization propels exploration into uncharted territories, beckoning the scholarly community to contribute to an ongoing saga of innovation and discovery. This research offers a comprehensive panorama, encapsulating the applications reshaping our world, the advancements pushing the boundaries of perception, and the open challenges extending an invitation to the next generation of visionaries to explore the frontiers within object detection.

Language: English

Citations: 9

Smart Manufacturing System Using LLM for Human-Robot Collaboration: Applications and Challenges
Muhammad Younas, Ali Abdullah, Ghulam Muhayyu Din et al.

European Journal of Theoretical and Applied Sciences, Journal Year: 2025, Volume and Issue: 3(1), P. 215 - 226

Published: Jan. 1, 2025

In the era of Industry 4.0, emerging technologies such as artificial intelligence (AI), big data, and the Internet of Things (IoT) are rapidly transforming and upgrading the manufacturing industry, with robots playing an increasingly crucial role in this process. These advancements lay the foundation for the high-quality development of intelligent manufacturing. With the introduction of Industry 5.0, the human-centered approach has gained significant attention, giving rise to the new field of human-centric manufacturing. The distinction between humans and manufacturing systems is becoming blurred, and research on human-robot collaboration has become a hot topic. This paper proposes a prototype method for smart collaborative operation systems, based on the integration of a large language model (LLM) and machine vision. By leveraging the strengths of computer vision and LLMs, the method aims to enhance human-robot collaboration in manufacturing systems. Additionally, the study discusses the applications and challenges of the proposed model.
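A highly simplified sketch of the kind of pipeline the paper describes: machine-vision detections are serialized into a text prompt and handed to an LLM that proposes the next collaborative action. The detector weights, image path, and the llm_complete() stub are hypothetical placeholders, not the authors' implementation:

# Vision-to-LLM pipeline sketch: detect objects, describe the scene as text,
# ask a language model for the next robot action. llm_complete() is a stub.
from ultralytics import YOLO   # pip install ultralytics

def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call an LLM service here.
    return "Pick up the screwdriver on the left and hand it to the operator."

model = YOLO("yolov8n.pt")
detections = model.predict("workbench.jpg", conf=0.3)[0]
scene = ", ".join(model.names[int(b.cls)] for b in detections.boxes)

prompt = (f"The workbench camera sees: {scene}. "
          "The operator asked for a screwdriver. What should the robot do next?")
print(llm_complete(prompt))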

Language: English

Citations: 1

NLDETR-YOLO: A decision-making method for apple thinning period
Xiangyu Wang, Tinggao Yang, Zhenyu Chen et al.

Scientia Horticulturae, Journal Year: 2025, Volume and Issue: 341, P. 113991 - 113991

Published: Feb. 1, 2025

Language: English

Citations: 1