A Systematic Review of Deep Learning for Intelligent Transportation Systems with Analysis and Perspectives DOI Open Access
Aria Hendrawan, Rahmat Gernowo, Oky Dwi Nurhayati

et al.

JURNAL INFOTEL, Journal Year: 2024, Volume and Issue: 16(2), P. 369 - 397

Published: May 21, 2024

This study presents a systematic review of deep learning for intelligent transportation systems. Statistics are used to identify the most cited articles, the numbers of articles and citations, and the most productive and influential authors, institutions, and countries or regions. Key topics and their changing patterns are discovered using the authors' keywords, and common issues and themes are revealed through flow maps showing the corresponding trends. A co-occurrence keyword network is also developed to present the research landscape and hotspots in the field. The results explain how publications have changed over the past seven years. Researchers can use this for a deeper understanding of the current state and future trends of deep learning's role in intelligent transportation systems.
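
As a rough illustration of the co-occurrence keyword network described above, the sketch below builds such a network with networkx from per-paper author-keyword lists; the example keywords, the weighting scheme, and the hotspot ranking are illustrative assumptions, not the review's actual data or method.

```python
# Minimal sketch of a keyword co-occurrence network, assuming author keywords
# have already been extracted per paper. The keyword lists are hypothetical.
from itertools import combinations
import networkx as nx

papers_keywords = [
    ["deep learning", "intelligent transportation systems", "traffic prediction"],
    ["deep learning", "object detection", "autonomous driving"],
    ["intelligent transportation systems", "object detection", "deep learning"],
]

G = nx.Graph()
for keywords in papers_keywords:
    # Every unordered pair of keywords in the same paper co-occurs once.
    for a, b in combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Nodes with high weighted degree correspond to research hotspots.
hotspots = sorted(G.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
print(hotspots[:5])
```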

Language: English

Enhancing Emergency Vehicle Detection: A Deep Learning Approach with Multimodal Fusion DOI Creative Commons

Muhammad Zohaib,

Muhammad Asim, Mohammed ElAffendi

et al.

Mathematics, Journal Year: 2024, Volume and Issue: 12(10), P. 1514 - 1514

Published: May 13, 2024

Emergency vehicle detection plays a critical role in ensuring timely responses and reducing accidents in modern urban environments. However, traditional methods that rely solely on visual cues face challenges, particularly under adverse conditions. The objective of this research is to enhance emergency vehicle detection by leveraging the synergies between visual and acoustic information. By incorporating advanced deep learning techniques for both types of data, the aim is to significantly improve detection accuracy and response times. To achieve this goal, the authors developed an attention-based temporal spectrum network (ATSN) with an attention mechanism specifically designed for ambulance siren sound detection. In parallel, visual detection tasks were enhanced by implementing a Multi-Level Spatial Fusion YOLO (MLSF-YOLO) architecture. To combine the information effectively, a stacking ensemble technique was employed, creating a robust detection framework. This approach capitalizes on the strengths of both modalities, allowing a comprehensive analysis that surpasses existing methods. The research achieved remarkable results, including a misdetection rate of only 3.81% and an accuracy of 96.19% when applied to data containing emergency vehicles. These findings represent significant progress toward real-world applications, demonstrating the effectiveness of the approach in improving emergency vehicle detection systems.
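
Since the ATSN and MLSF-YOLO models are specific to the paper, the sketch below only illustrates the stacking-ensemble idea in general terms: scores from an audio model and a visual model are combined by a meta-classifier. The synthetic scores, feature layout, and logistic-regression meta-learner are all assumptions for illustration.

```python
# Minimal stacking-ensemble sketch for audio-visual emergency vehicle detection.
# audio_scores / visual_scores stand in for outputs of a siren-sound model and a
# visual detector; here they are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, 2, size=n)                    # 1 = emergency vehicle present
audio_scores = labels * 0.6 + rng.normal(0.2, 0.2, n)  # hypothetical base-model outputs
visual_scores = labels * 0.5 + rng.normal(0.3, 0.2, n)

# Stack the two base-model scores as meta-features.
meta_features = np.column_stack([audio_scores, visual_scores])

# The meta-learner learns how much to trust each modality.
meta_model = LogisticRegression().fit(meta_features[:400], labels[:400])
print("held-out accuracy:", meta_model.score(meta_features[400:], labels[400:]))
```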

Language: English

Citations

11

Automated Region of Interest-Based Data Augmentation for Fallen Person Detection in Off-Road Autonomous Agricultural Vehicles DOI Creative Commons

Hwapyeong Baek,

Seunghyun Yu,

Seungwook Son

et al.

Sensors, Journal Year: 2024, Volume and Issue: 24(7), P. 2371 - 2371

Published: April 8, 2024

Due to the global population increase and the recovery of agricultural demand after the COVID-19 pandemic, the importance of automation and autonomous agricultural vehicles is growing. Fallen person detection is critical for preventing fatal accidents during autonomous vehicle operations. However, there is a challenge due to the relatively limited dataset for fallen persons in off-road environments compared with on-road pedestrian datasets. To enhance generalization performance using object detection technology, data augmentation is necessary. This paper proposes a technique called Automated Region of Interest Copy-Paste (ARCP) to address the issue of data scarcity. The technique involves copying real objects obtained from public source datasets and then pasting them onto the background dataset. Segmentation annotations for these objects are generated with YOLOv8x-seg and Grounded-Segment-Anything, respectively. The proposed algorithm is applied to automatically produce the augmented dataset based on the segmentation annotations, and it encompasses annotation generation, Intersection over Union-based segment setting, and configuration. When ARCP is applied, significant improvements in accuracy are observed for two state-of-the-art detectors, the anchor-based YOLOv7x and the anchor-free YOLOv8x, showing gains of 17.8% (from 77.8% to 95.6%) and 12.4% (from 83.8% to 96.2%), respectively. This suggests high applicability for addressing data-scarcity challenges and is expected to have an impact on the advancement of detection technology and the agricultural industry.
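
The central copy-paste step of an ARCP-style augmentation can be sketched generically as below; the file paths are placeholders, the mask is assumed to be precomputed (in the paper it comes from YOLOv8x-seg or Grounded-Segment-Anything), and the IoU-based placement logic is omitted.

```python
# Minimal copy-paste augmentation sketch: paste a segmented object onto a
# background image at a random location. Paths and mask are placeholders.
import cv2
import numpy as np

background = cv2.imread("field_background.jpg")             # off-road background frame
source = cv2.imread("person_crop.png")                      # cropped source object
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)  # binary object mask

h, w = source.shape[:2]
H, W = background.shape[:2]
y0 = np.random.randint(0, H - h)
x0 = np.random.randint(0, W - w)

roi = background[y0:y0 + h, x0:x0 + w]
obj = cv2.bitwise_and(source, source, mask=mask)            # keep only object pixels
hole = cv2.bitwise_and(roi, roi, mask=cv2.bitwise_not(mask))
background[y0:y0 + h, x0:x0 + w] = cv2.add(hole, obj)       # composite object into scene

cv2.imwrite("augmented.jpg", background)
# The pasted box (x0, y0, x0 + w, y0 + h) would also be written out as a new label.
```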

Language: English

Citations

4

Fine-grained vehicle recognition under low light conditions using EfficientNet and image enhancement on LiDAR point cloud data DOI Creative Commons

Guanqiang Ruan,

Tao Hu, Caichang Ding

et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: Feb. 8, 2025

The detection and recognition of vehicles are crucial components of environmental perception in autonomous driving. Commonly used sensors include cameras and LiDAR. The performance of camera-based data collection is susceptible to lighting interference, whereas LiDAR, while unaffected by lighting conditions, can typically only achieve coarse-grained vehicle classification. This study introduces a novel method for fine-grained vehicle model recognition using LiDAR under low-light conditions. The approach involves collecting point cloud data with LiDAR and performing projection transformation, enhancing the contrast with contrast-limited adaptive histogram equalization combined with Gamma correction, and implementing recognition based on EfficientNet. Experimental results demonstrate that the proposed method achieves an accuracy of 98.88% and an F1-score of 98.86%, showcasing excellent performance.
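
The enhancement stage (contrast-limited adaptive histogram equalization followed by gamma correction) is a standard image operation and can be sketched with OpenCV as follows; the clip limit, tile size, gamma value, and file path are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of CLAHE + gamma correction on a projected LiDAR intensity image.
# Parameter values are illustrative.
import cv2
import numpy as np

img = cv2.imread("lidar_projection.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Contrast Limited Adaptive Histogram Equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(img)

# Gamma correction via a lookup table (gamma < 1 brightens dark regions).
gamma = 0.7
table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)]).astype("uint8")
enhanced = cv2.LUT(equalized, table)

cv2.imwrite("enhanced.png", enhanced)
# The enhanced image would then be fed to an EfficientNet classifier.
```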

Language: English

Citations

0

Advancing vehicle detection for autonomous driving: integrating computer vision and machine learning techniques for real-world deployment DOI
Wael Farag, M. Fatouh

Journal of Control and Decision, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 18

Published: Feb. 25, 2025

Road-object detection and recognition are crucial for self-driving vehicles to achieve autonomy. Detecting and tracking other vehicles is a key task, but deep-learning methods, while effective, demand high computational power and expensive hardware. This paper proposes a lightweight vehicle detection and tracking technique (LWVDT) designed for low-cost CPUs without compromising robustness, speed, or accuracy. Suitable for advanced driving assistance systems (ADAS) and autonomous driving subsystems, LWVDT combines computer vision techniques such as color and spatial feature extraction and Histogram of Oriented Gradients (HOG) with machine learning methods such as support vector machines (SVM) to optimize performance. The algorithm processes raw RGB images to generate boundary boxes and tracks them across frames. Evaluated using real-road images and videos and the KITTI database under various conditions, it achieves up to 87% accuracy, demonstrating its effectiveness in diverse environments.
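
A generic HOG-plus-SVM patch classifier, the classical building block that LWVDT combines with color and spatial features, can be sketched as below; the synthetic patches, HOG parameters, and SVM settings are illustrative assumptions.

```python
# Minimal HOG + linear SVM sketch for vehicle / non-vehicle patch classification.
# Synthetic patches stand in for real 64x64 crops of vehicles and background.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_hog(patch):
    # 9 orientations, 8x8 pixel cells, 2x2 cell blocks: common HOG settings.
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

rng = np.random.default_rng(0)
patches = rng.random((200, 64, 64))          # stand-in for real image crops
labels = rng.integers(0, 2, size=200)        # 1 = vehicle, 0 = background

features = np.array([extract_hog(p) for p in patches])
clf = LinearSVC(C=1.0)
clf.fit(features, labels)

# At inference time, a sliding window over each frame is scored the same way,
# and positive windows become candidate bounding boxes tracked across frames.
```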

Language: English

Citations

0

Explainable AI and monocular vision for enhanced UAV navigation in smart cities: prospects and challenges DOI Creative Commons

Shumaila Javaid,

Muhammad Asghar Khan, Hamza Fahim

et al.

Frontiers in Sustainable Cities, Journal Year: 2025, Volume and Issue: 7

Published: March 14, 2025

Explainable Artificial Intelligence (XAI) is increasingly pivotal in Unmanned Aerial Vehicle (UAV) operations within smart cities, enhancing trust and transparency in AI-driven systems by addressing the 'black-box' limitations of traditional Machine Learning (ML) models. This paper provides a comprehensive overview of the evolution of UAV navigation and control systems, tracing the transition from conventional methods such as GPS and inertial navigation to advanced AI- and ML-driven approaches. It investigates the transformative role of XAI, particularly in safety-critical applications where interpretability is essential. A key focus of this study is the integration of XAI into monocular vision-based navigation frameworks, which, despite their cost-effectiveness and lightweight design, face challenges such as depth perception ambiguities and limited fields of view. Embedding XAI techniques enhances the reliability of these frameworks by providing clearer insights into navigation paths, obstacle detection, and avoidance strategies. This advancement is crucial for adaptability in dynamic urban environments, including infrastructure changes, traffic congestion, and environmental monitoring. Furthermore, the work examines how explainable frameworks foster decision-making in high-stakes scenarios such as planning and disaster response. It also explores critical challenges, including scalability, evolving conditions, balancing explainability with performance, and ensuring robustness in adverse environments. Additionally, it highlights the emerging potential of integrating vision models with Large Language Models (LLMs) to further enhance situational awareness and autonomous decision-making. Accordingly, it offers actionable insights to advance next-generation UAV technologies with transparency. The findings underscore XAI's role in bridging existing research gaps and accelerating the deployment of intelligent, explainable UAV systems in future smart cities.
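
As one concrete example of the post-hoc explanation techniques discussed above, a simple gradient-based saliency map highlights which pixels most influence a vision model's prediction; the torchvision model and random input below are placeholders and do not represent an actual UAV navigation stack.

```python
# Minimal gradient-saliency sketch: which pixels most influence the prediction
# of a vision model. An untrained torchvision ResNet stands in for a UAV
# perception model, and a random tensor stands in for a camera frame.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()                    # placeholder model
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder camera frame

logits = model(image)
top_class = logits.argmax(dim=1)
score = logits[0, top_class.item()]
score.backward()                                         # gradient of top score w.r.t. pixels

# Saliency: per-pixel maximum absolute gradient over color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)                                    # 224 x 224 map of influential regions
```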

Language: English

Citations

0

Sustainable AI for plant disease classification using ResNet18 in few-shot learning DOI Creative Commons

Fareeha Naveed,

Adven Masih, Jabar Mahmood

et al.

Array, Journal Year: 2025, Volume and Issue: unknown, P. 100395 - 100395

Published: April 1, 2025

Language: English

Citations

0

Interactive Deep Learning System for Automated Car Damage Detection: Multi-Model Evaluation and Interactive Web Deployment DOI

S. Madhu,

Bharathi Maddikatla,

Ranjitha Padakanti

et al.

Published: May 8, 2025

This project presents an automated framework for vehicle damage evaluation employing deep learning methodologies, designed to optimize assessment procedures within automotive service environments. By implementing the YOLOv9 computer vision architecture, the system enables rapid identification of damaged vehicular components through advanced pattern recognition, reducing reliance on labor-intensive manual inspections. The model underwent training on an extensive curated dataset comprising 8,450 annotated images capturing diverse damage morphologies across multiple perspectives, including frontal collisions, lateral impacts, and rear-end accidents. The framework integrates physics-informed augmentation strategies to enhance environmental adaptability, particularly addressing challenges posed by variable lighting conditions and reflective surfaces. A modular processing pipeline facilitates scalable deployment, with quantization techniques optimized for edge computing devices, demonstrating practical applicability in service center operations. The system incorporates a web-based interface enabling real-time visualization and report generation, significantly streamlining technician workflows. Experimental results indicate substantial improvements in inspection efficiency, with the architecture achieving 87% mean average precision (mAP@0.5) while maintaining computational efficiency. Quantized variants exhibited a 68% reduction in memory footprint with minimal accuracy degradation. Field validations conducted in service centers confirmed the system's operational effectiveness, highlighting strong correlations between damage complexity, inspection duration, and detection capabilities. The research establishes foundational insights for future advancements in 3D reconstruction and adaptive diagnostic systems.
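
The paper's deployment pipeline is not public, but the kind of post-training quantization it refers to can be sketched in PyTorch; the toy model below stands in for the detector, only linear layers are quantized dynamically here, and the measured size reduction is for illustration only.

```python
# Minimal sketch of post-training dynamic quantization and the resulting size
# reduction. A toy model stands in for the detector; full detector quantization
# on edge devices would typically use static / INT8 export tooling instead.
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size(m):
    # Serialize the weights in memory and report the byte count.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

fp32, int8 = serialized_size(model), serialized_size(quantized)
print(f"size reduction: {100 * (1 - int8 / fp32):.1f}%")
```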

Language: English

Citations

0

Vehicle and Plate Detection for Intelligent Transport Systems: Performance Evaluation of Models YOLOv5 and YOLOv8 DOI

Matheus H. F. Afonso,

Eduardo Henrique Teixeira, Mateus R. da Cruz

et al.

Published: Oct. 9, 2023

Intelligent transport systems aim to enhance efficiency and safety in urban mobility, employing technologies like computer vision to detect vehicles and license plates in images and footage. Regression-based algorithms such as You Only Look Once (YOLO) can be applied in this context. Hence, this work assesses the performance of the YOLOv5 and YOLOv8 models in automatically detecting vehicles and license plates. The training and validation processes involved a curated dataset and transfer learning techniques, with attention to the quality and quantity of images, encompassing various locations and lighting conditions to ensure data diversity and representativeness. Confusion matrix analysis revealed that the YOLOv8 model slightly outperformed YOLOv5, with an accuracy of around 97.98% and a precision rating of 97.19%. In addition, the processing time of one model was lower than that of the other.
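
A comparison of this kind can be sketched with the ultralytics API, which ships both YOLOv5u and YOLOv8 checkpoints; the weight files, the dataset YAML, and the metric attribute below are assumptions and may differ across library versions.

```python
# Minimal sketch of validating two detector checkpoints on the same dataset.
# "plates.yaml" is a hypothetical vehicle/plate dataset description file.
from ultralytics import YOLO

for weights in ("yolov5nu.pt", "yolov8n.pt"):    # placeholder pretrained checkpoints
    model = YOLO(weights)
    metrics = model.val(data="plates.yaml")      # evaluate on the shared validation set
    print(weights, "mAP@0.5:", metrics.box.map50)
```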

Language: English

Citations

9

Vehicle and Pedestrian Detection Based on Improved YOLOv7-Tiny DOI Open Access
Zhen Liang, Wei Wang,

Ruifeng Meng

et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(20), P. 4010 - 4010

Published: Oct. 12, 2024

To improve the detection accuracy of vehicles and pedestrians in traffic scenes using object detection algorithms, this paper presents the modification, compression, and deployment of the typical single-stage algorithm YOLOv7-tiny. In the model improvement section: firstly, to address the problem of missed detection of small objects, shallower feature layer information is incorporated into the original feature fusion branch, forming a four-scale detection head; secondly, a Multi-Stage Feature Fusion (MSFF) module is proposed to fully integrate shallow, middle, and deep features and extract more comprehensive information. In the model compression section, Layer-Adaptive Magnitude-based Pruning (LAMP) and the Torch-Pruning library are combined, setting different pruning rates for the improved model. The V7-tiny-P2-MSFF model, pruned by 45% with LAMP, is deployed on the embedded platform NVIDIA Jetson AGX Xavier. Experimental results show that the model achieves a 12.3% increase in mAP@0.5, while the parameter volume, computation, and model size are reduced by 76.74%, 7.57%, and 70.94%, respectively. Moreover, the inference speed for a single image on the quantized model on Xavier is 9.5 ms.
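
The pruning step can be illustrated with PyTorch's built-in pruning utilities; note that the paper uses LAMP scores through the Torch-Pruning library, which structurally removes channels, whereas the sketch below only zeroes out the smallest-magnitude weights in a toy network.

```python
# Minimal magnitude-based pruning sketch with torch.nn.utils.prune.
# A toy convolutional stack stands in for the YOLOv7-tiny backbone.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Zero out the 45% of weights with the smallest L1 magnitude per layer.
        prune.l1_unstructured(module, name="weight", amount=0.45)
        prune.remove(module, "weight")   # make the pruning permanent

zeros = sum((m.weight == 0).sum().item()
            for m in model.modules() if isinstance(m, nn.Conv2d))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Conv2d))
print(f"global conv sparsity: {zeros / total:.2%}")
```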

Language: English

Citations

3

Vehicle Detection Using Haar Features for Effective Traffic Management Using Machine Learning DOI
Jyoti Kukade,

Megha Patidar,

Rahul Singh Pawar

et al.

Learning and analytics in intelligent systems, Journal Year: 2025, Volume and Issue: unknown, P. 383 - 392

Published: Jan. 1, 2025

Language: English

Citations

0