
Multi-UAV path planning for multiple emergency payloads delivery in natural disaster scenarios DOI Creative Commons

Z. A. Kutpanova, Mustafa Raad Kadhim, Xu Zheng et al.

Journal of Electronic Science and Technology, Journal Year: 2025, Volume and Issue: unknown, P. 100303 - 100303

Published: Feb. 1, 2025

Language: English

Citations: 1

Deep Reinforcement Learning for UAV Target Search and Continuous Tracking in Complex Environments with Gaussian Process Regression and Prior Policy Embedding DOI Open Access
Zhihui Feng, Xitai Na, Hai Su et al.

Electronics, Journal Year: 2025, Volume and Issue: 14(7), P. 1330 - 1330

Published: March 27, 2025

In recent years, unmanned aerial vehicles (UAVs) have shown substantial application value in continuous target tracking tasks in complex environments. Due to the target's movement behavior and the complexities of the surrounding environment, the UAV is prone to losing track of the target. To tackle this issue, this paper presents a reinforcement learning (RL) approach that combines search and tracking. During the search phase, spatial information entropy is employed to guide the UAV, avoiding redundant searches and thus enhancing target acquisition efficiency. In the event of target loss, Gaussian process regression (GPR) is used to predict the target's trajectory, thereby reducing the time needed for re-localization. In addition, to address the sample-efficiency limitations of conventional RL, a Kolmogorov–Arnold network-based deep deterministic policy gradient (KbDDPG) algorithm with prior policy embedding is proposed for controller training. Simulation results demonstrate that the method outperforms traditional methods in complex environments and improves the UAV's ability to re-locate the target after loss. The KbDDPG algorithm efficiently leverages the prior policy, leading to accelerated convergence and enhanced performance.

Language: English

Citations: 0
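The GPR-based re-localization step described in the abstract above can be illustrated with a minimal sketch: independent Gaussian process regressors are fit to a target's recent x and y positions over time and extrapolated over a short horizon with uncertainty estimates. This is not the paper's implementation; scikit-learn, the synthetic observation history, and all parameter values below are assumptions for illustration only.

```python
# Minimal sketch of GPR-based trajectory extrapolation (illustrative, not the paper's code).
# Assumptions: scikit-learn is available; the target's recent 2-D positions are observed
# at known timestamps; x and y coordinates are regressed independently over time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical observation history: timestamps (s) and target positions (m).
t_obs = np.linspace(0.0, 9.0, 10).reshape(-1, 1)
xy_obs = np.column_stack([2.0 * t_obs.ravel(),                 # roughly linear x motion
                          5.0 * np.sin(0.3 * t_obs.ravel())])  # oscillating y motion

kernel = ConstantKernel(1.0) * RBF(length_scale=2.0)
gpr_x = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(t_obs, xy_obs[:, 0])
gpr_y = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(t_obs, xy_obs[:, 1])

# Predict where the target is likely to be over the next few seconds after loss,
# with one-standard-deviation uncertainty for sizing the re-search region.
t_future = np.linspace(9.0, 13.0, 5).reshape(-1, 1)
mu_x, std_x = gpr_x.predict(t_future, return_std=True)
mu_y, std_y = gpr_y.predict(t_future, return_std=True)
for t, mx, my, sx, sy in zip(t_future.ravel(), mu_x, mu_y, std_x, std_y):
    print(f"t={t:4.1f}s  predicted position ({mx:6.2f}, {my:6.2f})  +/-({sx:.2f}, {sy:.2f})")
```

The predicted mean gives a search waypoint after target loss, while the standard deviation suggests how wide a re-search region to sweep.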

A New Hybrid Reinforcement Learning with Artificial Potential Field Method for UAV Target Search DOI Creative Commons
Jin Fang, Zhihao Ye, Mengxue Li et al.

Sensors, Journal Year: 2025, Volume and Issue: 25(9), P. 2796 - 2796

Published: April 29, 2025

Autonomous navigation and target search for unmanned aerial vehicles (UAVs) have extensive application potential in rescue, surveillance, and environmental monitoring. Reinforcement learning (RL) has demonstrated excellent performance in real-time UAV navigation through dynamic optimization of decision-making strategies, but its use in large-scale environments with obstacle avoidance is still limited by slow convergence and low computational efficiency. To address this issue, a hybrid framework combining RL with the artificial potential field (APF) method is proposed to improve the algorithm. Firstly, the task scenario and training environment are constructed. Secondly, RL is integrated with APF to form a framework that combines global and local strategies. Thirdly, the hybrid method is compared with standalone algorithms to analyze their differences. The experimental results demonstrate that the proposed method significantly outperforms the standalone algorithms in terms of efficiency and performance. Specifically, SAC-APF achieves a 161% improvement in success rate over the baseline SAC model, increasing from 0.282 to 0.736 in the evaluated scenarios.

Language: English

Citations: 0
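As a rough illustration of how a learned policy's global action can be blended with a local artificial potential field correction, the sketch below computes attractive and repulsive APF terms and mixes them with a placeholder policy output. It is not the SAC-APF implementation from the paper; the function names, gains, blending weight, and scenario values are hypothetical.

```python
# Minimal sketch of blending an RL policy action with an artificial potential field
# (APF) correction (illustrative only; not the SAC-APF implementation).
# Assumptions: 2-D positions in metres; the "policy" here is a stand-in velocity command.
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=2.0, rho0=3.0):
    """Attractive pull toward the goal plus repulsive pushes from nearby obstacles."""
    force = k_att * (goal - pos)                      # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff) + 1e-6
        if d < rho0:                                  # repulsion only inside radius rho0
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (diff / d)
    return force

def hybrid_action(policy_action, pos, goal, obstacles, blend=0.5, max_speed=2.0):
    """Blend the learned (global) action with the local APF correction and clip speed."""
    action = (1.0 - blend) * policy_action + blend * apf_force(pos, goal, obstacles)
    norm = np.linalg.norm(action)
    return action if norm <= max_speed else action * (max_speed / norm)

# Hypothetical rollout step: a fixed stand-in for the RL policy output.
pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([4.0, 4.5]), np.array([7.0, 8.0])]
policy_action = np.array([1.0, 0.8])                  # placeholder policy velocity command
print(hybrid_action(policy_action, pos, goal, obstacles))
```

In such a scheme the blend weight and repulsion radius would be tuned against the trained policy; the values above are arbitrary.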

Knowledge Graph of Unmanned Aerial Vehicle Reconnaissance Research DOI

Xiaodong Zhang, Xi Qin, JG Cai et al.

et al.

Published: Dec. 22, 2024

Language: English

Citations: 0