Joint Optimization Strategy of Task Migration and Power Allocation Based on Soft Actor-Critic in Unmanned Aerial Vehicle-Assisted Internet of Vehicles Environment
Jingpan Bai, Yifan Zhao, Bo Yang, et al.

Drones, Journal Year: 2024, Volume and Issue: 8(11), P. 693 - 693

Published: Nov. 20, 2024

In recent years, the unmanned aerial vehicle-assisted internet of vehicles has been extensively studied to enhance communication and computation services in vehicular environments where ground infrastructures are limited or absent. However, due to the limited service range and battery life of unmanned aerial vehicles, along with the high mobility of vehicles, an unmanned aerial vehicle cannot continuously cover and serve the same vehicle, leading to interruptions in application services. Therefore, this paper proposes a joint optimization strategy for task migration and power allocation based on soft actor-critic (JOTMAP-SAC). First, communication models, computational resource models, and the dynamic coordinates of each node are established sequentially. The joint optimization problem of task migration and power allocation is then formulated. Considering the dynamic nature of the environment and the continuity of the action space, a soft actor-critic-based algorithm is designed. This algorithm iteratively finds the optimal solution to the formulated problem, thereby reducing task processing delay and ensuring continuous task processing.
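As an illustration of the continuous action space mentioned above, the following Python sketch (not the authors' code; the class name, channel model, and all parameter values are assumptions) maps a tanh-squashed SAC action onto a task-migration ratio and a transmit power, and returns the negated processing delay as the reward.

import numpy as np

# Hypothetical mapping from a SAC action in [-1, 1]^2 to a joint
# task-migration and power-allocation decision; values are illustrative.
class UavIovStepSketch:
    def __init__(self, cpu_cycles=1e9, data_bits=2e6, f_local=1e9, f_uav=5e9,
                 bandwidth=1e6, noise_power=1e-13, channel_gain=1e-7, p_max=1.0):
        self.cpu_cycles, self.data_bits = cpu_cycles, data_bits
        self.f_local, self.f_uav = f_local, f_uav
        self.bandwidth, self.noise_power = bandwidth, noise_power
        self.channel_gain, self.p_max = channel_gain, p_max

    def step(self, action):
        migrate_ratio = 0.5 * (action[0] + 1.0)           # fraction of the task migrated to the UAV
        tx_power = 0.5 * (action[1] + 1.0) * self.p_max   # vehicle transmit power in watts
        # Shannon-style uplink rate under the assumed channel gain and noise power.
        rate = self.bandwidth * np.log2(1.0 + tx_power * self.channel_gain / self.noise_power)
        t_local = (1.0 - migrate_ratio) * self.cpu_cycles / self.f_local
        t_offload = (migrate_ratio * self.data_bits / max(rate, 1e-9)
                     + migrate_ratio * self.cpu_cycles / self.f_uav)
        delay = max(t_local, t_offload)   # local and offloaded parts execute in parallel
        return -delay                     # SAC maximizes reward, i.e. minimizes delay

# Example: one decision step with an arbitrary squashed action.
reward = UavIovStepSketch().step(np.array([0.2, 0.8]))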

Language: English

Task offloading and multi-cache placement in multi-access mobile edge computing
Linbo Zhai, Ping Zhao, Kai Xue, et al.

Computer Networks, Journal Year: 2025, Volume and Issue: unknown, P. 111030 - 111030

Published: Jan. 1, 2025

Language: English

Citations: 4

Optimizing Task Offloading with Metaheuristic Algorithms Across Cloud, Fog, and Edge Computing Networks: A Comprehensive Survey and State-of-the-Art Schemes
Amir M. Rahmani, Amir Haider, Parisa Khoshvaght, et al.

Sustainable Computing Informatics and Systems, Journal Year: 2025, Volume and Issue: unknown, P. 101080 - 101080

Published: Jan. 1, 2025

Language: English

Citations: 3

Deep Reinforcement Learning-based Mining Task Offloading Scheme for Intelligent Connected Vehicles in UAV-aided MEC
Chunlin Li, Kun Jiang, Yong Zhang, et al.

ACM Transactions on Design Automation of Electronic Systems, Journal Year: 2024, Volume and Issue: 29(3), P. 1 - 29

Published: March 20, 2024

The convergence of unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) networks and blockchain transforms the existing networking paradigm. However, in the temporary hotspot scenario for intelligent connected vehicles (ICVs) in UAV-aided MEC networks, deploying blockchain-based services and applications is generally impossible due to their high computational resource and storage requirements. One possible solution is to offload part or all of the tasks to edge servers wherever possible. Unfortunately, given the limited availability and mobility of vehicles, there is still a lack of simple solutions that can support low-latency and higher-reliability ICVs. In this article, we study the task offloading problem of minimizing the total system latency and obtaining the optimal offloading scheme, subject to constraints on the hover position coordinates of the UAV, fixed bonuses, flexible transaction fees, mining rates, mining difficulty, mining costs, and the battery energy consumption of the UAV. The problem is confirmed to be a challenging linear integer planning problem, and we formulate it as a constrained Markov decision process. Deep Reinforcement Learning (DRL) has excellently solved sequential decision-making problems in the dynamic ICV environment; therefore, we propose a novel distributed DRL-based P-D3QN approach that uses a Prioritized Experience Replay strategy and the dueling double deep Q-network (D3QN) algorithm to solve the offloading policy effectively. Finally, experimental results show that, compared with the benchmark, P-D3QN brings about a 26.24% improvement in latency and an increase of 42.26% in system utility.
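The abstract names two standard ingredients of P-D3QN: a dueling double deep Q-network and prioritized experience replay. The PyTorch sketch below is an assumption-based illustration (not the paper's implementation) of a dueling Q-head and a double-DQN loss weighted by importance-sampling weights, with absolute TD errors returned for priority updates.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, s):
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_loss(online, target, batch, gamma, is_weights):
    # batch: states, integer actions, rewards, next states, done flags (tensors).
    s, a, r, s2, done = batch
    next_a = online(s2).argmax(dim=1, keepdim=True)        # online net selects the action
    with torch.no_grad():
        q_next = target(s2).gather(1, next_a).squeeze(1)   # target net evaluates it
    y = r + gamma * (1.0 - done) * q_next
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_error = y - q
    # Importance-sampling weights correct the bias introduced by prioritized replay.
    loss = (is_weights * td_error.pow(2)).mean()
    return loss, td_error.detach().abs()                   # |TD error| refreshes priorities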

Language: English

Citations: 10

Federated learning based on Stackelberg game in unmanned-aerial-vehicle-enabled mobile edge computing
Chunlin Li, Mingyang Song, Youlong Luo, et al.

Expert Systems with Applications, Journal Year: 2023, Volume and Issue: 235, P. 121023 - 121023

Published: Aug. 1, 2023

Language: English

Citations: 17

Deep Reinforcement Learning-based computation offloading and distributed edge service caching for Mobile Edge Computing

Mande Xie, J.D. Ye, Guoping Zhang, et al.

Computer Networks, Journal Year: 2024, Volume and Issue: 250, P. 110564 - 110564

Published: June 5, 2024

Language: English

Citations: 5

Optimizing Task Offloading in MIMO-Enabled Vehicular Networks through Deep Reinforcement Learning

Jian Xu, Shengchao Su

Vehicular Communications, Journal Year: 2025, Volume and Issue: unknown, P. 100901 - 100901

Published: Feb. 1, 2025

Language: English

Citations: 0

MEC Computation Offloading Decision Based on ARN-PPO-D
Xuming Mao, Xinyu Zhang, Xiaoyu Wang, et al.

Smart innovation, systems and technologies, Journal Year: 2025, Volume and Issue: unknown, P. 49 - 62

Published: Jan. 1, 2025

Language: English

Citations: 0

Deep Reinforcement Learning and SQP-driven task offloading decisions in vehicular edge computing networks
Ehzaz Mustafa, Junaid Shuja, Faisal Rehman, et al.

Computer Networks, Journal Year: 2025, Volume and Issue: unknown, P. 111180 - 111180

Published: March 1, 2025

Language: English

Citations: 0

Differential Evolution Deep Reinforcement Learning Algorithm for Dynamic Multiship Collision Avoidance with COLREGs Compliance
Yang Shen, Zuowen Liao, Dan Chen, et al.

Journal of Marine Science and Engineering, Journal Year: 2025, Volume and Issue: 13(3), P. 596 - 596

Published: March 17, 2025

In ship navigation, determining a safe and economical path from start to destination in a dynamic and complex environment is essential, but the traditional algorithms in current research are inefficient. Therefore, a novel differential evolution deep reinforcement learning algorithm (DEDRL) is proposed to address these problems, which is composed of local path planning and global path planning. The Deep Q-Network is utilized to search for the best path in target-ship and multiple-obstacle scenarios. Furthermore, a course-punishing reward mechanism is introduced to optimize and constrain the detected path length to be as short as possible. The quaternion domain and COLREGs are involved to construct the collision risk detection model. Compared with other algorithms, the experimental results demonstrate that DEDRL achieved a path length of 28.4539 n miles and also performed well in all collision avoidance scenarios. Overall, DEDRL is reliable and robust for ship navigation, and it provides an efficient solution for collision avoidance.
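For context, the differential-evolution half of such a hybrid can be summarized by the classic DE/rand/1/bin loop. The Python sketch below is a generic optimizer of this kind; the function and parameter names are illustrative assumptions, and the paper's exact coupling with the Deep Q-Network may differ.

import numpy as np

# Generic DE/rand/1/bin optimizer (assumed variant): mutation, binomial
# crossover, and greedy selection over a real-valued search space.
def differential_evolution(cost_fn, bounds, pop_size=30, F=0.5, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    costs = np.array([cost_fn(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with the current individual.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it improves the cost.
            c_trial = cost_fn(trial)
            if c_trial < costs[i]:
                pop[i], costs[i] = trial, c_trial
    best = int(np.argmin(costs))
    return pop[best], costs[best]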

Language: English

Citations: 0

Towards Efficient Task Offloading with Dependency Guarantees in Vehicular Edge Networks through Distributed Deep Reinforcement Learning
Haoqiang Liu, Wenzhen Huang, Dong In Kim, et al.

IEEE Transactions on Vehicular Technology, Journal Year: 2024, Volume and Issue: 73(9), P. 13665 - 13681

Published: April 11, 2024

The proliferation of computation-intensive and delay-sensitive applications in the Internet of Vehicles (IoV) poses great challenges to resource-constrained vehicles. To tackle this issue, Mobile Edge Computing (MEC), which enables the offloading of on-vehicle tasks to edge servers, has emerged as a promising approach. MEC jointly augments network and computing capabilities and alleviates resource utilization pressure for the IoV, garnering substantial attention. Nevertheless, its efficacy depends heavily on the adopted offloading scheme, especially in the presence of complex subtask dependencies. Existing research has largely overlooked the crucial dependencies among subtasks, which significantly influence offloading decision making. This work attempts to schedule subtasks with guaranteed dependencies while minimizing system latency and energy costs in multi-vehicle scenarios. Firstly, we introduce a priority scheduling method on the basis of the Directed Acyclic Graph (DAG) topological structure to ensure the scheduling order in scenarios with subtask interdependencies. Secondly, in light of privacy concerns and limited information sharing, we propose an Optimized Distributed Computation Offloading (ODCO) scheme based on deep reinforcement learning (DRL), alleviating the conventional requirement of extensive vehicle-specific information sharing to achieve optimal performance. An adaptive $k$-step approach is further presented to enhance the robustness of the training process. Numerical experiments demonstrate the advantages of the proposed scheme regarding the reduction of latency and energy cost and, more importantly, the convergence rate in comparison with existing state-of-the-art schemes. For instance, ODCO achieved a utility of approximately 0.80 within 300 episodes, obtaining gains of about 0.05 compared with the distributed earliest-finish time (DEFO) algorithm, which required around 500 episodes.
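The DAG-based priority scheduling step can be illustrated with a topological sort (Kahn's algorithm) that always releases the highest-priority dependency-free subtask first. The Python sketch below is an assumed illustration of that idea, not the ODCO scheme itself; the task graph and priority rule are hypothetical.

from collections import defaultdict

# Hypothetical DAG-based subtask ordering: a topological sort that breaks ties
# among ready (dependency-free) subtasks by a user-supplied priority score.
def topological_priority(subtasks, edges, priority):
    """subtasks: list of ids; edges: (u, v) pairs meaning u must finish before v;
    priority: dict id -> score used to pick among ready subtasks."""
    indeg = {t: 0 for t in subtasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [t for t in subtasks if indeg[t] == 0]
    order = []
    while ready:
        # Schedule the highest-priority subtask among those with no pending dependencies.
        ready.sort(key=lambda t: -priority[t])
        t = ready.pop(0)
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order  # an offloading order that respects all subtask dependencies

# Example: subtask 'c' depends on 'a' and 'b'; 'b' has the higher priority.
print(topological_priority(['a', 'b', 'c'], [('a', 'c'), ('b', 'c')],
                           {'a': 1, 'b': 2, 'c': 3}))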

Language: English

Citations: 3