Dependent Task Offloading and Resource Allocation via Deep Reinforcement Learning for Extended Reality in Mobile Edge Networks
Xiaofan Yu, Siyuan Zhou, Baoxiang Wei et al.

Electronics, Journal Year: 2024, Volume and Issue: 13(13), P. 2528 - 2528

Published: June 27, 2024

Extended reality (XR) is an immersive technology widely applied in various fields. Due to the real-time interaction required between users and virtual environments, XR applications are highly sensitive to latency. Furthermore, handling computationally intensive tasks on wireless devices leads to high energy consumption, which is a critical performance constraint for XR applications. It has been noted that an XR task can be decoupled into several subtasks with mixed serial–parallel relationships. The evaluation of an XR application involves both subjective assessments from users and objective evaluations, such as energy consumption. Therefore, designing edge computing approaches that integrate task offloading and meet users' demands is a complex and challenging issue. To address this issue, this paper constructs a system based on mobile edge computing (MEC) and conducts research on the joint optimization of multi-user communication channel access and task offloading. Specifically, we consider the migration of partitioned subtasks to MEC servers and formulate an optimization problem to maximize the ratio of quality of experience (QoE) to energy consumption while meeting user QoE requirements. Subsequently, we introduce a deep reinforcement learning-based algorithm to solve the problem. The simulation results demonstrate the effectiveness of the proposed algorithm in improving the QoE-to-energy conversion efficiency, regardless of the task partitioning strategies employed.

Language: English

Citations

1
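The entry above frames offloading as a joint choice of channel access and subtask placement that maximizes the ratio of QoE to energy consumption under per-user QoE constraints. The Python sketch below shows one plausible shape of the reward such a DRL agent could be trained on; the QoE proxy, rate and energy constants, serial execution assumption, and penalty value are illustrative assumptions, not details from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Subtask:
    cycles: float        # CPU cycles required
    input_bits: float    # data to upload if offloaded

# Illustrative constants -- not taken from the paper.
LOCAL_CPS = 1e9          # local CPU cycles per second
EDGE_CPS = 10e9          # MEC server cycles per second
LOCAL_POWER = 0.9        # watts while computing locally
TX_POWER = 0.2           # watts while uploading
BANDWIDTH = 10e6         # Hz for the chosen channel

def uplink_rate(channel_gain: float, noise: float = 1e-9) -> float:
    """Shannon-style rate for the selected channel (assumed model)."""
    return BANDWIDTH * math.log2(1.0 + TX_POWER * channel_gain / noise)

def step_cost(subtasks, offload, channel_gain):
    """Latency and energy for one user's decision vector.

    offload[i] == 1 means subtask i runs on the MEC server.
    Serial execution is assumed here for simplicity.
    """
    latency = energy = 0.0
    for task, off in zip(subtasks, offload):
        if off:
            t_up = task.input_bits / uplink_rate(channel_gain)
            latency += t_up + task.cycles / EDGE_CPS
            energy += TX_POWER * t_up
        else:
            t_loc = task.cycles / LOCAL_CPS
            latency += t_loc
            energy += LOCAL_POWER * t_loc
    return latency, energy

def reward(subtasks, offload, channel_gain, qoe_min=0.5):
    """QoE-to-energy ratio, penalized when QoE misses the requirement."""
    latency, energy = step_cost(subtasks, offload, channel_gain)
    qoe = 1.0 / (1.0 + latency)          # toy latency-based QoE proxy
    if qoe < qoe_min:
        return -1.0                       # infeasible decision, penalize
    return qoe / max(energy, 1e-9)

if __name__ == "__main__":
    tasks = [Subtask(cycles=4e8, input_bits=2e6), Subtask(cycles=8e8, input_bits=1e6)]
    print(reward(tasks, offload=[1, 0], channel_gain=1e-6))
```

In the paper's setting the subtasks form a mixed serial–parallel graph, so the latency term would come from a DAG schedule rather than the simple sum used here.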

Digital Twin-empowered intelligent computation offloading for edge computing in the era of 5G and beyond: A state-of-the-art survey
Hoa Tran‐Dang, Dong‐Seong Kim

ICT Express, Journal Year: 2025, Volume and Issue: 11(1), P. 167 - 180

Published: Jan. 12, 2025

Language: English

Citations

1

Performance enhancement of artificial intelligence: A survey
Moez Krichen, Mohamed S. Abdalzaher

Journal of Network and Computer Applications, Journal Year: 2024, Volume and Issue: unknown, P. 104034 - 104034

Published: Sept. 1, 2024

Language: English

Citations

5

Computation offloading in vehicular communications using PPO-based deep reinforcement learning
Ehzaz Mustafa, Junaid Shuja, Faisal Rehman et al.

The Journal of Supercomputing, Journal Year: 2025, Volume and Issue: 81(4)

Published: Feb. 26, 2025

Language: English

Citations

0

Deep Reinforcement Learning and SQP-driven task offloading decisions in vehicular edge computing networks
Ehzaz Mustafa, Junaid Shuja, Faisal Rehman et al.

Computer Networks, Journal Year: 2025, Volume and Issue: unknown, P. 111180 - 111180

Published: March 1, 2025

Language: English

Citations

0

Study on key technologies for air–water surface collaboration of observation unmanned aircraft vehicle
Dongying Feng, Jingfeng Yang, Nanfeng Zhang et al.

Electronics Letters, Journal Year: 2025, Volume and Issue: 61(1)

Published: Jan. 1, 2025

Abstract To address the issues of short flight duration and the inability to carry high-computation resources in small observation unmanned aerial vehicles (UAVs) due to limited energy and payload capacities, this paper proposes a deployment framework for an air–water surface collaborative system based on energy replenishment and computation offloading. In this framework, the UAVs serve as observation platforms and tools, while unmanned surface vehicles (USVs) function as energy replenishment and edge computing nodes. The edge computing nodes are capable of processing, analyzing, and distributing the data received from the UAVs. The UAVs can perform coordinated landing and recharging on the USVs using high-precision BeiDou positioning. Experimental results indicate that applying the framework allows the UAVs to avoid the burden of carrying heavy computational loads during flight and enables cyclic operation with the USV platform. The findings of this study have broad applicability in various scenarios, including environmental monitoring, disaster patrol, marine mapping, and aquaculture.

Language: English

Citations

0
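The entry above describes UAVs that offload observation data to USV edge nodes and land on USVs to recharge, guided by high-precision BeiDou positioning. The short Python sketch below illustrates one possible form of that decision loop; the battery threshold, offload criterion, and nearest-node selection are illustrative assumptions rather than details reported in the letter.

```python
import math
from dataclasses import dataclass

@dataclass
class USVNode:
    name: str
    x: float              # position in metres (stand-in for BeiDou coordinates)
    y: float
    free_capacity: float   # spare edge compute, arbitrary units

# Illustrative thresholds -- not from the letter.
RETURN_BATTERY = 0.25      # land and recharge below 25% charge
OFFLOAD_THRESHOLD = 1.0    # offload frames whose compute demand exceeds this

def nearest_usv(uav_xy, usvs):
    """Pick the closest USV that still has spare edge capacity."""
    ux, uy = uav_xy
    candidates = [n for n in usvs if n.free_capacity > 0]
    return min(candidates, key=lambda n: math.hypot(n.x - ux, n.y - uy), default=None)

def decide(uav_xy, battery, frame_compute_demand, usvs):
    """Return the UAV's next action for the current observation frame."""
    target = nearest_usv(uav_xy, usvs)
    if battery < RETURN_BATTERY and target is not None:
        return ("land_and_recharge", target.name)
    if frame_compute_demand > OFFLOAD_THRESHOLD and target is not None:
        return ("offload_to_usv", target.name)
    return ("process_onboard", None)

if __name__ == "__main__":
    fleet = [USVNode("usv-1", 120.0, 40.0, 3.0), USVNode("usv-2", -300.0, 10.0, 0.0)]
    print(decide((100.0, 35.0), battery=0.6, frame_compute_demand=2.5, usvs=fleet))
    print(decide((100.0, 35.0), battery=0.2, frame_compute_demand=0.5, usvs=fleet))
```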

Edge assisted energy optimization for mobile AR applications for enhanced battery life and performance
Dinesh Kumar Sahu, Nidhi Nidhi, Shiv Prakash et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: March 23, 2025

Abstract Mobile Augmented Reality (AR) applications place high demands on resource-limited portable devices, consuming considerable power while also suffering from latency. To overcome these challenges, an AI-driven, edge-assisted computation offloading framework that provides optimal energy efficiency and user experience is proposed. The framework uses reinforcement learning (Deep Q-Networks) to learn task offloading policies based on network status, battery level, and the tasks' required processing time. As a novel feature, it also implements Adaptive Quality Scaling, learned from previous offloading strategies, which manages AR rendering quality in relation to the available energy and computing capability. This keeps real-time interaction possible and task handling efficient while maintaining low energy consumption. Several experiments were conducted on the proposed framework; the results show an average 30% energy saving compared with traditional heuristic-based offloading methods, with offloading success rates above 90% and latency kept below 80 ms. These results support that the method improves performance, enhances battery endurance, and preserves the real-time user experience. In addition, the reinforcement learning component dynamically deploys tasks and enhances resource allocation in a smart and timely manner. The research offers an approach towards making mobile AR beneficial by achieving energy efficiency and addressing the needs of dynamic edge computing.

Language: English

Citations

0
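The entry above describes a Deep Q-Network that learns offloading policies from network status, battery level, and task processing time, combined with adaptive scaling of AR rendering quality. The sketch below uses a tabular Q-learning agent as a dependency-free stand-in for that DQN; the state discretization, joint action set, and reward value are illustrative assumptions.

```python
import random
from collections import defaultdict

# Joint action: (offload decision, AR rendering quality level).
ACTIONS = [(offload, quality) for offload in (0, 1) for quality in ("low", "medium", "high")]

def discretize(bandwidth_mbps, battery, cpu_demand):
    """Bucket the raw observation into a coarse state key (assumed scheme)."""
    return (
        min(int(bandwidth_mbps // 10), 5),   # 10 Mbps buckets, capped
        min(int(battery * 4), 3),            # quartiles of battery charge
        min(int(cpu_demand), 4),             # coarse task-size buckets
    )

class OffloadAgent:
    """Tabular Q-learning stand-in for the DQN described in the entry above."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)    # explore
        return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

if __name__ == "__main__":
    agent = OffloadAgent()
    s = discretize(bandwidth_mbps=35.0, battery=0.8, cpu_demand=2.0)
    a = agent.act(s)
    # Hypothetical reward favouring low energy and low latency (value is made up).
    agent.update(s, a, reward=0.7, next_state=s)
    print(a, agent.q[(s, a)])
```

A real implementation would replace the table with a neural Q-function and derive the reward from measured energy, latency, and rendering quality, as the abstract suggests.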
