Reinforcement-Learning-Based Edge Offloading Orchestration in Computing Continuum

Iván Martínez Martín, Gabriel Ioan Arcas, Tudor Cioara et al.

Computers, Journal Year: 2024, Volume and Issue: 13(11), P. 295 - 295

Published: Nov. 14, 2024

The AI-driven applications and large volumes of data generated by IoT devices connected to large-scale utility infrastructures pose significant operational challenges, including increased latency, communication overhead, and computational imbalances. Addressing these is essential for shifting workloads from the cloud to the edge and across the entire computing continuum. However, to achieve this, several challenges must still be addressed, particularly in decision making to manage the trade-offs associated with workload offloading. In this paper, we propose a task-offloading solution using Reinforcement Learning (RL) to dynamically balance loads and reduce overloads. We have chosen the Deep Q-Learning algorithm and adapted it to our offloading problem. The reward system considers the node's state and task type to increase the utilization of resources while minimizing latency and bandwidth utilization. A knowledge graph model of the continuum infrastructure is used to address environment modeling and facilitate RL. The learning agent's performance was evaluated under different hyperparameter configurations with varying episode lengths and buffer sizes. Results show that for a better learning experience, a low, steady learning rate and a large buffer size are important. Additionally, the solution offers strong convergence features, with relevant task-to-node pairs identified after each episode. It also demonstrates good scalability, as the number of actions increases with the node count.
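The offloading loop the abstract describes (epsilon-greedy action selection over nodes, a reward balancing resource utilization against latency and overload) can be sketched as follows. This is a simplified tabular Q-learning stand-in for the paper's Deep Q-Network; the node count, task costs, capacity, latency model, and reward weights are all made-up illustrative values, not taken from the paper:

```python
import random

random.seed(0)

# Illustrative constants (not from the paper): 3 edge nodes, 2 task types.
NODES, TASK_TYPES, CAPACITY = 3, 2, 10.0

def reward(load_after, latency):
    """Reward high utilization, penalize overload and latency (toy model)."""
    if load_after > CAPACITY:
        return -10.0                          # hard overload penalty
    return load_after / CAPACITY - 0.1 * latency

Q = {}                                        # (task_type, load_buckets) -> per-node values
def q_values(state):
    return Q.setdefault(state, [0.0] * NODES)

alpha, gamma, eps = 0.1, 0.9, 0.2             # low, steady learning rate

for episode in range(200):
    loads = [0.0] * NODES
    for _ in range(20):                       # tasks arriving in one episode
        task = random.randrange(TASK_TYPES)
        state = (task, tuple(int(l) for l in loads))
        if random.random() < eps:             # epsilon-greedy exploration
            node = random.randrange(NODES)
        else:
            node = max(range(NODES), key=lambda n: q_values(state)[n])
        loads[node] += 1.0 + task             # task-type-dependent load
        r = reward(loads[node], loads[node] / CAPACITY)
        loads = [max(0.0, l - 0.5) for l in loads]   # nodes drain work over time
        next_state = (random.randrange(TASK_TYPES), tuple(int(l) for l in loads))
        target = r + gamma * max(q_values(next_state))
        q_values(state)[node] += alpha * (target - q_values(state)[node])
```

In the paper the Q-function is a neural network over a knowledge-graph-derived state rather than this lookup table, but the update rule and the utilization-versus-latency reward trade-off are of the same shape.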

Language: English

A federated learning model with the whale optimization algorithm for renewable energy prediction
Viorica Rozina Chifu, Tudor Cioara, Cristian Daniel Anitei et al.

Computers & Electrical Engineering, Journal Year: 2025, Volume and Issue: 123, P. 110259 - 110259

Published: March 18, 2025

Language: English

Citations: 2

Edge Computing in Healthcare: Innovations, Opportunities, and Challenges
Alexandru Rancea, Ionuț Anghel, Tudor Cioara et al.

Future Internet, Journal Year: 2024, Volume and Issue: 16(9), P. 329 - 329

Published: Sept. 10, 2024

Edge computing, promising a vision of processing data close to its generation point and reducing latency and bandwidth usage compared with traditional cloud architectures, has attracted significant attention lately. The integration of edge computing in modern systems takes advantage of Internet of Things (IoT) devices and can potentially improve the systems' performance, scalability, privacy, and security in applications across different domains. In the healthcare domain, IoT devices can nowadays be used to gather vital parameters and information that can be fed to Artificial Intelligence (AI) techniques able to offer precious insights and support healthcare professionals. However, issues regarding data privacy and security, AI optimization, and computational offloading at the edge pose challenges to the adoption of edge AI. This paper aims to explore the current state of the art by using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, analyzing more than 70 Web of Science articles. We have defined relevant research questions and clear inclusion and exclusion criteria, and classified the works into three main directions: privacy and security, AI-based optimization methods, and edge offloading techniques. The findings highlight the many advantages of integrating edge computing in a wide range of use cases requiring near real-time decision-making and efficient communication links, with the potential to transform future healthcare services and eHealth applications. However, further research is needed to enforce new security-preserving methods and to better orchestrate and coordinate the load in distributed and decentralized scenarios.

Language: English

Citations: 12

Mayfly algorithm with elementary functions and mathematical spirals for task scheduling in cloud computing system
Xianhang Sui, Siwen Zhang, Jie-Sheng Wang et al.

The Journal of Supercomputing, Journal Year: 2025, Volume and Issue: 81(6)

Published: April 21, 2025

Language: English

Citations: 0
