Improving energy efficiency and fault tolerance of mission-critical cloud task scheduling: A mixed-integer linear programming approach DOI

Mohammadreza Saberikia, Hamed Farbeh, Mahdi Fazeli

et al.

Sustainable Computing: Informatics and Systems, Journal Year: 2024, Volume and Issue: unknown, P. 101068 - 101068

Published: Nov. 1, 2024

Language: English

Optimizing Task Scheduling and Resource Utilization in Cloud Environment: A Novel Approach Combining Pattern Search With Artificial Rabbit Optimization DOI Creative Commons

Santosh Kumar Paul, Sunil Kumar Dhal, Santosh Kumar Majhi

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 67130 - 67148

Published: Jan. 1, 2024

The increasing demand for cloud services, with sudden resource requirements from Virtual Machines (VMs) of different types and sizes, may create an unbalanced state in datacenters, which in turn leads to low utilization and slows down server performance. This research article proposes an enhanced version of Artificial Rabbit Optimization (ARO) called Improved ARO based on Pattern Search (IARO-PS), where ARO is used to dynamically schedule independent requests (tasks) and overcome the challenges discussed above, and a Pattern Search (PS) method is hybridized with it to address its shortcomings and provide a better exploration-exploitation balance. The initial step of the proposed approach is to employ a load balancing strategy that divides workloads (user requests) across the available VMs; the next step uses IARO-PS to map tasks onto optimal VMs and carry out the scheduling process over diverse resources. A standard benchmark function suite (CEC2017) is used to assess the technique's efficacy, and a comprehensive evaluation is carried out in CloudSim using a real-world dataset containing task specifications. Additionally, a simulation-based comparison is made against various metaheuristic-based workload scheduling methods, namely the Genetic Algorithm (GA), Bird Swarm Optimization (BSO), Q-learning-based Modified Particle Swarm Optimization (QMPSO), and the Multi-Objective Grey Wolf Optimizer (MGWO). Based on the simulations, the proposed algorithm performed better than the aforementioned algorithms, reducing the makespan by 10.45%, 2.31%, 4.35% (MGWO), and 15.35% in homogeneous and by 4.17%, 1.03%, 1.44%, and 7.33% in heterogeneous surroundings, respectively, and improving resource utilization by 36.74%, 14.31%, 19.75%, and 45.23% (BSO) in homogeneous and by 12.17%, 6.02%, 9.10%, and 19.39% (BSO) in heterogeneous surroundings. Furthermore, statistical analysis through Friedman's test and Holm's procedure also showcases the decrease in makespan and increase in VM utilization as outcomes of the simulated experimental study.

Language: English

Citations

5
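
The two-phase scheme outlined in this abstract, first spreading the workload across VMs and then letting a hybrid metaheuristic refine the task-to-VM mapping, can be sketched as below. This is a minimal illustration of the general hybrid pattern (population-based exploration plus a pattern-search-style local poll), not the paper's IARO-PS: the ARO update rules are heavily simplified and every name and parameter is ours.

```python
import random

def makespan(assign, task_len, vm_mips):
    """Completion time of the most-loaded VM under a task->VM assignment."""
    load = [0.0] * len(vm_mips)
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_mips[v]
    return max(load)

def hybrid_schedule(task_len, vm_mips, pop_size=20, iters=200, seed=42):
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_mips)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=lambda a: makespan(a, task_len, vm_mips))[:]
    for _ in range(iters):
        # Exploration (ARO-like, heavily simplified): each candidate mixes
        # genes from the incumbent best with random reassignments.
        for a in pop:
            for t in range(n):
                r = rng.random()
                if r < 0.15:
                    a[t] = best[t]
                elif r < 0.30:
                    a[t] = rng.randrange(m)
            if makespan(a, task_len, vm_mips) < makespan(best, task_len, vm_mips):
                best = a[:]
        # Exploitation (pattern-search-style poll): try every VM for one
        # randomly chosen task and keep the move only if it improves.
        t = rng.randrange(n)
        for v in range(m):
            cand = best[:]
            cand[t] = v
            if makespan(cand, task_len, vm_mips) < makespan(best, task_len, vm_mips):
                best = cand
    return best

rng = random.Random(0)
tasks = [rng.randint(100, 1000) for _ in range(50)]  # task lengths (MI)
vms = [250, 500, 750, 1000]                          # VM speeds (MIPS)
plan = hybrid_schedule(tasks, vms)
print("makespan:", round(makespan(plan, tasks, vms), 2))
```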

Sustainable Cost-Energy Aware Load Balancing in Cloud Environment Using Intelligent Optimization DOI
Garima Verma

Sustainable Computing: Informatics and Systems, Journal Year: 2025, Volume and Issue: unknown, P. 101115 - 101115

Published: March 1, 2025

Language: English

Citations

0

Next-Gen Cloud Efficiency: Fault-Tolerant Task Scheduling With Neighboring Reservations for Improved Resource Utilization DOI Creative Commons
Sheikh Umar Mushtaq, Sophiya Sheikh, Sheikh Mohammad Idrees

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 75920 - 75940

Published: Jan. 1, 2024

One of the main goals in any computational system like the cloud is to allocate resources proficiently for task scheduling. However, the cloud's dynamic characteristics make it more prone to faults and failures. Flexible, responsive changes are made to redistribute virtual machines (VMs) to address these failures while maintaining continuous services, but such changes may inadvertently lead to uneven load distribution. Therefore, thorough attention is required to ensure a carefully monitored load equilibrium following fault tolerance. Addressing all of these issues simultaneously with optimized Quality of Service (QoS) parameters is a pressing need of the time. In this paper, a novel hybrid model, the Hybrid Fault-tolerant Scheduling and Load balancing Model (HFSLM), is proposed to optimize the makespan of dynamically arriving tasks and to efficiently utilize the available VMs. The model also provides solutions to several crucial concerns in cloud systems, including VM failure and VM/task heterogeneity. In consequence, the approach offers a Neighbouring reservation as a substitute for a failed VM to complete its execution. Furthermore, it is escorted by a load-balancing algorithm that maintains an even distribution after fault handling, with further optimization of the considered QoS parameters. HFSLM is evaluated by comparing it with FTHRM, MAX-MIN, MIN-MIN, and OLB on a small scale over diverse machine heterogeneities, and with ELISA and MELISA on an extremely large scale. The evaluation results show that the recommended model tops the compared approaches in all cases of heterogeneities.

Language: English

Citations

3
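
The neighbouring-reservation idea at the core of HFSLM, a designated substitute VM that absorbs a failed VM's pending tasks before a load-balancing pass evens things out, can be sketched as follows. The reservation ring, queue representation, and greedy rebalancing below are our own simplifications, not the paper's model.

```python
def fail_over(queues, failed, neighbor_of):
    """Move a failed VM's pending tasks to its reserved neighbour."""
    substitute = neighbor_of[failed]
    queues[substitute].extend(queues[failed])
    queues[failed] = []
    return substitute

def rebalance(queues, alive):
    """Greedy load balancing: repeatedly shift a task from the most-loaded
    to the least-loaded live VM while it reduces the spread."""
    while True:
        hi = max(alive, key=lambda v: len(queues[v]))
        lo = min(alive, key=lambda v: len(queues[v]))
        if len(queues[hi]) - len(queues[lo]) <= 1:
            return
        queues[lo].append(queues[hi].pop())

# Three VMs, each with a ring of neighbouring reservations (illustrative).
queues = {0: ["t1", "t2"], 1: ["t3", "t4", "t5"], 2: ["t6"]}
neighbor_of = {0: 1, 1: 2, 2: 0}
fail_over(queues, failed=1, neighbor_of=neighbor_of)  # VM1 fails; VM2 substitutes
rebalance(queues, alive=[0, 2])
print(queues)  # VM1's tasks now spread across the surviving VMs
```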

DE-RALBA: dynamic enhanced resource aware load balancing algorithm for cloud computing DOI Creative Commons
Altaf Hussain, Muhammad Aleem, Atiq Ur Rehman

et al.

PeerJ Computer Science, Journal Year: 2025, Volume and Issue: 11, P. e2739 - e2739

Published: March 18, 2025

Cloud computing provides an opportunity to access large-scale and high-speed resources without establishing one's own infrastructure for executing high-performance computing (HPC) applications. It offers resources (i.e., computation power, storage, operating systems, networks, databases, etc.) as public utility services to end users on a pay-as-you-go model. Over the past several years, efficient utilization of cloud compute resources has become a prime interest of the scientific community. One of the key reasons behind inefficient resource utilization is the imbalanced distribution of workload while executing HPC applications in a heterogeneous environment. Static scheduling techniques usually produce lower utilization and higher makespan, whereas dynamic scheduling achieves better load balancing by incorporating a resource pool. Dynamic techniques, however, lead to increased overhead by requiring continuous system monitoring, job requirement assessments, and real-time allocation decisions; this additional load can potentially impact the performance and responsiveness of the system. In this article, a dynamic enhanced resource-aware load balancing algorithm (DE-RALBA) is proposed to mitigate load imbalance by considering the computing capabilities of all VMs. Empirical evaluations are performed in the CloudSim simulator using instances of two benchmark datasets: heterogeneous computing scheduling problems (HCSP) and the Google Cloud Jobs (GoCJ) dataset. The obtained results reveal that DE-RALBA mitigates load imbalance with significant improvement in makespan against existing algorithms, namely PSSLB, PSSELB, Dynamic MaxMin, and DRALBA. Using HCSP instances, up to 52.35% improved makespan is achieved compared to the existing technique, and an even more superior improvement is achieved on the GoCJ dataset.

Language: English

Citations

0
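
The resource-aware principle behind the RALBA family, giving each VM a workload share proportional to its share of the total compute power rather than a count-based split, can be illustrated as below. This is a simplified static sketch under our own assumptions; DE-RALBA itself schedules dynamically from a shared task pool.

```python
def proportional_shares(task_lens, vm_mips):
    """Assign each VM a workload share proportional to its compute power,
    then fill shares largest-task-first (a simplified RALBA-style split)."""
    total_power = sum(vm_mips)
    total_work = sum(task_lens)
    share = [total_work * mips / total_power for mips in vm_mips]
    assigned = {v: [] for v in range(len(vm_mips))}
    used = [0.0] * len(vm_mips)
    for t in sorted(range(len(task_lens)), key=lambda t: -task_lens[t]):
        # Pick the VM with the most unused share remaining.
        v = max(range(len(vm_mips)), key=lambda v: share[v] - used[v])
        assigned[v].append(t)
        used[v] += task_lens[t]
    return assigned

tasks = [900, 700, 650, 400, 300, 250, 120, 80]  # task lengths (MI)
vms = [1000, 500, 250]                           # VM speeds (MIPS)
for v, ts in proportional_shares(tasks, vms).items():
    print(f"VM{v} ({vms[v]} MIPS): tasks {ts}")
```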

Secure data transmission in cloud computing using a cyber-security trust model with multi-risk protection scheme in smart IOT application DOI

Torana Kamble, Madhuri Ghuge, Ritu Jain

et al.

Cluster Computing, Journal Year: 2024, Volume and Issue: 28(2)

Published: Nov. 26, 2024

Language: English

Citations

1

An enhanced round robin using dynamic time quantum for real-time asymmetric burst length processes in cloud computing environment DOI Creative Commons
Most. Fatematuz Zohora, Fahiba Farhin, M. Shamim Kaiser

et al.

PLoS ONE, Journal Year: 2024, Volume and Issue: 19(8), P. e0304517 - e0304517

Published: Aug. 15, 2024

Cloud computing is a popular, flexible, scalable, and cost-effective technology in the modern world that provides on-demand services dynamically. The dynamic execution of user requests and resource-sharing facilities requires proper task scheduling among the available virtual machines, which is a significant issue and plays a crucial role in developing an optimal cloud environment. Round Robin is a prevalent scheduling algorithm for the fair distribution of resources, with balanced contributions toward minimized response and turnaround times. This paper introduces a new enhanced round-robin approach for cloud systems. The proposed approach generates and keeps updating the time quantum during process execution, considering the number of processes in the system and their burst lengths. Since our method runs processes dynamically, it is appropriate for a real-time environment like cloud computing. A notable part of this approach is its capability to handle tasks with asymmetric burst times while avoiding the convoy effect. The experimental results indicate that the approach outperforms existing and improved round-robin variants in terms of average waiting time, turnaround time, and context switches: compared against five other round-robin approaches, it reduced waiting and turnaround times by 15.77% and context switching by 20.68% on average. Following the experiments and comparative study, it can be concluded that the proposed approach is optimal, acceptable, and relatively better suited for real-time environments.

Language: English

Citations

1
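
The mechanism this abstract describes, a time quantum recomputed during execution from the number of processes and their burst lengths, is easy to sketch. The paper's exact quantum formula is not given in the abstract, so the stand-in below uses the mean of the remaining bursts each round; all names are illustrative.

```python
from collections import deque

def dynamic_rr(bursts):
    """Round robin whose quantum is recomputed each round as the mean of
    the remaining burst lengths (illustrative stand-in formula)."""
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    clock, switches = 0.0, 0
    waiting = {p: 0.0 for p in remaining}
    while queue:
        quantum = sum(remaining[p] for p in queue) / len(queue)
        for _ in range(len(queue)):          # one pass over this round's members
            p = queue.popleft()
            run = min(quantum, remaining[p])
            for q in queue:                  # everyone else waits while p runs
                waiting[q] += run
            clock += run
            remaining[p] -= run
            switches += 1
            if remaining[p] > 1e-9:
                queue.append(p)
    return clock, waiting, switches

total, waiting, switches = dynamic_rr([24, 3, 3, 17, 6])  # asymmetric bursts
print("avg waiting:", round(sum(waiting.values()) / len(waiting), 2))
print("context switches:", switches)
```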

Energy Efficient Real-Time Tasks Scheduling on High-Performance Edge-Computing Systems Using Genetic Algorithm DOI Creative Commons
Hameed Hussain, Muhammad Zakarya, Ahmad Ali

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 54879 - 54892

Published: Jan. 1, 2024

With an increase in the number of processing cores or systems, a high-performance edge-computing system's power consumption, along with its computational speed, will essentially increase. However, this comes at the expense of high energy utilization. One notable solution to reduce the energy consumption of these systems is to execute jobs at the slowest feasible speed such that each job's deadline is still met. Unfortunately, this method incurs longer response times and performance loss. To resolve this issue, in this paper we propose a scheduling approach that associates a genetic algorithm (GA) with the FiFeS technique, i.e., the GA-FiFeS algorithm. This algorithm does not jeopardize real-time tasks' deadlines: it produces an energy-efficient schedule while still maintaining performance. The results of the proposed approach, using plausible assumptions for the experimental parameters, are compared with the currently in-practice FiFeS and LeFeS (least speed) approaches. Using numerical simulations under these assumptions, our investigation suggests that the proposed approach outperforms both in terms of response time (~18.56%) and energy consumption (~2.78%). Furthermore, it has comparable outcomes when taking the expected execution time as the assessment feature for analysis.

Language: English

Citations

0
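
The energy argument referenced above, run each job at the slowest speed that still meets its deadline because dynamic power grows superlinearly with speed, can be shown with a small calculation. The cubic power model and all numbers below are conventional textbook assumptions, not figures from the paper.

```python
def slowest_feasible_speed(work, deadline, speeds):
    """Pick the slowest available speed that still finishes `work` by `deadline`."""
    feasible = [s for s in sorted(speeds) if work / s <= deadline]
    return feasible[0] if feasible else None

def energy(work, speed, k=1.0):
    """Dynamic energy under a standard cubic power model: P = k * s^3,
    so E = P * t = k * s^3 * (work / s) = k * work * s^2."""
    return k * work * speed ** 2

speeds = [0.5, 0.75, 1.0]   # normalized frequency levels (illustrative)
work, deadline = 6.0, 10.0  # 6 units of work due in 10 time units
s = slowest_feasible_speed(work, deadline, speeds)
print(f"chosen speed {s}: finishes at t={work / s:.1f}, "
      f"energy {energy(work, s):.2f} vs {energy(work, 1.0):.2f} at full speed")
```

With these numbers the 0.75 level is chosen: the job finishes at t=8.0 instead of t=6.0, but dynamic energy drops from 6.00 to 3.38, which is the response-time-for-energy trade the abstract describes.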

Optimizing Cloud Performance: A Microservice Scheduling Strategy for Enhanced Fault-Tolerance, Reduced Network Traffic, and Lower Latency DOI Creative Commons
Abdullah Alelyani, Amitava Datta, Ghulam Mubashar Hassan

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 35135 - 35153

Published: Jan. 1, 2024

The emergence of microservice architecture has brought significant advancements in software development, offering improved scalability and availability of applications. Cloud computing benefits from microservices by mitigating the risks of single points of failure and ensuring compliance with service-level agreements. However, using microservices presents two challenges: 1) managing network traffic, which leads to latency and congestion; and 2) inefficient resource allocation for microservices. Current approaches have limitations in addressing these challenges. To overcome them, we propose a novel scheduling strategy that schedules microservice replicas using a modified particle swarm optimization algorithm to place them on the most suitable physical machines. Additionally, we balance the load across the machines of the cluster using a simple round-robin algorithm. Furthermore, our strategy integrates with Kubernetes to tackle deployment. The proposed strategy has been evaluated by simulating scenarios from the Alibaba and Google datasets. The experimental results demonstrate its effectiveness in reducing network traffic and latency, balancing load, and utilizing CPU and memory efficiently.

Language: English

Citations

0
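
A compressed illustration of the placement idea described above: a discrete particle-swarm search over replica-to-machine vectors whose fitness penalizes cross-machine traffic between communicating microservices, with a capacity penalty standing in for resource limits. The move rules are simplified to probabilistic pulls toward personal and global bests, and the traffic matrix is invented for the demo.

```python
import random

def fitness(place, traffic, cap):
    """Cross-machine traffic plus a heavy penalty for exceeding machine capacity."""
    cut = sum(w for (a, b), w in traffic.items() if place[a] != place[b])
    counts = {}
    for m in place:
        counts[m] = counts.get(m, 0) + 1
    over = sum(max(0, c - cap) for c in counts.values())
    return cut + 1000 * over

def pso_place(n_services, n_machines, traffic, cap, particles=15, iters=150, seed=7):
    rng = random.Random(seed)
    f = lambda p: fitness(p, traffic, cap)
    swarm = [[rng.randrange(n_machines) for _ in range(n_services)]
             for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=f)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for s in range(n_services):
                r = rng.random()
                if r < 0.4:                       # pull toward global best
                    p[s] = gbest[s]
                elif r < 0.7:                     # pull toward personal best
                    p[s] = pbest[i][s]
                elif r < 0.8:                     # random jitter
                    p[s] = rng.randrange(n_machines)
            if f(p) < f(pbest[i]):
                pbest[i] = p[:]
            if f(p) < f(gbest):
                gbest = p[:]
    return gbest

# Four microservices, two machines holding at most two services each;
# (a, b) -> message volume between services (all values illustrative).
traffic = {(0, 1): 10, (1, 2): 8, (2, 3): 1, (0, 3): 2}
print(pso_place(4, 2, traffic, cap=2))  # heavy talkers 0 and 1 end up co-located
```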

Efficient Hybrid DDPG task scheduler for HPC and HTC in cloud environment DOI Creative Commons

Sudheer Mangalampalli, Ganesh Reddy Karri, Sachi Nandan Mohanty

et al.

IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 108897 - 108920

Published: Jan. 1, 2024

Task scheduling is a crucial challenge in cloud computing, as diversified tasks with different lengths and processing-capacity requirements arrive rapidly and dynamically at the cloud console from heterogeneous resources. Generating schedules for these types of tasks is a challenge for the Cloud Service Provider (CSP). Therefore, to generate schedules effectively by considering each arriving task and matching it with a suitable Virtual Machine (VM), a scheduler is formulated using the Deep Deterministic Policy Gradient (DDPG) algorithm as the design methodology. This scheduler works in three stages. In the initial stage, tasks are classified based on length and processing capacity to identify whether they are High Performance Computing (HPC) or High Throughput Computing (HTC) tasks. After classification, in the second stage each task is tracked and matched with a VM corresponding to its nature. Finally, in the third stage, tasks are mapped onto VMs according to VM priorities calculated from electricity unit costs. Simulations are conducted in CloudSim with fabricated workload distributions and real-time worklogs. Our proposed Hybrid DDPG task scheduler (HDDPGTS) is evaluated against the DQN and A2C algorithms. The results prove that HDDPGTS significantly improves makespan, energy consumption, scheduling overhead, and scalability over the baseline approaches.

Language: English

Citations

0
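
The first stage described in this abstract, classifying arriving tasks as HPC (long, compute-bound) or HTC (short, throughput-oriented) before matching them to electricity-cost-prioritized VMs, is illustrated below. The thresholds, cost figures, and greedy matching are our own placeholders; in the paper the mapping is learned by a DDPG agent rather than hard-coded.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    length_mi: int       # task length in million instructions

@dataclass
class VM:
    name: str
    mips: int
    unit_cost: float     # electricity cost per unit time (illustrative)

def classify(task, hpc_threshold_mi=50_000):
    """Stage 1: tag a task HPC or HTC by length (threshold is a placeholder)."""
    return "HPC" if task.length_mi >= hpc_threshold_mi else "HTC"

def schedule(tasks, vms):
    """Stages 2-3, greatly simplified: HPC tasks go to the fastest VMs,
    HTC tasks to the cheapest VMs, cycling in priority order."""
    by_cost = sorted(vms, key=lambda v: v.unit_cost)
    by_speed = sorted(vms, key=lambda v: (-v.mips, v.unit_cost))
    plan = {}
    for i, t in enumerate(tasks):
        pool = by_speed if classify(t) == "HPC" else by_cost
        plan[t.name] = pool[i % len(pool)].name
    return plan

tasks = [Task("render", 120_000), Task("ping", 200), Task("train", 90_000)]
vms = [VM("vm-a", 2000, 0.30), VM("vm-b", 1000, 0.10), VM("vm-c", 500, 0.05)]
for name, vm in schedule(tasks, vms).items():
    print(name, "->", vm)
```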

Designing an optimal task scheduling and VM placement in the cloud environment with multi-objective constraints using Hybrid Lemurs and Gannet Optimization Algorithm DOI
Kapil Vhatkar, Atul B. Kathole, Savita Lonare

et al.

Network: Computation in Neural Systems, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 31

Published: Oct. 9, 2024

An efficient resource utilization method can greatly reduce expenses and the waste of resources. Typical cloud planning approaches lack support for the emerging paradigm of asset management and speed optimization. The use of cloud computing relies heavily on task allocation, and the scheduling issue is all the more crucial when arranging and allotting the application jobs supplied by customers to Virtual Machines (VMs) in a specific manner. Task allocation needs to be specifically stated to increase scheduling efficiency. In this work, a cloud environment model is developed using optimization techniques, intended to optimize both task scheduling and VM placement over the cloud environment. In this model, a new hybrid meta-heuristic algorithm named the Hybrid Lemurs-based Gannet Optimization Algorithm (HL-GOA) is proposed. A multi-objective function is considered, with constraints like cost, time, resource utilization, makespan, and throughput. The proposed model is further validated and compared against existing methodologies. The total time required is reduced by 30.23%, 6.25%, 11.76%, and 10.44% relative to ESO, RSO, LO, and GOA, respectively, with 2 VMs. The simulation outcomes reveal that the model effectively resolves the task scheduling and VM placement issues.

Language: English

Citations

0
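
The multi-objective function mentioned above, folding cost, time, resource utilization, makespan, and throughput into one objective, is commonly expressed as a weighted sum in such hybrid metaheuristics. The sketch below follows that convention; the weights and normalizations are ours, and the paper's exact formulation may differ.

```python
def fitness(assign, task_len, vm_mips, vm_cost, weights=(0.3, 0.3, 0.2, 0.2)):
    """Weighted multi-objective fitness (to minimize) combining makespan,
    cost, idle capacity (1 - utilization), and inverse throughput."""
    w_mk, w_cost, w_util, w_thr = weights
    finish = [0.0] * len(vm_mips)
    total_cost = 0.0
    for t, v in enumerate(assign):
        exec_time = task_len[t] / vm_mips[v]
        finish[v] += exec_time
        total_cost += exec_time * vm_cost[v]
    mk = max(finish)
    utilization = sum(finish) / (len(vm_mips) * mk)  # busy time / available time
    throughput = len(assign) / mk                    # tasks per unit time
    worst_time = sum(task_len) / min(vm_mips)        # bound: all work on slowest VM
    worst_cost = worst_time * max(vm_cost)
    return (w_mk * mk / worst_time
            + w_cost * total_cost / worst_cost
            + w_util * (1.0 - utilization)
            + w_thr / (1.0 + throughput))

tasks = [400, 900, 250, 700]  # task lengths (MI), illustrative
mips = [500, 1000]            # VM speeds (MIPS)
cost = [0.02, 0.05]           # price per unit time per VM
# Compare two candidate assignments; lower fitness is better.
for plan in ([0, 1, 0, 1], [1, 1, 1, 1]):
    print(plan, "->", round(fitness(plan, tasks, mips, cost), 4))
```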