A Comparative Analysis of Generative Adversarial Networks for Generating Cloud Workloads

Niloofar Sharifisadr, Diwakar Krishnamurthy, Yasaman Amannejad, et al.

Published: July 7, 2024

Language: English

Deep neural networks in the cloud: Review, applications, challenges and research directions
Kit Yan Chan, Bilal Abu-Salih, Raneem Qaddoura, et al.

Neurocomputing, Journal Year: 2023, Volume and Issue: 545, P. 126327 - 126327

Published: May 15, 2023

Deep neural networks (DNNs) are currently being deployed as machine learning technology in a wide range of important real-world applications. DNNs consist of a huge number of parameters that require millions of floating-point operations (FLOPs) to be executed in both training and prediction modes. A more effective method is to implement a cloud computing system equipped with centralized servers and data storage sub-systems with high-speed and high-performance capabilities. This paper presents an up-to-date survey on the current state of the art of DNNs in cloud computing. Various DNN complexities associated with different architectures are presented and discussed alongside the necessities of using cloud computing. We also present an extensive overview of cloud platforms for DNN deployment and discuss them in detail. Moreover, DNN applications already deployed in cloud systems are reviewed to demonstrate the advantages of the cloud for DNNs. The survey emphasizes the challenges of deploying DNNs in the cloud and provides guidance for enhancing current and new deployments.

Language: English

Citations: 54

Advances and Predictions in Predictive Auto-Scaling and Maintenance Algorithms for Cloud Computing
Anantha Raman Rathinam, B. Santha Vathani, A. Komathi, et al.

Published: Dec. 11, 2023

The use of sophisticated algorithms has radically altered predictive auto-scaling and maintenance approaches in cloud computing. Recurrent Neural Networks (RNNs), the Prophet Algorithm, K-Means Clustering, and Seasonal Autoregressive Integrated Moving-Average (SARIMA) models all play a role in improving cloud infrastructures, and their interactions are studied here. By capitalizing on their superiority in processing sequential data, RNNs can deduce accurate workload forecasts from past patterns. Concurrently, the Prophet Algorithm records seasonal and annual patterns, which adds depth to the forecasts. By grouping servers into clusters with similar consumption profiles, K-Means Clustering improves resource allocation efficiency and paves the way for precise auto-scaling. SARIMA models capture nuanced seasonal fluctuations, leading to reliable demand forecasts. This work explores the state of the art and future directions of these techniques, illuminating their potential to revolutionize current approaches to resource management. When these methods are combined, service providers are better able to proactively scale resources, hence reducing the likelihood of bottlenecks and outages. It foresees the development and subsequent widespread adoption of these techniques in a variety of fields outside cloud computing, such as Internet of Things (IoT) networks and edge infrastructures.
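The clustering step the abstract describes can be sketched in a few lines. This is an illustrative example with synthetic data, not code from the paper: it groups servers by (CPU%, memory%) usage profiles so that scaling policies can be applied per cluster.

```python
# Illustrative sketch (not the paper's code): grouping servers into clusters
# with similar resource-consumption profiles, as a basis for per-cluster
# auto-scaling decisions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic (cpu%, mem%) usage profiles for 30 servers in three rough groups.
usage = np.vstack([
    rng.normal([20, 30], 5, (10, 2)),   # lightly loaded
    rng.normal([55, 50], 5, (10, 2)),   # moderately loaded
    rng.normal([85, 80], 5, (10, 2)),   # heavily loaded
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(usage)
print(kmeans.labels_.shape)  # one cluster label per server
```

In a real deployment the feature vectors would come from monitoring telemetry, and the number of clusters would be chosen with a criterion such as the silhouette score rather than fixed in advance.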

Language: English

Citations: 32

Cloud computing load prediction by decomposition reinforced attention long short-term memory network optimized by modified particle swarm optimization algorithm
Nebojša Bačanin, Vladimir Šimić, Miodrag Živković, et al.

Annals of Operations Research, Journal Year: 2023, Volume and Issue: unknown

Published: Dec. 15, 2023

Language: English

Citations: 22

An efficient proactive VM consolidation technique with improved LSTM network in a cloud environment
K Dinesh Kumar, E. Umamaheswari

Computing, Journal Year: 2023, Volume and Issue: 106(1), P. 1 - 28

Published: Aug. 20, 2023

Language: English

Citations: 12

EvoGWP: Predicting Long-Term Changes in Cloud Workloads Using Deep Graph-Evolution Learning
Jialun Li, Jieqian Yao, Danyang Xiao, et al.

IEEE Transactions on Parallel and Distributed Systems, Journal Year: 2024, Volume and Issue: 35(3), P. 499 - 516

Published: Jan. 23, 2024

Workload prediction plays a crucial role in resource management of large-scale cloud datacenters. Although quite a number of methods/algorithms have been proposed, long-term changes are not explicitly identified and considered. Due to shifting user demands, workload re-locations, or other reasons, the "resource usage pattern" of a workload, which is usually stable in a short-term view, may change dynamically over a long-term range. Such dynamic changes cause significant accuracy degradation for existing prediction algorithms, and handling them remains an open and challenging issue. In this paper, we propose Evolution Graph Workload Prediction (EvoGWP), a novel method that can predict long-term changes using a delicately designed graph-based evolution learning algorithm. EvoGWP automatically extracts shapelets to identify the usage patterns of workloads at a fine-grained level, and predicts pattern changes by considering factors in both temporal and spatial dimensions. We design a two-level importance-based shapelet extraction mechanism to mine new patterns in the temporal dimension, and a graph model to fuse interference among different workloads in the spatial dimension. By combining the pattern changes from each single workload, we then design a spatio-temporal GNN-based encoder-decoder to predict long-term changes in cloud workloads. Experiments on real trace data from Alibaba, Tencent, and Google show that EvoGWP improves prediction accuracy by up to 58.6% over state-of-the-art methods, and also outperforms those methods in terms of convergence. To the best of our knowledge, this is the first work that explicitly identifies and accurately predicts long-term changes in cloud workloads.
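The shapelet primitive this abstract builds on is simple to state: the distance between a short candidate pattern and a workload series is the minimum Euclidean distance over all equal-length windows. A minimal sketch of that primitive (not EvoGWP itself, whose extraction mechanism and GNN are far richer):

```python
# Illustrative sketch (not EvoGWP): the basic shapelet-distance primitive --
# the smallest Euclidean distance between a short candidate pattern and any
# equal-length window of a workload series.
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance from `shapelet` to any window of `series`."""
    m = shapelet.size
    # All contiguous windows of length m, as a (len(series)-m+1, m) view.
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return float(np.linalg.norm(windows - shapelet, axis=1).min())

series = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0, 2.0])
spike = np.array([1.0, 2.0, 3.0])
print(shapelet_distance(series, spike))  # 0.0: the pattern occurs exactly
```

A workload whose shapelet distances shift over time is exhibiting exactly the kind of long-term pattern change the paper targets.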

Language: English

Citations: 4

A Survey of Cloud Resource Consumption Optimization Methods
Piotr Nawrocki, Mateusz Smendowski

Journal of Grid Computing, Journal Year: 2025, Volume and Issue: 23(1)

Published: Jan. 8, 2025

Language: English

Citations: 0

Multilayer multivariate forecasting network for precise resource utilization prediction in edge data centers
Shivani Tripathi, Priyadarshni Priyadarshni, Rajiv Misra, et al.

Future Generation Computer Systems, Journal Year: 2025, Volume and Issue: 166, P. 107692 - 107692

Published: Jan. 10, 2025

Language: English

Citations: 0

Workload Prediction in Cloud Data Centers Using Complex‐Valued Spatio‐Temporal Graph Convolutional Neural Network Optimized With Gazelle Optimization Algorithm

R. Karthikeyan, A. Saleem Raja, V. Balamurugan, et al.

Transactions on Emerging Telecommunications Technologies, Journal Year: 2025, Volume and Issue: 36(3)

Published: March 1, 2025

ABSTRACT Workload prediction is a necessary factor in cloud data centers for maintaining elasticity and scalability of resources. However, the accuracy of workload prediction is very low, because of redundancy and noise in the cloud data center. In this manuscript, Workload Prediction in Cloud Data Centers using a Complex‐Valued Spatio‐Temporal Graph Convolutional Neural Network Optimized with the Gazelle Optimization Algorithm (CVSTGCN‐WLP‐CDC) is proposed. Initially, the input data are collected from two standard datasets, the NASA and Saskatchewan HTTP traces datasets. Then, in preprocessing, a Multi‐Window Savitzky–Golay Filter (MWSGF) is used to remove noise and redundant data. The preprocessed data are fed to the CVSTGCN to predict the workload in a dynamic cloud environment. In this work, the Gazelle Optimization Algorithm (GOA) is proposed to enhance the weight and bias parameters of the network. The CVSTGCN‐WLP‐CDC technique is executed, and its efficacy is evaluated with several performance metrics, such as accuracy, recall, precision, energy consumption, correlation coefficient, sum error index (SEI), root mean square error (RMSE), mean percentage error (MPE), and percentage error (PER). The proposed method provides 23.32%, 28.53%, and 24.65% higher accuracy and 22.34%, 25.62%, and 22.84% lower error when compared with the existing methods: an artificial-intelligence-augmented evolutionary approach espoused for workload prediction in cloud data centre architecture (TCNN‐CDC‐WLP), performance analysis of machine-learning-centered cloud workload prediction techniques (PA‐BPNN‐CWPC), and machine learning for effectual resource utilization in cloud data centers (ARNN‐EU‐CDC), respectively.
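The denoising step can be illustrated with SciPy's single-window Savitzky–Golay filter. This is an assumption-laden sketch, not the paper's MWSGF (which applies several window lengths and combines the results), and the request trace here is synthetic:

```python
# Illustrative sketch (not the paper's MWSGF): smoothing a noisy workload
# trace with a single-window Savitzky-Golay filter before feeding it to a
# predictor.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t)                                  # underlying demand signal
noisy_requests = clean + rng.normal(0, 0.3, t.size)

# window_length must be odd and greater than polyorder.
smoothed = savgol_filter(noisy_requests, window_length=21, polyorder=3)
print(smoothed.shape)
```

The filter fits a low-order polynomial in each sliding window, which suppresses high-frequency noise while preserving the local shape of the demand curve better than a plain moving average.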

Language: English

Citations: 0

Enhanced virtual machine migration for energy sustainability optimization in cloud computing through knowledge acquisition

Doraid Seddiki, Francisco Javier Maldonado Carrascosa, Sebastián García Galán, et al.

Computers & Electrical Engineering, Journal Year: 2024, Volume and Issue: 119, P. 109506 - 109506

Published: July 26, 2024

Cloud computing has revolutionized the way businesses and organizations manage their computational workloads. However, the massive data centers that support cloud services consume a lot of energy, making energy sustainability a critical concern. To address this challenge, this article introduces an innovative approach to optimizing energy consumption in cloud environments through knowledge acquisition. The proposed method uses a Knowledge Acquisition version of the Gray Wolf Optimizer (KAGWO) algorithm to collect knowledge on the availability and use of renewable energy within data centers, contributing to improved energy sustainability in cloud computing. KAGWO provides a systematic approach for addressing complex optimization problems by integrating global optimization principles, enhancing decision-making processes with fewer configuration parameters. This work conducts a comparative analysis between the Knowledge Acquisition with a Swarm Intelligence Approach (KASIA) and a Genetic Algorithm (Pittsburgh approach) to highlight the benefits and advantages of the former. By comparing the performance of KAGWO, the Pittsburgh approach, and KASIA in terms of energy sustainability, the study offers valuable insights into the effectiveness of knowledge-acquisition-based algorithms in optimizing energy usage in cloud environments. The results demonstrate that KAGWO outperforms the alternatives, offering more accurate knowledge acquisition capabilities and resulting in enhanced energy sustainability. Overall, the approach demonstrates substantial improvements ranging from 0.53% to 5.23% over previous paper baselines, with particular significance in slightly outperforming the alternatives in small, medium, and large scenarios.
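The plain Gray Wolf Optimizer loop that KAGWO builds on is compact enough to sketch. This is an illustrative implementation of standard GWO minimizing a toy cost function, not KAGWO itself; the paper's knowledge-acquisition layer is not reproduced here:

```python
# Illustrative sketch (not KAGWO): the standard Gray Wolf Optimizer loop.
# The three best wolves (alpha, beta, delta) lead the pack; the coefficient
# `a` decays from 2 to 0, shifting the search from exploration to exploitation.
import numpy as np

def gwo_minimize(f, dim, n_wolves=12, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / iters
        new = np.zeros_like(wolves)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random((2, n_wolves, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            # Each wolf is pulled toward an encircling position per leader.
            new += leader - A * np.abs(C * leader - wolves)
        wolves = np.clip(new / 3, lo, hi)       # average the three pulls
    fitness = np.apply_along_axis(f, 1, wolves)
    return wolves[np.argmin(fitness)]

# Toy objective standing in for an energy-cost model (assumption, not the
# paper's): the sphere function, minimized at the origin.
best = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
print(np.round(best, 3))  # should be close to the origin
```

In KAGWO the decision variables would encode VM placement or migration choices and the objective would be an energy-sustainability model rather than this toy function.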

Language: English

Citations: 3

A hybrid neural network and cooperative PSO model for dynamic cloud workloads prediction
Jitendra Kumar, Deepika Saxena, Jatinder Kumar, et al.

Computing, Journal Year: 2025, Volume and Issue: 107(3)

Published: Feb. 15, 2025

Language: English

Citations: 0