IEEE Transactions on Parallel and Distributed Systems, Journal year: 2024, Issue 36(2), pp. 168-184
Published: Oct. 14, 2024
Language: English
Complex System Modeling and Simulation, Journal year: 2021, Issue 1(4), pp. 257-270
Published: Dec. 1, 2021
As the critical component of manufacturing systems, production scheduling aims to optimize objectives in terms of profit, efficiency, and energy consumption by reasonably determining the main factors, including processing path, machine assignment, execution time, and so on. Due to the large scale and strongly coupled constraints of such problems, as well as the real-time solving requirements of certain scenarios, production scheduling faces great challenges. With the development of machine learning, Reinforcement Learning (RL) has made breakthroughs in a variety of decision-making problems. For scheduling problems, in this paper we summarize the designs of state and action, tease out RL-based algorithms for scheduling, review applications of RL to different types of scheduling problems, and then discuss fusion modes of reinforcement learning and meta-heuristics. Finally, we analyze the existing problems in current research and point out future research directions and significant contents to promote scheduling optimization.
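The state/action design the survey refers to can be made concrete with a minimal sketch: tabular Q-learning on a toy machine-assignment problem. The state encoding (job index only), durations, and reward shaping are all illustrative assumptions, not any specific algorithm from the surveyed papers.

```python
import random

def train_toy_scheduler(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning sketch for machine assignment.

    State: index of the next job to place (3 jobs, hypothetical setup).
    Action: which of 2 machines receives the job.
    Reward: negative growth of the makespan, so the learned policy
    is pushed toward balancing load across machines.
    """
    random.seed(seed)
    durations = [4, 2, 2]                       # illustrative processing times
    q = {(s, a): 0.0 for s in range(3) for a in range(2)}
    for _ in range(episodes):
        loads = [0, 0]
        for job in range(3):
            if random.random() < eps:           # epsilon-greedy exploration
                a = random.randrange(2)
            else:
                a = max(range(2), key=lambda m: q[(job, m)])
            before = max(loads)
            loads[a] += durations[job]
            reward = -(max(loads) - before)     # penalize makespan increase
            nxt = 0.0 if job == 2 else max(q[(job + 1, m)] for m in range(2))
            q[(job, a)] += alpha * (reward + gamma * nxt - q[(job, a)])
    return q

q = train_toy_scheduler()
# Greedy rollout of the learned per-job policy:
loads = [0, 0]
for job, d in enumerate([4, 2, 2]):
    m = max(range(2), key=lambda mm: q[(job, mm)])
    loads[m] += d
```

Real production scheduling states would encode machine loads and constraints rather than a bare job index; the sketch only shows the RL bookkeeping the survey's taxonomy is about.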
Language: English
Cited: 187
The Computer Journal, Journal year: 2022, Issue 65(11), pp. 2909-2925
Published: Aug. 24, 2022
Abstract Quality of data services is crucial for an operational large-scale internet-of-things (IoT) research infrastructure, in particular when serving large numbers of distributed users. Effectively detecting runtime anomalies and diagnosing their root cause helps to defend against adversarial attacks, thereby essentially boosting the security and robustness of the IoT infrastructure services. However, conventional anomaly detection methods are inadequate when facing the dynamic complexities of these systems, and supervised machine learning is unable to exploit them due to the unavailability of labeled data. This paper leverages popular GAN-based generative models and end-to-end one-class classification to improve unsupervised anomaly detection. A novel heterogeneous BiGAN-based model, Heterogeneous Temporal Anomaly-reconstruction GAN (HTA-GAN), is proposed to make better use of a classifier and scoring function. The Generator-Encoder-Discriminator BiGAN structure can lead to practical anomaly score computation and temporal feature capturing. We empirically compare the approach with several state-of-the-art methods on real-world datasets, benchmarks and synthetic datasets. The results show that HTA-GAN outperforms its competitors and demonstrates robustness.
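The anomaly scoring that BiGAN-style detectors use is typically a weighted combination of reconstruction error and discriminator confidence. The sketch below illustrates that general pattern; the weight `alpha` and the inputs are hypothetical, not HTA-GAN's actual formulation.

```python
import math

def anomaly_score(x, x_rec, disc_prob_real, alpha=0.9):
    """BiGAN-style score sketch: combines the reconstruction error
    ||x - G(E(x))|| with the discriminator's belief that the sample
    is fake. Higher score = more anomalous. alpha is an assumed weight."""
    rec = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_rec)))
    disc = 1.0 - disc_prob_real          # high when the discriminator rejects x
    return alpha * rec + (1 - alpha) * disc

# A well-reconstructed, discriminator-accepted point vs. a poorly
# reconstructed, rejected one (toy 2-D feature vectors):
normal = anomaly_score([1.0, 2.0], [1.0, 2.1], disc_prob_real=0.95)
anom = anomaly_score([1.0, 2.0], [4.0, 0.0], disc_prob_real=0.10)
```

In the real model, `x_rec` comes from passing the input through the encoder and generator, and `disc_prob_real` from the discriminator over (sample, latent) pairs.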
Language: English
Cited: 100
Journal of Systems and Software, Journal year: 2021, Issue 184, pp. 111124-111124
Published: Oct. 22, 2021
Language: English
Cited: 69
Journal of Network and Computer Applications, Journal year: 2023, Issue 216, pp. 103648-103648
Published: May 4, 2023
In recent years, the landscape of computing paradigms has witnessed a gradual yet remarkable shift from monolithic to distributed and decentralized paradigms such as the Internet of Things (IoT), Edge, Fog, Cloud, and Serverless. The frontiers of these technologies have been boosted by both manually encoded algorithms and Artificial Intelligence (AI)-driven autonomous systems for optimum and reliable management of resources. Prior work focuses on improving existing systems using AI across a wide range of domains, such as efficient resource provisioning, application deployment, task placement, and service management. This survey reviews the evolution of data-driven AI-augmented technologies and their impact on these systems. We demystify new techniques and draw key insights into Fog and Cloud management-related uses of AI methods. We also look at how AI can innovate traditional applications for enhanced Quality of Service (QoS) in the presence of the computing continuum, present the latest trends and focus areas in optimizing the models that are deployed there, and lay out a roadmap of future research directions in QoS optimization and reliability. Finally, we discuss blue-sky ideas and envision this work as an anchor point for future AI-driven research.
Language: English
Cited: 39
IEEE Transactions on Cloud Computing, Journal year: 2022, Issue 11(2), pp. 1871-1885
Published: April 21, 2022
As the workloads and service requests in cloud computing environments change constantly, cloud-based software services need to adaptively allocate resources to ensure Quality-of-Service (QoS) while reducing resource costs. However, it is very challenging to achieve adaptive resource allocation with complex and variable system states. Most existing methods only consider the current condition of workloads, and thus cannot adapt well to real-world environments subject to fluctuating workloads. To address this challenge, we propose a novel Deep Reinforcement learning based resource Allocation method with workload-time Windows (DRAW) that considers both current and future workloads in the allocation process. Specifically, an original Deep Q-Network (DQN) prediction model of management operations is trained on workload-time windows, which can be used to predict appropriate operations under different system states. Next, a new feedback-control mechanism is designed to construct the objective resource allocation plan for a given state through iterative execution of management operations. Extensive simulation results demonstrate that the accuracy of management operations generated by the proposed DRAW method can reach 90.69%. Moreover, DRAW can achieve optimal/near-optimal performance and outperform other classic methods.
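The key idea of deciding over a workload-time window rather than the instantaneous workload can be illustrated with a toy stand-in: a sliding-window average driving scale-up/scale-down operations. The window size and thresholds are invented for illustration; DRAW itself learns these decisions with a DQN.

```python
from collections import deque

def plan_operations(workload, window=3, high=70.0, low=30.0):
    """Window-based adaptive allocation sketch: average the workload
    over a sliding time window (instead of reacting to single spikes)
    and emit scale-up / scale-down / hold management operations.
    Thresholds are illustrative, not from the paper."""
    buf = deque(maxlen=window)
    ops = []
    for w in workload:
        buf.append(w)
        avg = sum(buf) / len(buf)
        if avg > high:
            ops.append("scale_up")
        elif avg < low:
            ops.append("scale_down")
        else:
            ops.append("hold")
    return ops

# A workload trace with a transient spike: the window smooths it so the
# system does not thrash on the single 80 -> 90 -> 95 burst edges.
ops = plan_operations([20, 25, 80, 90, 95, 40, 20, 10])
```

Replacing the fixed thresholds with a learned Q-function over the window features is precisely the step the abstract's DQN prediction model takes.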
Language: English
Cited: 31
Computer Communications, Journal year: 2023, Issue 200, pp. 86-94
Published: Jan. 9, 2023
Language: English
Cited: 15
IEEE Access, Journal year: 2024, Issue 12, pp. 11354-11377
Published: Jan. 1, 2024
Task scheduling is a crucial challenge in the cloud computing paradigm, as a variety of tasks with different runtimes and processing capacities, generated from various heterogeneous devices, arrive at the application console, which affects system performance in terms of makespan, resource utilization, and cost. Therefore, traditional algorithms may not adapt to this efficiently. Many existing authors have developed task schedulers using metaheuristic approaches to solve the task scheduling problem (TSP) and obtain near-optimal solutions, but TSP remains a highly dynamic and challenging scenario as it is an NP-hard problem. To tackle this challenge, this paper introduces a multi-objective prioritized task scheduler with an improved asynchronous advantage actor critic (A3C) algorithm, which uses priorities based on the length of tasks and on the VMs' electricity unit cost in the environment. The scheduling process is carried out in two stages. In the first stage, priorities of all incoming tasks and VMs are calculated at the task manager level; in the second stage, the priorities are fed to the scheduler (MOPTSA3C) to generate scheduling decisions that map tasks effectively onto VMs while considering schedule cost, makespan, and available resources. Extensive simulations are conducted in the Cloudsim toolkit, giving as input traces of fabricated data distributions and real-time worklogs from the HPC2N and NASA datasets. For evaluating the efficacy of the proposed MOPTSA3C, it is compared against existing techniques, i.e. DQN, A2C, and MOABCQ. From the results, it is evident that MOPTSA3C outperforms the compared techniques for reliability.
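The two-stage priority bookkeeping described above (rank by task length, then map onto VMs weighted by electricity unit cost) can be sketched with a greedy baseline. The cost-weighted finish-time score is a hypothetical stand-in for the learned A3C policy, shown only to make the priority inputs concrete.

```python
def prioritize_and_map(task_lengths, vm_speeds, vm_unit_costs):
    """Two-stage sketch: stage 1 orders tasks longest-first (the
    length-based priority); stage 2 greedily assigns each task to the
    VM minimizing projected completion time weighted by its electricity
    unit cost. Not the actual A3C policy, just the surrounding logic."""
    order = sorted(range(len(task_lengths)), key=lambda t: -task_lengths[t])
    finish = [0.0] * len(vm_speeds)      # running finish time per VM
    plan = {}
    for t in order:
        def score(v):
            # projected completion time on VM v, weighted by its unit cost
            return (finish[v] + task_lengths[t] / vm_speeds[v]) * vm_unit_costs[v]
        v = min(range(len(vm_speeds)), key=score)
        finish[v] += task_lengths[t] / vm_speeds[v]
        plan[t] = v
    return plan, max(finish)

# 3 tasks, 2 VMs: VM 1 is twice as fast but costs 1.5x per unit.
plan, makespan = prioritize_and_map([400, 100, 300], [10, 20], [1.0, 1.5])
```

An RL scheduler replaces the hand-written `score` with a value estimate, but consumes the same priority features.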
Language: English
Cited: 5
Computers in Biology and Medicine, Journal year: 2024, Issue 172, pp. 108152-108152
Published: Feb. 13, 2024
Language: English
Cited: 4
Future Internet, Journal year: 2024, Issue 16(3), pp. 103-103
Published: March 19, 2024
The adoption of edge infrastructure in 5G environments stands out as a transformative technology aimed at meeting the increasing demands of latency-sensitive and data-intensive applications. This research paper presents a comprehensive study on the intelligent orchestration of edge computing infrastructures. The proposed Smart Edge-Cloud Management Architecture, built upon an OpenNebula foundation, incorporates ONEedge5G, an experimental component which offers workload forecasting and automation capabilities for the optimal allocation of virtual resources across diverse edge locations. We evaluated different forecasting models, based on both traditional statistical techniques and machine learning techniques, comparing their accuracy on a CPU usage prediction dataset of virtual machines (VMs). Additionally, an integer linear programming formulation was proposed to solve the optimization problem of mapping VMs to physical servers in a distributed edge infrastructure. Different criteria, such as minimizing server usage, load balancing, and reducing latency violations, were considered, along with capacity constraints. Comprehensive tests and experiments were conducted to evaluate the efficacy of the proposed architecture.
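The server-minimization objective in the VM-to-server mapping problem can be made concrete with a first-fit-decreasing sketch. The paper solves this exactly via integer linear programming; the greedy heuristic below is only meant to show the objective (fewest active servers) and the capacity constraint, with made-up CPU demands.

```python
def place_vms(vm_cpu, server_capacity):
    """First-fit-decreasing bin packing as a stand-in for the ILP
    server-minimization objective: sort VMs by CPU demand (descending)
    and place each on the first server with enough remaining capacity,
    opening a new server only when none fits."""
    placement = {}
    servers = []                         # remaining capacity per opened server
    for vm in sorted(range(len(vm_cpu)), key=lambda i: -vm_cpu[i]):
        for s, free in enumerate(servers):
            if vm_cpu[vm] <= free:
                servers[s] -= vm_cpu[vm]
                placement[vm] = s
                break
        else:
            servers.append(server_capacity - vm_cpu[vm])
            placement[vm] = len(servers) - 1
    return placement, len(servers)

# 5 VMs with illustrative CPU demands, homogeneous 8-core servers:
placement, used = place_vms([3, 5, 2, 4, 1], server_capacity=8)
```

The ILP additionally trades off load balancing and latency violations across edge locations, which a single-objective heuristic like this cannot express.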
Language: English
Cited: 4
Transactions on Emerging Telecommunications Technologies, Journal year: 2025, Issue 36(4)
Published: March 20, 2025
ABSTRACT Cloud-based computing is an innovative model that utilizes a variety of self-driving devices and adaptable structures. Efficient cloud operation relies on the critical step of scheduling tasks. In order to decrease energy use and increase service providers' profits by speeding up processing, task planning remains crucial. The main challenge is to allocate tasks to a suitable Virtual Machine (VM) while ensuring profitability. Various techniques ensure Quality of Service (QoS), but as scaling increases, this becomes more challenging. Hence, there is a need for enhanced scheduling. Previous studies did not cover VM migration, which can effectively address resource utilization and efficiency. An advanced deep learning model with a heuristic algorithm is suggested to improve the scheduling process. This work aims to predict data to assist migration through the derivation of a multi-objective function. Initially, data are gathered from benchmark sources. Further, prediction is carried out with a Multiscale Dilated Recurrent Neural Network (MDRNN). To derive the multi-objective function, a Water Strider-based Dingo Optimization Algorithm (WS-DOA) is proposed. Following prediction, migration is performed with the WS-DOA-derived function, considering constraints like cost, energy consumption, response time, and security. Likewise, scheduling involves formulating an objective with WS-DOA for makespan and cost. Finally, the proposed model is examined using diverse metrics and, in contrast to existing methods, acquires higher results for migration.
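The shape of a multi-objective function over cost, energy, response time, and security can be sketched as a weighted sum to be minimized. The weights and the normalization are assumptions for illustration; the paper derives its actual function via WS-DOA rather than fixing weights by hand.

```python
def multi_objective(cost, energy, resp_time, security, weights=(0.3, 0.3, 0.3, 0.1)):
    """Weighted-sum sketch of a migration/scheduling fitness function:
    all inputs are assumed normalized to [0, 1]; security is inverted
    because a higher security score should lower the (minimized) fitness.
    Weights are hypothetical, not the paper's derived values."""
    w1, w2, w3, w4 = weights
    return w1 * cost + w2 * energy + w3 * resp_time + w4 * (1.0 - security)

# A cheap, fast, secure plan vs. an expensive, slow, insecure one:
good = multi_objective(cost=0.2, energy=0.3, resp_time=0.1, security=0.9)
bad = multi_objective(cost=0.8, energy=0.7, resp_time=0.9, security=0.2)
```

A metaheuristic such as WS-DOA would search the space of migration/scheduling plans for the one minimizing this fitness subject to the stated constraints.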
Language: English
Cited: 0