Research Square,
Journal Year:
2022,
Volume and Issue:
unknown
Published: Sept. 14, 2022
Abstract
Virtual Machine (VM) instance price prediction in cloud computing is an emerging and important research area. VM instances are used for different purposes, such as reducing energy consumption, maintaining Service Level Agreements (SLAs), and balancing workload at data centers. In this paper, we propose a Seasonal Auto-Regressive Integrated Moving Average (SARIMA) based prediction model. We also investigate two further models, known as Auto-Regressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM). The experimental results show that the proposed SARIMA(0,1,0)(1,1,0) model outperforms ARIMA and LSTM with a Mean Absolute Percentage Error (MAPE) of 1.147%.
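The MAPE metric used to rank the models above is straightforward to compute; a minimal stdlib-only sketch (the price series and forecasts below are hypothetical, not the paper's data):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    Skips points where the actual value is zero to avoid
    division by zero.
    """
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(abs((a - p) / a) for a, p in pairs) / len(pairs)

# Hypothetical hourly spot prices and one-step-ahead forecasts.
actual = [0.10, 0.12, 0.11, 0.13]
predicted = [0.10, 0.11, 0.12, 0.13]
print(round(mape(actual, predicted), 3))  # 4.356
```

A lower MAPE means tighter forecasts; the paper's 1.147% figure for SARIMA(0,1,0)(1,1,0) would correspond to predictions off by about 1% of the true price on average.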
Journal of King Saud University - Computer and Information Sciences,
Journal Year:
2023,
Volume and Issue:
35(5), P. 101549 - 101549
Published: April 20, 2023
Virtualization technology, represented through Virtual Machines (VMs), is recognized as a key infrastructure in cloud computing. This technology is developing rapidly, and data centers face challenges such as Virtual Machine Placement (VMP) for energy efficiency. VMP is defined as the efficient allocation of VMs to Host Machines (HMs) to achieve various objectives, such as reducing energy consumption and balancing load to avoid Service Level Agreement Violations (SLAV). In this paper, the problem is addressed using a Deep Reinforcement Learning (DRL) based strategy to determine the best mapping between VMs and HMs. We present VMP-A3C, an effective solution based on the Asynchronous Advantage Actor-Critic (A3C) algorithm, a new DRL approach. VMP-A3C aims at placing VMs on HMs without SLAV while reducing energy consumption as much as possible. It learns to dynamically consolidate VMs using migration techniques with a minimum number of migrations, and we believe that there is scope for further improvements by shutting down HMs with little workload after migration. The effectiveness of the proposed method has been evaluated from the aspects of deployment rate, number of shutdown HMs, and number of migrated VMs. The main difference in these terms from the existing state-of-the-art method is 2.54% and 7.14%, respectively.
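The consolidation idea in this abstract (migrate VMs off lightly loaded hosts so those hosts can be shut down) can be illustrated without any learning component. Below is a toy greedy first-fit sketch, not the paper's A3C agent; the host names, loads, capacity, and underload threshold are all invented for illustration:

```python
def consolidate(hosts, capacity=100, underload=25):
    """Greedily migrate VMs off underloaded hosts so they can be
    shut down.

    `hosts` maps host name -> list of VM loads (percent of capacity).
    Returns (new placement, migration count, list of emptied hosts).
    """
    placement = {h: list(vms) for h, vms in hosts.items()}
    migrations = 0
    for h in sorted(placement):
        if not placement[h] or sum(placement[h]) >= underload:
            continue  # host is empty or busy enough to keep running
        for vm in list(placement[h]):
            # First fit: move the VM to another already-active host.
            for target in sorted(placement):
                if (target != h and placement[target]
                        and sum(placement[target]) + vm <= capacity):
                    placement[target].append(vm)
                    placement[h].remove(vm)
                    migrations += 1
                    break
    shut_down = [h for h, vms in placement.items() if not vms]
    return placement, migrations, shut_down

hosts = {"hm1": [50, 30], "hm2": [10], "hm3": [5, 5]}
placement, migrations, shut = consolidate(hosts)
print(migrations, sorted(shut))  # 3 ['hm2', 'hm3']
```

A DRL agent such as VMP-A3C would instead learn which migrations to perform from a reward balancing energy, SLAV, and migration count, rather than following this fixed heuristic.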
Future Internet,
Journal Year:
2024,
Volume and Issue:
16(3), P. 103 - 103
Published: March 19, 2024
The adoption of edge infrastructure in 5G environments stands out as a transformative technology aimed at meeting the increasing demands of latency-sensitive and data-intensive applications. This research paper presents a comprehensive study on the intelligent orchestration of edge computing infrastructures. The proposed Smart Edge-Cloud Management Architecture, built upon an OpenNebula foundation, incorporates the ONEedge5G experimental component, which offers workload forecasting and infrastructure automation capabilities for the optimal allocation of virtual resources across diverse edge locations. We evaluated different forecasting models, based on both traditional statistical techniques and machine learning techniques, comparing their accuracy in CPU usage prediction on a dataset of virtual machines (VMs). Additionally, an integer linear programming formulation was used to solve the optimization problem of mapping VMs to physical servers in the distributed infrastructure. Different criteria, such as minimizing server usage, load balancing, and reducing latency violations, were considered, along with capacity constraints. Comprehensive tests and experiments were conducted to evaluate the efficacy of the proposed architecture.
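The VM-to-server mapping problem in this abstract is a classic integer program: minimize the number of servers used subject to capacity constraints. Since the paper's full ILP formulation is not reproduced here, the toy exhaustive-search sketch below shows the same objective and constraint on an invented instance (a real system would hand the model to an ILP solver instead):

```python
from itertools import product

def min_servers(vm_cpu, server_cap):
    """Exhaustively find an assignment of VMs to servers that
    respects CPU capacity and uses the fewest servers.

    Mirrors the ILP objective min sum_j y_j subject to
    sum_i x_ij * cpu_i <= cap_j * y_j, but by brute force,
    so it is only feasible for tiny instances.
    """
    best = None
    for assign in product(range(len(server_cap)), repeat=len(vm_cpu)):
        load = [0] * len(server_cap)
        for vm, srv in zip(vm_cpu, assign):
            load[srv] += vm
        if all(l <= c for l, c in zip(load, server_cap)):
            used = sum(1 for l in load if l > 0)
            if best is None or used < best[0]:
                best = (used, assign)
    return best

# Four VMs (CPU demand) onto three servers of capacity 100 each.
used, assignment = min_servers([40, 30, 30, 20], [100, 100, 100])
print(used)  # 2
```

Additional objectives from the abstract (load balancing, latency violations) would enter as extra terms in the objective or as extra constraints in the same formulation.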
The Journal of Supercomputing,
Journal Year:
2022,
Volume and Issue:
79(6), P. 6674 - 6704
Published: Nov. 10, 2022
Abstract
Many modern applications, both scientific and commercial, are deployed to cloud environments and often employ multiple types of resources. That allows them to efficiently allocate only the resources which are actually needed to achieve their goals. However, in many workloads the actual usage of the infrastructure varies over time, which results in over-provisioning and unnecessarily high costs. In such cases, automatic resource scaling can provide significant cost savings by provisioning only the amount of resources necessary to support the current workload. Unfortunately, due to the complex nature of distributed systems, automatic scaling remains a challenge. The reinforcement learning domain has recently been a very active field of research. Thanks to combining it with Deep Learning, newly designed algorithms improve the state of the art in many domains. In this paper we present our attempt to use recent advancements in Reinforcement Learning to optimize the running of a compute-intensive evolutionary process by automating resource scaling in a heterogeneous compute environment. We describe the architecture of the system and present evaluation results. The experiments include autonomous management of a sample workload and a comparison of its performance with a traditional threshold-based approach. We also provide details of training the scaling policy using the Proximal Policy Optimization (PPO) algorithm. Finally, we discuss the feasibility of extending the presented approach to further scenarios.
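The threshold-based baseline that the RL policy is compared against is easy to make concrete; a minimal sketch, with thresholds and the utilization trace invented rather than taken from the paper's experiments:

```python
def threshold_scaler(utilization, n=1, low=0.3, high=0.8, n_min=1, n_max=10):
    """Classic reactive autoscaling: add a worker when average
    utilization exceeds `high`, remove one when it drops below `low`.

    Returns the worker count after each observation.
    """
    history = []
    for u in utilization:
        if u > high and n < n_max:
            n += 1
        elif u < low and n > n_min:
            n -= 1
        history.append(n)
    return history

# Invented utilization trace: a burst followed by a quiet period.
trace = [0.9, 0.85, 0.6, 0.2, 0.1, 0.5]
print(threshold_scaler(trace))  # [2, 3, 3, 2, 1, 1]
```

An RL policy replaces the fixed `low`/`high` rules with a learned mapping from observed state to scaling actions, which is what lets it adapt to workload patterns a static threshold handles poorly.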
International Journal of Advanced Computer Science and Applications,
Journal Year:
2024,
Volume and Issue:
15(3)
Published: Jan. 1, 2024
The rapid demand for cloud services has provoked providers to efficiently resolve the problem of Virtual Machine Placement in the cloud. This paper presents a VM placement technique using Reinforcement Learning (VMRL) that aims to provide optimal resource and energy management in data centers. It provides better decision-making as it solves the complexity caused by the tradeoff among objectives, and hence is useful for mapping the requested VMs onto a minimum number of Physical Machines. An enhanced Tournament-based selection strategy along with Roulette Wheel sampling has been applied to ensure that the optimization goes through balanced exploration and exploitation, thereby giving better solution quality. Two heuristics have been used for ordering the VMs, considering the impact of CPU and memory utilizations on the placement. Moreover, the concept of a Pareto approximate set is considered so that both objectives are prioritized according to the perspective of users. The proposed technique is implemented in MATLAB 2020b. Simulation analysis showed that VMRL performed preferably well and achieved improvements of 17%, 20%, and 18% in terms of energy consumption, resource utilization, and fragmentation, respectively, in comparison with other multi-objective algorithms.
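The two selection operators named in this abstract, tournament selection and roulette-wheel sampling, can be sketched in a few lines. This is a generic illustration of the operators themselves, not the paper's enhanced variants; the candidate names and fitness values are invented:

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Pick k candidates at random and return the fittest one."""
    contenders = rng.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitness[i])]

def roulette_sample(population, fitness, rng=random):
    """Sample one candidate with probability proportional to fitness."""
    total = sum(fitness)
    r = rng.uniform(0, total)
    acc = 0.0
    for cand, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return cand
    return population[-1]  # guard against float rounding

rng = random.Random(42)
pop = ["p1", "p2", "p3", "p4"]
fit = [1.0, 4.0, 2.0, 3.0]
picks = [roulette_sample(pop, fit, rng) for _ in range(1000)]
print(picks.count("p2") > picks.count("p1"))
```

Tournament selection exerts stronger selective pressure (exploitation), while roulette-wheel sampling keeps weaker candidates in play (exploration), which is the balance the paper's combined strategy targets.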
Computing,
Journal Year:
2024,
Volume and Issue:
106(9), P. 3031 - 3062
Published: July 8, 2024
Abstract
One of the preconditions for efficient cloud computing services is continuous availability to clients. However, there are various reasons for temporary service unavailability, such as routine maintenance, load balancing, cyber-attacks, power management, fault tolerance, emergency incident response, and resource usage. Live Virtual Machine Migration (LVM) is an option to address this by moving virtual machines between hosts without disrupting running services. Pre-copy memory migration is a common LVM approach used in many systems, but it faces challenges with a high rate of frequently updated memory pages, known as dirty pages. Transferring these pages repeatedly during the pre-copy phase prolongs the overall migration time. If a large number of dirty pages remain after a predefined number of page-transfer iterations, the stop-and-copy phase is initiated, which significantly increases downtime and negatively impacts availability. To mitigate this issue, we introduce a prediction-based method that optimizes the migration process by dynamically halting the pre-copy phase when the predicted downtime falls below a threshold. Our proposed machine learning method was rigorously evaluated through experiments conducted on a dedicated testbed using KVM/QEMU technology, involving different VM sizes and memory-intensive workloads. A comparative analysis against the default pre-copy method reveals a remarkable improvement, with an average 64.91% reduction in downtime across RAM configurations under high-write-intensive workloads, along with a reduction in total migration time of approximately 85.81%. These findings underscore the practical advantages of our method in reducing service disruptions in live migration systems.
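The halting rule this abstract describes can be sketched as a simple simulation of iterative pre-copy. The dirty-page model below (pages dirtied in proportion to copy time) and the threshold value are invented stand-ins for the paper's learned predictor:

```python
def precopy_migrate(pages, dirty_rate, bandwidth, downtime_threshold,
                    max_iters=30):
    """Simulate iterative pre-copy live migration.

    Each round re-sends the pages dirtied during the previous round
    (`dirty_rate` pages/second while copying). Pre-copy halts once the
    predicted downtime (remaining dirty pages / bandwidth) drops below
    the threshold, triggering stop-and-copy for the remainder.
    """
    dirty = pages                      # first round copies all memory
    total_time = 0.0
    for _ in range(max_iters):
        round_time = dirty / bandwidth
        total_time += round_time
        dirty = min(pages, dirty_rate * round_time)
        if dirty / bandwidth < downtime_threshold:
            break                      # predicted downtime is acceptable
    downtime = dirty / bandwidth       # stop-and-copy transfers the rest
    return total_time + downtime, downtime

# Invented instance: 100k pages, 3k pages/s dirtied, 10k pages/s link.
total, down = precopy_migrate(pages=100_000, dirty_rate=3_000,
                              bandwidth=10_000, downtime_threshold=0.5)
print(round(total, 2), round(down, 2))  # 14.17 0.27
```

The paper's contribution is replacing the naive `dirty / bandwidth` estimate with a machine-learning prediction of downtime, so the loop can stop earlier and more reliably than a fixed iteration cap allows.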