Cloud computing is a powerful technology that is rapidly growing in popularity, with energy consumption being a major concern. Data centers consume a lot of energy, which can lead to high operating costs and carbon emissions. This is a significant bottleneck restricting the development of cloud computing. If people continue to rely on traditional data centers, the environmental impact will only get worse. Green solutions are needed to address this challenge; they are designed to utilize resources efficiently while minimizing energy consumption. Providers offer different levels of service, defined through service-level agreements (SLAs). SLAs typically include things like availability and response time for the requests assigned by clients. Providers try their best to exploit their resources by allocating them to users, which enhances utilization and lowers idle time. This can be done using scheduling algorithms. In the proposed work, servers maintain the clients' read or write requests. Servers are classified as read and write servers separately based on their capacity, and tasks are assigned to them accordingly. The model uses every server's full potential, resulting in an environment that consumes less energy. The objective of Green Computing Resource Scheduling (GCRS) is to reduce the energy consumed and the carbon emitted by the data center through efficient use of its resources.
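As a rough illustration of the kind of scheduling this abstract describes, the sketch below classifies servers into read and write pools by capacity and greedily assigns each task to the least-loaded matching server. The server and task fields, the capacity metrics, and the greedy rule are illustrative assumptions, not the paper's actual GCRS algorithm.

```python
# Minimal sketch (not the paper's GCRS algorithm): classify servers into
# read and write pools by capacity and greedily assign tasks to the
# least-loaded matching server to reduce idle time.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    read_capacity: int      # assumed capacity metrics
    write_capacity: int
    load: int = 0

@dataclass
class Task:
    name: str
    kind: str                # "read" or "write"
    demand: int

def classify(servers):
    # A server joins the pool where its capacity is larger.
    read_pool = [s for s in servers if s.read_capacity >= s.write_capacity]
    write_pool = [s for s in servers if s.write_capacity > s.read_capacity]
    return read_pool, write_pool

def schedule(tasks, servers):
    read_pool, write_pool = classify(servers)
    assignment = {}
    for task in tasks:
        pool = read_pool if task.kind == "read" else write_pool
        # Pick the currently least-loaded server in the matching pool.
        target = min(pool, key=lambda s: s.load)
        target.load += task.demand
        assignment[task.name] = target.name
    return assignment

if __name__ == "__main__":
    servers = [Server("s1", 80, 20), Server("s2", 30, 90)]
    tasks = [Task("t1", "read", 10), Task("t2", "write", 15)]
    print(schedule(tasks, servers))
```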
Egyptian Informatics Journal, Journal Year: 2023, Volume and Issue: 24(2), P. 277 - 290, Published: April 18, 2023
It is challenging to handle the non-linear power consumption model, complex workflow structures, and diverse user-defined deadlines for energy-efficient scheduling in sustainable cloud computing. Although metaheuristics are very attractive for solving this problem, most of the existing work regards the problem as a black box and ignores the use of domain knowledge. To make up for these shortcomings, this paper tailors an energy-aware intelligent scheduling algorithm (EIS) with three new mechanisms. First, we derive the optimal execution time that minimizes the energy of each task on a given resource. Second, based on this per-task analysis, EIS distributes each task's slack (the difference between its completion time and its deadline) to reduce the voltages and frequencies of task executions for energy saving. Third, EIS mines the idle gaps caused by precedence constraints to further reduce dynamic energy whilst satisfying the workflows' deadline constraints. To measure the performance of EIS, we conduct extensive comparison experiments on actual applications. The results demonstrate that EIS consumes much lower energy than its competitors under different deadlines and has a faster descent rate in the evolution process.
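To make the slack-reclamation idea concrete, the following sketch lowers a task's operating frequency just enough to absorb its slack while still meeting its deadline. The frequency levels, the cubic power model, and the task parameters are illustrative assumptions rather than the EIS formulation.

```python
# Minimal sketch (assumed, not the EIS algorithm): reclaim slack by running
# each task at the lowest available frequency that still meets its deadline.
# Dynamic power is assumed to scale roughly with f^3 for illustration.

FREQUENCIES = [1.0, 0.8, 0.6, 0.4]  # normalized DVFS levels (assumed)

def pick_frequency(exec_time_at_fmax, start_time, deadline):
    """Return the lowest frequency whose stretched execution still fits."""
    slack_budget = deadline - start_time
    for f in sorted(FREQUENCIES):          # try slowest first
        stretched = exec_time_at_fmax / f  # execution time grows as f drops
        if stretched <= slack_budget:
            return f
    return max(FREQUENCIES)                # no slack: run at full speed

def dynamic_energy(exec_time_at_fmax, f):
    """Toy energy model: power ~ f^3, time ~ 1/f, so energy ~ f^2."""
    return (f ** 3) * (exec_time_at_fmax / f)

if __name__ == "__main__":
    # Task needs 10 s at full speed, starts at t=0, deadline at t=20.
    f = pick_frequency(10.0, 0.0, 20.0)
    print(f, dynamic_energy(10.0, f))      # picks 0.6 -> lower energy than f=1.0
```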
Symmetry, Journal Year: 2025, Volume and Issue: 17(2), P. 280 - 280, Published: Feb. 12, 2025
With the increasing volume of scientific computation data and the advancement of computer performance, scientific computing is becoming more dependent on the powerful computing capabilities of cloud computing. On cloud platforms, the tasks in workflows are assigned to computational resources and executed according to specific strategies. Therefore, workflow scheduling has become a key factor affecting execution efficiency. This paper proposes a hybrid algorithm, HICA, to address the workflow scheduling problem in symmetric homogeneous environments with makespan and cost as the optimization goals. HICA combines the Imperialist Competitive Algorithm (ICA) with HEFT by integrating HEFT-generated solutions into the initial population of ICA to accelerate the convergence of ICA. Experimental results show that the proposed approach outperforms other algorithms on real-world applications. Specifically, when the task scale is 100, the average improvements in makespan and cost are 133.89 and 273.33, respectively; when the scale is 1000, they are 371.62 and 9178.98. The makespan and cost for Earth System Model parameter tuning, compared with the scenario without the proposed approach, were improved by 13% and 21%, respectively.
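As a rough sketch of how a heuristic seed can be injected into a population-based search, the code below builds an initial population from one HEFT-like schedule plus random permutations, so the search starts near a good solution and can converge faster. The chromosome encoding, population size, and rank values are placeholders, not the HICA design.

```python
# Minimal sketch (assumed encoding, not the HICA implementation): seed the
# initial population of a population-based search with a heuristic schedule
# (e.g. one produced by HEFT) plus random variants.
import random

def heft_like_order(tasks, ranks):
    """Stand-in for a HEFT schedule: order tasks by descending upward rank."""
    return sorted(tasks, key=lambda t: ranks[t], reverse=True)

def initial_population(tasks, ranks, size=20, seed=0):
    rng = random.Random(seed)
    population = [heft_like_order(tasks, ranks)]   # heuristic seed individual
    while len(population) < size:
        individual = list(tasks)
        rng.shuffle(individual)                    # random individuals fill the rest
        population.append(individual)
    return population

if __name__ == "__main__":
    tasks = ["t1", "t2", "t3", "t4"]
    ranks = {"t1": 40, "t2": 25, "t3": 30, "t4": 10}  # illustrative ranks
    pop = initial_population(tasks, ranks, size=5)
    print(pop[0])   # the HEFT-like seed: ['t1', 't3', 't2', 't4']
```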
Journal of Intelligent Systems, Journal Year: 2025, Volume and Issue: 34(1), Published: Jan. 1, 2025
Abstract
The concept of cloud computing has completely changed how computational resources are delivered and used, by enabling on-demand access to collective resources through the internet. While this technological shift offers unparalleled flexibility, it also brings considerable challenges, especially in scheduling and resource allocation, particularly when optimizing multiple objectives in a dynamic environment. Efficient scheduling and resource allocation are critical in cloud computing, as they directly impact system performance, resource utilization, and cost efficiency under heterogeneous conditions. Existing approaches often face difficulties balancing conflicting objectives, such as reducing task completion time while staying within budget constraints, or minimizing energy consumption while maximizing resource utilization. As a result, many solutions fall short of optimal performance, leading to increased costs and degraded performance. This systematic literature review (SLR) focuses on research conducted between 2019 and 2023. Following the preferred reporting items for systematic reviews and meta-analyses guidelines, it ensures a transparent and replicable process by employing inclusion criteria that reduce bias. The review explores key concepts of resource management and classifies existing strategies into mathematical, heuristic, and hyper-heuristic approaches. It evaluates popular algorithms designed to optimize metrics such as energy consumption, cost reduction, makespan minimization, performance, and user satisfaction. Through comparative analysis, the SLR discusses the strengths and limitations of various schemes and identifies emerging trends. It underscores the steady growth of the field, emphasizing the importance of developing efficient solutions to address the complexities of modern cloud systems. The findings provide a comprehensive overview of current methodologies and pave the way for future research aimed at tackling unresolved challenges in resource management. This work serves as a valuable reference for practitioners and academics seeking to optimize cloud environments, contributing to advancements in cloud computing.
Journal of Network and Computer Applications, Journal Year: 2022, Volume and Issue: 203, P. 103400 - 103400, Published: April 29, 2022
Processing large scientific applications generates a huge amount of data, which makes running experiments in the cloud computing environment very expensive and energy-consuming. To find an optimal solution to the workflow scheduling problem, several approaches have been presented for scheduling workflows on cloud resources. However, more efficient approaches are needed to improve service delivery. In this paper, an energy-efficient virtual machine mapping algorithm (EViMA) is proposed for resource management to achieve effective scheduling that reduces data center energy consumption, execution makespan, and cost. This ensures that the requirements of users are met and improves the quality of the services offered by providers. Our mechanism considers heterogeneity from both the users' and the applications' perspectives. Through simulation with real datasets, we show that EViMA can provide better solutions for providers, reducing cost compared with the state-of-the-art.
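As an illustration of what an energy-aware task-to-VM mapping might look like, the sketch below places each task on the VM that minimizes an assumed weighted combination of estimated energy, finish time, and monetary cost. The weights, the cost model, and the VM attributes are illustrative assumptions, not the EViMA procedure.

```python
# Minimal sketch (assumed model, not EViMA): map each task to the VM that
# minimizes a weighted sum of estimated energy, finish time, and monetary cost.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float            # processing speed
    power_watts: float     # assumed power draw while busy
    price_per_hour: float
    ready_time: float = 0.0

def place(task_length_mi, vms, w_energy=0.4, w_time=0.4, w_cost=0.2):
    """Return the name of the VM with the lowest weighted score for this task."""
    def score(vm):
        runtime = task_length_mi / vm.mips                  # seconds
        energy = vm.power_watts * runtime                   # joules
        cost = vm.price_per_hour * runtime / 3600.0         # currency units
        finish = vm.ready_time + runtime
        return w_energy * energy + w_time * finish + w_cost * cost
    best = min(vms, key=score)
    best.ready_time += task_length_mi / best.mips           # occupy the VM
    return best.name

if __name__ == "__main__":
    vms = [VM("small", 1000, 80, 0.05), VM("large", 4000, 200, 0.20)]
    for length in (20000, 5000, 40000):
        print(length, "->", place(length, vms))
```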