Network Computation in Neural Systems, Journal Year: 2024, Volume and Issue: unknown, P. 1-31. Published: Oct. 9, 2024
An efficient resource utilization method can greatly reduce expenses and wasted resources. Typical cloud planning approaches lack support for the emerging paradigm of speed-optimized asset management. Cloud computing relies heavily on task allocation, so the scheduling problem is all the more crucial: application jobs supplied by customers must be arranged and allotted to Virtual Machines (VMs) in a specific manner, and this needs to be explicitly modeled to increase efficiency. To that end, a cloud environment model is developed using optimization techniques, intending to optimize both task scheduling and VM placement over the cloud environment. In this model, a new hybrid meta-heuristic algorithm named the Hybrid Lemurs-based Gannet Optimization Algorithm (HL-GOA) is introduced. A multi-objective function is considered with constraints such as cost, time, resource utilization, makespan, and throughput. The proposed model is further validated and compared against existing methodologies. The total time required is reduced by 30.23%, 6.25%, 11.76%, and 10.44% compared with ESO, RSO, LO, and GOA with 2 VMs. The simulation outcomes reveal that the proposed model effectively resolves VM scheduling and placement issues.
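A multi-objective function over cost, makespan, and utilization can be sketched as a weighted scalarization. The weights and normalization below are illustrative assumptions, not HL-GOA's published formulation (the abstract lists the objectives but not how they are combined):

```python
# Illustrative weights -- assumptions, not taken from the paper.
WEIGHTS = {"cost": 0.3, "makespan": 0.4, "utilization": 0.3}

def makespan(assignment, task_len, vm_speed):
    """Completion time of the busiest VM under a task -> VM assignment."""
    load = {}
    for task, vm in enumerate(assignment):
        load[vm] = load.get(vm, 0.0) + task_len[task] / vm_speed[vm]
    return max(load.values())

def fitness(assignment, task_len, vm_speed, vm_cost):
    """Lower is better: weighted sum of makespan and execution cost,
    minus a reward for spreading work across more of the available VMs."""
    ms = makespan(assignment, task_len, vm_speed)
    cost = sum(task_len[t] / vm_speed[v] * vm_cost[v]
               for t, v in enumerate(assignment))
    used = len(set(assignment)) / len(vm_speed)
    return (WEIGHTS["makespan"] * ms
            + WEIGHTS["cost"] * cost
            - WEIGHTS["utilization"] * used)
```

Any meta-heuristic (HL-GOA included) would then search over `assignment` vectors to minimize this scalar.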
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 11354-11377. Published: Jan. 1, 2024
Task scheduling is a crucial challenge in the cloud computing paradigm, as a variety of tasks with different runtimes and processing capacities, generated from various heterogeneous devices, arrive at the application console, affecting system performance in terms of makespan, resource utilization, and cost. Traditional algorithms therefore may not adapt to this scenario efficiently. Many existing authors have developed task schedulers using metaheuristic approaches to solve the task scheduling problem (TSP) and obtain near-optimal solutions, but TSP remains a highly dynamic and challenging scenario because it is an NP-hard problem. To tackle this challenge, this paper introduces a multi-objective prioritized task scheduler based on an improved asynchronous advantage actor-critic (A3C) algorithm, which assigns priorities based on the length of tasks and the electricity unit cost of VMs in the environment. The scheduling process is carried out in two stages. In the first stage, priorities of all incoming tasks and VMs are calculated at the task-manager level; in the second stage, these priorities are fed to the scheduler (MOPTSA3C) to generate scheduling decisions that map tasks effectively onto VMs while considering schedule cost, makespan, and available resources. Extensive simulations are conducted with the CloudSim toolkit, giving as input traces both fabricated data distributions and real-time worklogs from the HPC2N and NASA datasets. To evaluate the efficacy of the proposed MOPTSA3C, it is compared against existing techniques, i.e., DQN, A2C, and MOABCQ. From the results, it is evident that MOPTSA3C outperforms these techniques, including in terms of reliability.
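The first stage, computing priorities from task length and VM electricity cost, can be sketched as below. The ordering rules are assumptions (the abstract names the priority inputs but not the formulas), and the round-robin mapping is only a stand-in for the A3C policy used in the second stage:

```python
def task_priorities(task_lengths):
    # Assumption: longer tasks get higher priority; the paper bases
    # priority on task length but the exact ordering is not given here.
    return sorted(range(len(task_lengths)), key=lambda t: -task_lengths[t])

def vm_priorities(unit_costs):
    # VMs with a cheaper electricity unit cost are preferred.
    return sorted(range(len(unit_costs)), key=lambda v: unit_costs[v])

def greedy_map(task_lengths, unit_costs):
    """Stand-in for stage two: round-robin over prioritized VMs.
    MOPTSA3C instead feeds the priorities to an improved A3C policy."""
    vms = vm_priorities(unit_costs)
    return {t: vms[i % len(vms)]
            for i, t in enumerate(task_priorities(task_lengths))}
```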
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 67130-67148. Published: Jan. 1, 2024
The increasing demand for Cloud services, with sudden resource requirements from Virtual Machines (VMs) of different types and sizes, may create an unbalanced state in the datacenters. In turn, this leads to low resource utilization and slows down server performance. This research article proposes an enhanced version of Artificial Rabbit Optimization (ARO) called Improved ARO based on Pattern Search (IARO-PS), where ARO is utilized to dynamically schedule independent user requests (tasks), thereby overcoming the challenges discussed above, and a Pattern Search (PS) method is hybridized with it to address ARO's shortcomings and provide a better exploration-exploitation balance. The initial step of the proposed approach is to employ a load-balancing strategy that divides workloads (user requests) across the available VMs. The next step utilizes IARO-PS to map tasks onto optimal VMs so that the scheduling process is carried out over diverse resources. A standard benchmark function suite (CEC2017) is used to assess the technique's efficacy. A comprehensive evaluation is then carried out in CloudSim, taking a real-world dataset with task specifications, to evaluate the performance of the methodology. Additionally, a simulation-based comparison is made with various metaheuristic-based workload scheduling methods such as the Genetic Algorithm (GA), Bird Swarm Optimization (BSO), Modified Particle Swarm Optimization with Q-learning (QMPSO), and the Multi-Objective Grey Wolf Optimizer (MGWO). Based on the simulations, the proposed algorithm performed better than the previously mentioned algorithms, reducing makespan by 10.45%, 2.31%, and 4.35% (vs. MGWO) and by 15.35%, 4.17%, 1.03%, 1.44%, and 7.33% in homogeneous and heterogeneous surroundings, respectively, and improving resource utilization by 36.74%, 14.31%, 19.75%, and 45.23% (vs. BSO) and by 12.17%, 6.02%, 9.10%, and 19.39% (vs. BSO). Furthermore, statistical analysis through Friedman's test and Holm's procedure also showcases the decrease in makespan and increase in VM utilization observed in the simulated experimental study.
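The Pattern Search half of such a hybrid is a derivative-free local refinement step: probe each coordinate of a candidate at the current step size, keep any improvement, and shrink the step when nothing improves. This is a generic PS sketch, not the paper's exact IARO-PS coupling:

```python
def pattern_search(x, f, step=1.0, tol=1e-3):
    """Refine candidate x (list of floats) under objective f by
    coordinate probing; halve the step whenever no probe improves."""
    best = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                v = f(trial)
                if v < best:
                    x, best, improved = trial, v, True
        if not improved:
            step /= 2.0
    return x, best
```

In a hybrid, this routine would be called on the best rabbits found by ARO each generation, trading a few extra evaluations for sharper exploitation.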
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1). Published: Feb. 20, 2025
The fast growth of the Internet of Everything (IoE) has resulted in an exponential rise in network data, increasing the demand for distributed computing. Data collection and management with job scheduling using wireless sensor networks are considered essential requirements of the IoE environment; however, security issues over data on online platforms and energy consumption must be addressed. A Secure Edge-Enabled Multi-Task Scheduling (SEE-MTS) model has been suggested to properly allocate jobs across machines while considering the availability of relevant data copies. The proposed approach leverages edge computing to enhance the efficiency of applications, addressing the growing need to manage the huge volumes of data generated by IoE devices. The system ensures user data protection through dynamic updates, multi-key search generation, encryption, and verification of result accuracy. An MTS mechanism is employed to optimize resource usage by allocating time slots to the various processing tasks. Energy consumption is assessed across task queues, preventing node overloading and minimizing disruptions. Additionally, reinforcement learning techniques are applied to reduce overall task completion time with minimal data. Efficiency improves thanks to reduced energy consumption and shorter delay, reaction, and completion times. Results indicate that SEE-MTS achieves energy utilization of 4 J, a delay of 2 s, a reaction time of 4 s, efficiency at 89%, and a security level of 96%. With a computation time of 6 s, the model offers strong security while reducing completion times, although real-world implementation may be limited by the number of devices and incoming tasks.
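The overload-avoiding slot allocation can be sketched as least-loaded placement with a hard capacity cut-off. This is an illustrative stand-in only; SEE-MTS's energy model and reinforcement-learning policy are not reproduced here:

```python
def assign_slots(tasks, nodes, capacity):
    """Give each task a slot on the least-loaded node that is still
    below capacity, so no node is overloaded."""
    load = {n: 0 for n in nodes}
    placement = {}
    for t in tasks:
        candidates = [n for n in nodes if load[n] < capacity]
        if not candidates:
            raise RuntimeError("all nodes at capacity")
        chosen = min(candidates, key=lambda n: load[n])
        load[chosen] += 1
        placement[t] = chosen
    return placement
```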
Computers, Journal Year: 2025, Volume and Issue: 14(3), P. 81-81. Published: Feb. 24, 2025
Priority in task scheduling and resource allocation for cloud computing has attracted significant attention from the research community. However, traditional algorithms often lack the ability to differentiate between tasks with varying levels of importance. This limitation presents a challenge when servers must handle diverse tasks of distinct priority classes under strict quality-of-service requirements. To address these challenges in cloud environments, particularly within infrastructure-as-a-service models, we propose a novel, self-adaptive, multiclass scheduling algorithm with VM clustering for task allocation. It implements a four-tiered prioritization system to optimize key objectives, including makespan and energy consumption, while simultaneously optimizing resource utilization, degree of imbalance, and waiting time. Additionally, a load-balancing model based on this technique is incorporated. The proposed work was validated through multiple simulations using the CloudSim simulator, comparing its performance against well-known algorithms. The simulation results and their analysis demonstrate that the proposed algorithm effectively optimizes makespan and energy consumption. Specifically, our algorithm achieved percentage improvements ranging from +0.97% to +26.80% in makespan and from +3.68% to +49.49% in energy consumption, while also improving other metrics such as throughput and load balancing. The novel algorithm demonstrably enhances scheduling efficiency, particularly in complex scenarios with tight deadlines and mixed priorities.
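A four-tiered prioritization system amounts to dispatching from a priority queue that is FIFO within each class. The four-class split is the paper's; how a task's class is derived from its attributes is not described in the abstract, so tasks arrive here already classified:

```python
import heapq

def schedule(tasks):
    """tasks: list of (priority_class, task_id), class 0 = highest of
    the four tiers. Returns execution order: highest class first,
    FIFO (by arrival sequence) within a class."""
    heap = [(cls, seq, tid) for seq, (cls, tid) in enumerate(tasks)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

The `(class, sequence)` key is what makes ties within a tier resolve in arrival order rather than arbitrarily.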
Concurrency and Computation: Practice and Experience, Journal Year: 2025, Volume and Issue: 37(9-11). Published: April 9, 2025
ABSTRACT
Rising global dependence on cloud services has become crucial for enterprises aiming to guarantee continuous data accessibility while pursuing enhanced energy efficiency and minimized carbon emissions from data centers. However, the persistent challenge of high energy consumption in these facilities necessitates a concentrated approach toward its reduction. This paper introduces an innovative multi-objective scheduling strategy for scientific workflows, tailored to heterogeneous computing environments. Our method employs a hybrid genetic algorithm, incorporating Hill Climbing to generate the initial population of chromosomes. Subsequently, the algorithm optimizes task assignments to the most suitable virtual machines, utilizing a meticulously designed fitness function to evaluate each chromosome's suitability for solving the problem. Through extensive experimentation, we demonstrate that our proposed strategy outperforms other techniques in terms of solution quality, contributing to reduced energy consumption, processing duration, and cost. We contend that this approach holds substantial potential for mitigating the carbon footprint associated with data centers, offering sustainable and environmentally conscious workflow scheduling.
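Seeding a genetic algorithm's population with Hill Climbing, as described above, can be sketched as follows. The chromosome encoding (a task-to-VM index list), the fitness signature, and the iteration budget are illustrative assumptions; the GA's own crossover/mutation loop is omitted:

```python
import random

def hill_climb(chrom, fitness, vm_count, iters=50):
    """Local improvement of one chromosome (a task -> VM index list):
    mutate one gene at a time, keep the change only if fitness drops."""
    best, best_f = chrom[:], fitness(chrom)
    for _ in range(iters):
        trial = best[:]
        trial[random.randrange(len(trial))] = random.randrange(vm_count)
        f = fitness(trial)
        if f < best_f:
            best, best_f = trial, f
    return best

def seed_population(size, n_tasks, vm_count, fitness):
    """Hill-Climbing-seeded initial population: each random chromosome
    is locally improved before the GA proper begins."""
    return [hill_climb([random.randrange(vm_count) for _ in range(n_tasks)],
                       fitness, vm_count)
            for _ in range(size)]
```

Starting the GA from locally improved chromosomes typically shortens convergence at the cost of some initial diversity, which the GA's mutation operator then has to restore.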
IEEE Access, Journal Year: 2024, Volume and Issue: 12, P. 23529-23548. Published: Jan. 1, 2024
Cloud computing has been imperative for systems worldwide since its inception. Researchers strive to leverage efficient utilization of cloud resources to execute workloads quickly while providing better quality of service. Among the several challenges on the cloud, task scheduling is one of the fundamental NP-hard problems. Meta-heuristic algorithms are extensively employed to solve task scheduling as a discrete optimization problem, and many such algorithms have been developed. However, they have their own strengths and weaknesses: local optima, poor convergence, high execution time, and poor scalability are the predominant issues among them. In this paper, a Parallel Enhanced Whale Optimization Algorithm (PEWOA) is proposed to schedule independent tasks on heterogeneous resources. It improves solution diversity and avoids local optima using a modified encircling maneuver and an adaptive bubble-net attacking mechanism, while a parallelization technique keeps execution time low despite the internal complexity. The algorithm minimizes makespan while improving resource utilization and throughput. The study demonstrates the effectiveness of PEWOA against the best-performing WOAmM and Multi-core Random Matrix Particle Swarm Optimization (MRMPSO) algorithms. PEWOA consistently produces better results for a varying number of tasks on the GoCJ dataset, indicating scalability. The experiments were conducted in CloudSim utilizing a variety of HCSP instances, and various statistical tests were performed to evaluate the significance of the results.
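The encircling maneuver referred to above can be sketched in its standard whale optimization form; the coefficients `A` and `C` follow the original WOA, and PEWOA's specific modification is not reproduced here:

```python
import random

def encircle(whale, leader, a):
    """One standard WOA encircling step toward the current best whale.
    Continuous form; a discrete task-scheduling variant such as PEWOA
    would presumably round/clip each coordinate to a valid VM index."""
    new = []
    for x, xb in zip(whale, leader):
        A = 2 * a * random.random() - a   # |A| < 1 pulls toward the leader
        C = 2 * random.random()           # perturbs the leader's position
        D = abs(C * xb - x)               # distance to the perturbed leader
        new.append(xb - A * D)
    return new
```

The parameter `a` is decayed from 2 to 0 over iterations in standard WOA, shifting the swarm from exploration (large `|A|`) to exploitation (small `|A|`).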