Indian Journal of Science and Technology,
Journal Year: 2024, Volume and Issue: 17(45), P. 4722 - 4731
Published: Dec. 14, 2024
Objectives: To evaluate the efficiency of task prediction and resource allocation for load balancing (LB) in a cloud environment using a combined approach of Random Forest (RF) prediction and Particle Swarm Optimization-Convolutional Neural Network (PSO-CNN) allocation.
Methods: The present study uses an ensemble of Random Forest (RF), a machine learning (ML) model, and Particle Swarm Optimization combined with Convolutional Neural Networks (PSO+CNN), a bio-inspired algorithm paired with Deep Learning (DL); the study employs PSO techniques to optimize the CNN in order to address the investigation of algorithmic DL. The results show that the suggested model outperforms other models such as CNN-LSTM (Long Short-Term Memory), CNN-GRU (Gated Recurrent Unit), and PSO-SVM (Support Vector Machine), increasing the performance and efficacy of the systems. The experiment is implemented in Python and assessed on the publicly accessible Google Cluster dataset.
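The Methods above describe a two-stage pipeline: RF predicts the class of an incoming task, and a CNN whose configuration is tuned by PSO selects the resource to assign. The minimal Python sketch below illustrates one way such a pipeline can be wired together; the synthetic data, feature layout, hyperparameter ranges, and the choice to let PSO tune CNN hyperparameters (rather than its weights) are assumptions for illustration, not details taken from the cited study.

# A minimal, illustrative sketch of the RF + PSO-CNN idea described above. The synthetic
# data, feature layout, hyperparameter ranges, and the decision to let PSO tune CNN
# hyperparameters are assumptions for demonstration, not details from the cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic stand-in for task traces: 8 hypothetical task-level features per request.
X = rng.random((600, 8)).astype("float32")
y_task = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # hypothetical "heavy vs. light" task label
y_vm = (X[:, 2:5].sum(axis=1) > 1.5).astype(int)   # hypothetical "best VM class" label

# Step 1: Random Forest predicts the class of each incoming task.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[:400], y_task[:400])
task_class = rf.predict(X).astype("float32")

# Step 2: a small 1-D CNN maps task features plus the RF prediction to a VM choice.
X_cnn = np.column_stack([X, task_class])[:, :, None]          # shape (600, 9, 1)

def build_cnn(filters, lr):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(9, 1)),
        tf.keras.layers.Conv1D(int(filters), kernel_size=3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

def fitness(params):
    # Validation accuracy of a CNN trained with PSO-proposed hyperparameters.
    filters = np.clip(params[0], 4, 32)
    lr = 10 ** np.clip(params[1], -4, -2)
    model = build_cnn(filters, lr)
    model.fit(X_cnn[:400], y_vm[:400], epochs=2, verbose=0)
    _, acc = model.evaluate(X_cnn[400:], y_vm[400:], verbose=0)
    return acc

# Step 3: PSO searches the CNN hyperparameters (number of filters, log10 learning rate).
n_particles, iters = 4, 3
pos = np.column_stack([rng.uniform(4, 32, n_particles), rng.uniform(-4, -2, n_particles)])
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("PSO-selected CNN hyperparameters (filters, log10 lr):", gbest)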
Findings: The use of ML and DL is found to be more efficient for cloud infrastructure than conventional methods. The study examines RF and hybrid RF-PSO-CNN models. Accuracy, precision, and F1-score metrics were used to assess the classification; the recommended model achieves an accuracy of 90% when contrasted with methods such as CNN-LSTM, CNN-GRU, and PSO-SVM. As a result, in terms of both assessment metrics and resource consumption, the proposed model performs effectively.
Novelty: The study suggests a novel approach for LB in Cloud Computing. Tasks predicted by the RF are assigned to resources chosen by the CNN, thereby improving load balancing. Most existing research addresses only one or two of these steps, either predicting tasks or scheduling the resources to which they are allocated. The combination of the Random Forest (RF) method, Particle Swarm Optimization (PSO), and the Convolutional Neural Network (CNN) applied concurrently makes it effective in this context.
Keywords: Load Balancing (LB), Task Scheduling, Resource Allocation, Convolutional Neural Networks (CNN)
2022 9th International Conference on Computing for Sustainable Global Development (INDIACom),
Journal Year: 2024, Volume and Issue: unknown, P. 1681 - 1686
Published: Feb. 28, 2024
The advent of digital transformation has revolutionized the way businesses operate. Applications have become the focal point of this transformation, shifting the focus from being organization-centric to user-centric. To realize the full potential of businesses, high-quality, secure, and agile applications are essential. Containers, a cutting-edge invention in the world of virtualization, have gained immense popularity in recent years. They have replaced traditional business continuity solutions and are now used to address highly demanding needs. Multiple container orchestration frameworks are available, both as standalone and cloud-based services. However, developers and industry experts face challenges in identifying and evaluating the appropriate framework for their application; the selected tools may not always be feasible due to a lack of available features, an inability to provide agility, or limited platform support. When an application is deployed across multiple containers, coordination and management among container clusters are critical. Since containers play a pivotal role in edge deployment, cloud-native applications, and continuous integration, centralized management and proper resource scheduling are vital. This paper discusses and compares emerging container orchestration platforms and cloud-centric frameworks, highlighting the challenges involved.
Applied Sciences,
Journal Year: 2023, Volume and Issue: 13(21), P. 12015 - 12015
Published: Nov. 3, 2023
Task scheduling poses a wide variety of challenges in the cloud computing paradigm, as heterogeneous tasks from different resources come onto cloud platforms. The most important challenge in this paradigm is to avoid single points of failure, as tasks of various users are running at the cloud provider, and it is very important to improve fault tolerance and maintain negligible downtime in order to render services to a wide range of customers around the world.
In this paper, to tackle this challenge, we precisely calculated priorities for virtual machines (VMs) based on unit electricity cost, and these priorities are fed to the scheduler. This scheduler is modeled using a deep reinforcement learning technique known as the DQN model to make decisions and generate schedules optimally for VMs; the research is extensively conducted in CloudSim.
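As a rough illustration of that scheduling idea, the Python sketch below derives VM priorities from assumed unit electricity costs and lets a small DQN-style agent pick a VM for each incoming task. The environment, reward shaping, cost and speed values, and network size are hypothetical stand-ins, and the replay buffer and target network of a full DQN implementation (as well as the paper's CloudSim setup) are omitted for brevity.

# A compact, illustrative sketch of a DQN-style task scheduler: cost-derived VM priorities
# are part of the state, and the Q-network chooses which VM receives each task. All values
# and the toy environment are assumptions; this is not the cited paper's implementation.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)

N_VMS = 4
vm_cost = np.array([0.10, 0.15, 0.22, 0.30], dtype="float32")   # assumed unit electricity cost per VM
vm_speed = np.array([1.0, 1.3, 1.7, 2.2], dtype="float32")      # assumed relative processing speed

def make_state(queues, task_len):
    # State = [normalized VM queue lengths | cost-based priorities | task length]
    priority = (1.0 / vm_cost) / (1.0 / vm_cost).sum()           # cheaper VM -> higher priority
    return np.concatenate([queues / 10.0, priority, [task_len]]).astype("float32")

q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * N_VMS + 1,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(N_VMS),                                # one Q-value per VM (action)
])
opt = tf.keras.optimizers.Adam(1e-3)
gamma, eps = 0.9, 0.2

def step(queues, task_len, action):
    # Toy transition: assign the task to VM `action`; reward penalizes delay and energy cost.
    finish_time = (queues[action] + task_len) / vm_speed[action]
    energy = task_len * vm_cost[action]
    queues = queues.copy()
    queues[action] += task_len
    return queues, -(finish_time + energy)

queues = np.zeros(N_VMS, dtype="float32")
for episode in range(200):                                       # tiny training loop, no replay/target net
    task_len = float(rng.uniform(0.5, 3.0))
    state = make_state(queues, task_len)
    if rng.random() < eps:
        action = int(rng.integers(N_VMS))                        # explore
    else:
        action = int(np.argmax(q_net(state[None])[0]))           # exploit
    next_queues, reward = step(queues, task_len, action)
    next_state = make_state(next_queues, float(rng.uniform(0.5, 3.0)))
    target = reward + gamma * float(tf.reduce_max(q_net(next_state[None])))
    with tf.GradientTape() as tape:
        q_values = q_net(state[None])
        loss = (q_values[0, action] - target) ** 2               # one-step TD error
    grads = tape.gradient(loss, q_net.trainable_variables)
    opt.apply_gradients(zip(grads, q_net.trainable_variables))
    queues = next_queues * 0.9                                   # queues drain over time

print("Learned Q-values for an empty system:",
      q_net(make_state(np.zeros(N_VMS, "float32"), 1.0)[None]).numpy())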
In this research, a real-time dataset of Google Cloud Jobs is used and given as input to the algorithm. The experiments are carried out in two phases by categorizing tasks as regular or large, with fixed and varied VMs for both datasets.
Our proposed DRFTSA is compared with existing state-of-the-art approaches, i.e., the PSO, ACO, and GA algorithms, and the results reveal that it minimizes makespan compared to PSO, GA, and ACO by 30.97%, 35.1%, and 37.12%, rates of failure by 39.4%, 44.13%, and 46.19%, and energy consumption by 18.81%, 23.07%, and 28.8%, respectively, for both datasets and VM configurations.