Fog computing has become an attractive method for different IoT (Internet of Things) applications that require low latency and location awareness. By bringing computational power to the edge of a network, or nearer to the traffic generators, it works as a perfect complement to cloud computing. Although fog computing offers many advantages, the limited resources (CPU processing capacity, bandwidth, memory, backup) of fog nodes mean that a framework for combating these limitations is highly desired. In this work, we formulate an optimization model for a cooperative fog environment dealing with dynamic traffic. We analyzed how arrival rates impact bandwidth costs, link utilization, and server resource utilization. As shown in the paper, by adopting the proposed techniques across different arrival rates, the utilization of the fog layer stays higher than that of the other resources. We also found that the blocking rate remains within an acceptable range (0-15%). Finally, we identified the driving factors associated with blocking, in this case shortages of bandwidth (network resources), which are responsible for generating blocking in our network.
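As a rough illustration of how arrival rates can push blocking toward the upper end of such a range, the sketch below evaluates the classic Erlang B loss formula for a capacity-limited link. The loss-system model, service rate, and channel count are assumptions made for illustration and are not taken from the paper's optimization model.

```python
# Minimal sketch (assumed Erlang B loss model, not the paper's formulation):
# blocking probability grows with the arrival rate for a fixed link capacity.
def erlang_b(offered_load, channels):
    """Blocking probability B(E, m) via the standard recursion."""
    b = 1.0
    for k in range(1, channels + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

service_rate = 10.0   # requests/s a single channel can serve (hypothetical)
channels = 8          # hypothetical link/server capacity
for arrival_rate in (20, 40, 60, 80):
    load = arrival_rate / service_rate          # offered load in Erlangs
    print(f"lambda={arrival_rate:>3} req/s -> blocking={erlang_b(load, channels):.2%}")
```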
The healthcare environment is one of the applications that require real-time monitoring so that data can be processed immediately. Fog computing works in a distributed manner and offers connected devices for processing data with low latency compared to the cloud model. Load balancing is an important concept in fog computing that avoids situations of overloaded and underloaded nodes. Many Quality of Service (QoS) metrics, such as cost, response time, throughput, resource utilization, and performance, can be improved by load balancing. In this paper, we proposed a mechanism called the Remind Weighted Round Robin (RWRR) algorithm to enhance QoS; tasks are assigned by the algorithm to the appropriate node based on its capabilities. The mechanism is applied in order to improve the system in the healthcare environment. Results demonstrate that it enhances the overall performance by 20.05% and the average response time by 120.25 ms when compared with related work.
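A minimal sketch of how a weighted round-robin dispatcher can assign tasks in proportion to node capabilities. The node names, weights, and the smooth-WRR selection rule are illustrative assumptions, not the paper's RWRR implementation.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    weight: int        # proxy for node capability (CPU/memory), assumed here
    current: int = 0   # running counter used by the scheduler

def pick_node(nodes):
    """Smooth weighted round robin: high-weight nodes are chosen more often
    without ever starving the low-weight ones."""
    total = sum(n.weight for n in nodes)
    for n in nodes:
        n.current += n.weight
    chosen = max(nodes, key=lambda n: n.current)
    chosen.current -= total
    return chosen

# Hypothetical fog layer with capabilities expressed as integer weights.
nodes = [FogNode("fog-1", 5), FogNode("fog-2", 3), FogNode("fog-3", 1)]
print([pick_node(nodes).name for _ in range(9)])   # tasks spread roughly 5:3:1
```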
2022 IEEE 11th International Conference on Communication Systems and Network Technologies (CSNT), Journal Year: 2024, Volume and Issue: unknown, P. 538 - 542, Published: April 6, 2024
Fog computing has become the primary paradigm for IoT applications as it meets the low-latency needs of a growing number of applications. However, fog servers can get overwhelmed due to high demand for fog resources in several scenarios. Fog computing complements cloud computing and only processes user requests near them. Distributing tasks evenly across all nodes of the fog layer helps achieve optimal task processing. Load balancing in a fog-cloud environment also aids in diminishing energy use. In this article, an architecture named "EcoFogLoad Architecture" has been proposed to balance the workload among the nodes of the fog layer. Along with this, an "Energy-Efficient Workload Optimization (EEWO)" algorithm is proposed to optimize resource use at the fog layer in terms of cost, time delay, and energy consumption. The iFogSim simulator is used to execute the proposed approach and obtain the experimental results. The results of the proposed approach are compared with those of other existing algorithms. The proposed architecture facilitates better resource utilization, reducing latency and improving service quality. The article concludes by presenting potential avenues for future research.
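To make the idea of energy-aware workload placement concrete, here is a minimal greedy sketch that assigns each task to the feasible fog node with the lowest energy cost. The node parameters, MIPS-based sizing, and the energy model are assumptions for illustration, not the EEWO algorithm from the article.

```python
# Illustrative greedy heuristic only; parameters and energy model are assumed.
nodes = [
    {"name": "fog-1", "capacity_mips": 4000, "used_mips": 0, "watts_per_mips": 0.015},
    {"name": "fog-2", "capacity_mips": 2500, "used_mips": 0, "watts_per_mips": 0.010},
    {"name": "fog-3", "capacity_mips": 1500, "used_mips": 0, "watts_per_mips": 0.008},
]
tasks = [900, 400, 1200, 300, 700]    # task sizes in MIPS (hypothetical)

def place(task_mips):
    """Pick the feasible node with the lowest energy cost for this task."""
    feasible = [n for n in nodes if n["used_mips"] + task_mips <= n["capacity_mips"]]
    if not feasible:
        return None                   # no fog capacity left: offload to the cloud
    best = min(feasible, key=lambda n: task_mips * n["watts_per_mips"])
    best["used_mips"] += task_mips
    return best["name"]

for t in tasks:
    print(f"task({t} MIPS) -> {place(t)}")
```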
This study aims to investigate the feasibility concerns of metaheuristic algorithms involving a hybridisation among GPSO, Adventure, ACO-GA and FDE for resource allocation in the fog computing context. The evaluations were based on convergence speed, solution quality, flexibility, and robustness in extensive testing. A comparative analysis of their execution against related works in the field was performed. It is inferred that the hybrid demonstrates excellent convergence speed; the code needs 450 cycles to achieve this and reaches a high solution quality, with a fitness of zero. GPSO and FDE are closely proximate, showing competitive convergence along with design optimization. The Adventure programs show slightly weaker convergence but present dynamic exploration-exploitation prospects. In the case of adaptability, it trumps with a score of 92%, highlighting its resilience on larger datasets. The stability analysis reveals little deviation across folds, indicating stability. The findings emphasize the nuanced qualities of each algorithm, giving valuable insights to practitioners in fog computing. The results contribute to the ongoing discussion on resource allocation, directing future research towards the refinement and application of hybrid algorithms in dynamic situations.
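For context on the kind of iterative search being compared (convergence measured in cycles toward a fitness of zero), here is a minimal canonical particle swarm optimization loop on a toy objective. The coefficients, swarm size, and sphere test function are assumptions; this is not the GPSO/ACO-GA/FDE hybrid evaluated in the study.

```python
import random

def sphere(x):
    """Toy objective with a global minimum (fitness 0) at the origin."""
    return sum(v * v for v in x)

def pso(dim=5, swarm=20, iters=450, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f

print(f"best fitness after 450 iterations: {pso():.6f}")
```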
Computers, Materials & Continua, Journal Year: 2024, Volume and Issue: 80(2), P. 2557 - 2578, Published: Jan. 1, 2024
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centers, it takes more time to run such applications. Therefore, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog environments. Effective task scheduling, VM migration, and allocation altogether optimize the use of resources across different nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chances of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) VM allocation using a fractional selectivity approach, and (ii) task scheduling by proposing an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance both global exploration and local exploitation. This enables a wide range of solutions and leads to a lower total cost and makespan in comparison to other traditional optimization algorithms. The algorithm's performance is analyzed using six evaluation measures, including Load Balancing Level (LBL), Average Resource Utilization (ARU), cost, and response time. In relation to conventional algorithms, FSCPSO achieves a higher LBL of 39.12%, an ARU of 58.15%, a cost of 1175, and a makespan of 85.87 ms, particularly when evaluated with 50.
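A small sketch of the two ingredients named in FSCPSO: a chaotic (logistic-map) sequence for varying a swarm coefficient, and a fitness-sharing penalty that discourages crowding. The sharing radius, the coefficient being perturbed, and the way they would be wired into the PSO update are assumptions rather than the paper's exact formulation.

```python
import math

def logistic_map(x, r=4.0):
    """Chaotic sequence on (0, 1), often used to perturb swarm coefficients."""
    return r * x * (1.0 - x)

def shared_fitness(raw, positions, i, sigma_share=1.0, alpha=1.0):
    """Fitness sharing for minimization: inflate a particle's fitness when many
    neighbours lie within sigma_share, nudging the swarm to spread out."""
    niche_count = 0.0
    for p in positions:
        d = math.dist(positions[i], p)
        if d < sigma_share:
            niche_count += 1.0 - (d / sigma_share) ** alpha
    return raw[i] * niche_count   # niche_count >= 1 because self-distance is 0

# Chaotic schedule for an inertia-like coefficient (three example iterations).
x = 0.37
for t in range(3):
    x = logistic_map(x)
    print(f"iteration {t}: chaotic factor = {x:.3f}")

# Two crowded particles get penalised; the isolated one keeps its raw fitness.
positions = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0)]
raw = [1.0, 1.1, 5.0]
print([round(shared_fitness(raw, positions, i), 3) for i in range(3)])
```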
Concurrency and Computation Practice and Experience, Journal Year: 2024, Volume and Issue: unknown, Published: Sept. 4, 2024
Summary
Cloud computing is commonly utilized in remote contexts to handle user demands for resources and services. Each assignment has unique processing needs that are determined by the time it takes to complete. However, if load balancing is not properly managed, the effectiveness of the system may suffer dramatically. Consequently, cloud service providers have to emphasize rapid and precise load balancing as well as proper resource supply. This paper proposes a novel enhanced deep network-based predictor for load balancing in a cloud-fog environment. Prior to balancing, the workload is predicted using a deep network called the Multiple Layers Assisted LSTM (MLA-LSTM) model, which considers the capacity of the virtual machine (VM) and the task as input and predicts the target label as underloaded, overloaded, or equally balanced. According to this prediction, optimal load balancing is performed through a hybrid optimization algorithm named the Osprey Pelican Optimization Algorithm (OAPOA), while taking into account several parameters such as makespan, execution cost, energy consumption, and server load. Additionally, a process known as task migration is carried out, in which tasks assigned to overloaded machines are moved to underloaded machines. This migration is applied optimally via the OAPOA strategy under consideration of constraints including cost and efficiency.
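To make the prediction step concrete, here is a minimal sketch of a stacked-LSTM classifier that maps a short window of VM capacity and task features to an underloaded, balanced, or overloaded label. The layer sizes, feature layout, window length, and synthetic data are assumptions, not the MLA-LSTM architecture or the OAPOA stage.

```python
# Minimal sketch of an LSTM-based workload-state classifier (assumed design,
# not the MLA-LSTM model from the paper).
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES = 10, 2   # e.g., (vm_utilization, task_demand) per time step
NUM_CLASSES = 3               # 0 = underloaded, 1 = balanced, 2 = overloaded

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # stacked LSTM layers
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; real usage would feed measured VM capacity and task traces.
X = np.random.rand(256, TIMESTEPS, FEATURES).astype("float32")
y = (X.mean(axis=(1, 2)) * NUM_CLASSES).astype("int64").clip(0, NUM_CLASSES - 1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

state = int(model.predict(X[:1], verbose=0).argmax(axis=1)[0])
print(["underloaded", "balanced", "overloaded"][state])
```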