Fog computing has become an attractive method for different IoT (Internet of Things) applications that require low latency and location awareness. By bringing computational power to the edge of a network, nearer to the traffic generators, it works as a perfect complement to cloud computing. Though there are many advantages to fog computing, the resources of fog nodes (CPU processing capacity, bandwidth, memory, backup) are limited, so a framework for combating these limitations is highly desired. In this work, we formulate an optimization model for a cooperative fog computing environment dealing with dynamic traffic. We analyzed how arrival rates impact bandwidth costs, link utilization, and server resource utilization. By adopting the proposed techniques, the utilization of the fog layer remains higher than that of cloud resources even at high arrival rates, as shown in this paper. We also figured out that blocking stays within an acceptable range (0-15%). Finally, we identified the driving factors associated with blocking, in our case shortages of network resources, which were responsible for generating blocking in our network.
Journal of Electrical and Computer Engineering, Journal Year: 2023, Volume and Issue: 2023, P. 1 - 11. Published: Sept. 28, 2023.
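The abstract above does not spell out its optimization model, but the relationship it studies between arrival rates, utilization, and blocking can be illustrated with the classic Erlang B recursion for a capacity-limited node. The sketch below is a generic illustration, not the paper's model; the node capacity and offered loads are hypothetical.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for an M/M/c/c loss system (Erlang B).

    offered_load = arrival_rate / service_rate, in Erlangs.
    Uses the standard recursion for numerical stability:
    B(0) = 1, B(c) = A*B(c-1) / (c + A*B(c-1)).
    """
    b = 1.0
    for c in range(1, servers + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

# Hypothetical fog-node capacity and arrival loads (not from the paper):
for load in [2.0, 4.0, 8.0, 12.0]:
    p_block = erlang_b(servers=10, offered_load=load)
    print(f"offered load {load:5.1f} Erlangs -> blocking {p_block:.2%}")
```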
The development of the fifth generation (5G) and sixth generation (6G) wireless networks has gained widespread importance in all aspects of life due to their significantly higher speeds, extraordinarily low latency, and ubiquitous availability. Owing to the number of users, components, and services in our everyday lives, we must secure these networks. With such a range of devices and service types present in the 5G ecosystem, security issues are now much more prevalent. Security solutions, where not yet implemented, must already be envisioned in order to deal with attacks on the numerous services, cutting-edge technologies, and user information available over the network. This research proposes a dual integrated neural network (DINN) for secure data transmission in such networks. DINN comprises two neural networks based on sparse and dense dimensions. It is designed to detect the presence of any deep learning-based attack at the physical layer. DINN is evaluated against various machine learning attacks such as the basic iterative method (BIM), the momentum iterative method (MIM), projected gradient descent (PGD), and the C&W attack; a comparison is carried out between existing models and DINN in terms of attack success rate and MSE. Performance analysis suggests that DINN holds a good level of robustness against the above attacks.
ACTA IMEKO, Journal Year: 2024, Volume and Issue: 13(2), P. 1 - 15. Published: May 21, 2024.
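As a point of reference for the attacks named above, here is a minimal sketch of the basic iterative method (BIM), i.e., iterated FGSM with L-infinity clipping, in PyTorch. The stand-in classifier and the [0, 1] input range are assumptions for illustration; the DINN architecture itself is not reproduced here.

```python
import torch
import torch.nn as nn

def bim_attack(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Basic Iterative Method: repeated signed-gradient steps, with the
    perturbation clipped to an L-infinity ball of radius eps."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)  # assumes inputs scaled to [0, 1]
    return x_adv.detach()

# Toy usage with a stand-in linear classifier and dummy data:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = bim_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays bounded by eps
```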
Recently, there has been an increase in concerns about the accessibility, security, and reliability of aviation engines. To prevent engine failures, which can be quite serious, it is important to take effective measures. The objective is to create a deep learning model that can accurately predict an aircraft engine's viability and remaining useful life, using meta-heuristic techniques to improve its performance. These techniques discover the optimal hyperparameters and architecture for the model. This will help minimize downtime and maintenance costs for a fleet by handling complex data, such as sensor readings and past maintenance records, while also adapting to changing conditions over time. Since training such models is computationally intensive, meta-heuristic methods also improve their robustness. The aim is to enhance performance, increasing the accuracy rate and reducing the mean squared error losses of the multiple models used for prediction, by hybridizing them with metaheuristic algorithms.
International Journal of Advanced Computer Science and Applications, Journal Year: 2024, Volume and Issue: 15(2). Published: Jan. 1, 2024.
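The abstract does not name a specific metaheuristic, so the following is a minimal evolutionary hyperparameter search of the general kind it describes. The search space, the parameter names (lstm_units, dropout, etc.), and the stand-in fitness function are all hypothetical; in practice the fitness would train the model and return validation MSE.

```python
import random

# Hypothetical search space for an RUL regressor (illustrative, not the paper's):
SPACE = {
    "lstm_units": [32, 64, 128, 256],
    "layers": [1, 2, 3],
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "dropout": [0.0, 0.2, 0.4],
}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(cfg):
    # Re-sample one randomly chosen hyperparameter.
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def evolve(fitness, pop_size=10, generations=5):
    """Tiny evolutionary loop: keep the best half each generation and
    refill the population with mutated copies of the survivors."""
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)  # lower fitness (validation MSE) is better
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

# Stand-in fitness: a deterministic proxy instead of real model training.
def fake_mse(cfg):
    return abs(cfg["lstm_units"] - 128) / 128 + cfg["dropout"] + 0.01 * cfg["layers"]

print("best configuration:", evolve(fake_mse))
```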
In this research paper, we delve into the innovative realm of optimizing load balancing in Data Center Networks (DCNs) by leveraging the capabilities of Software-Defined Networking (SDN) and machine learning algorithms. Traditional DCN architectures face significant challenges in handling unpredictable traffic patterns, leading to bottlenecks, network congestion, and suboptimal utilization of resources. Our study proposes a novel framework that integrates the flexibility and programmability of SDN with the predictive and analytical prowess of machine learning. We employed a multi-layered methodology, initially constructing a virtualized environment to simulate real-world scenarios, followed by the implementation of SDN controllers to instill adaptiveness and programmability. Subsequently, we integrated machine learning models, training them on a substantial dataset encompassing diverse traffic patterns and network conditions. The crux of our approach was the application of these trained models to anticipate congestion and dynamically adjust traffic flows, ensuring efficient load distribution among servers. A comparative analysis was conducted against prevailing load balancing methods, revealing our model's superiority in terms of latency reduction, enhanced throughput, and improved resource allocation. Furthermore, the study illuminates the potential of machine learning's self-learning mechanism to foresee and adapt to future network states or exigencies, marking an advancement from reactive to proactive network management. This convergence of SDN and machine learning, as demonstrated, ushers in a new era of intelligent, scalable, and highly reliable DCNs, demanding further exploration and investment for future-ready data centers.
BIO Web of Conferences, Journal Year: 2024, Volume and Issue: 97, P. 00036 - 00036. Published: Jan. 1, 2024.
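A minimal sketch of the predict-then-reroute idea described above: a classifier trained on per-link statistics estimates congestion risk, and the controller prefers the candidate path whose worst link is least likely to congest. The features, the labeling rule, and the random-forest choice are assumptions for illustration, not the paper's pipeline; in a real SDN deployment the statistics would come from controller counters rather than simulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-link features: [utilization, queue_depth, flow_count].
X = rng.random((1000, 3)) * [1.0, 100.0, 50.0]
# Simplifying labeling assumption: a link is "congested" when both
# utilization and queue depth are high (not the paper's rule).
y = ((X[:, 0] > 0.8) & (X[:, 1] > 60)).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def pick_path(paths):
    """Choose the path whose worst link has the lowest predicted
    congestion probability (bottleneck-aware selection)."""
    def worst_link_risk(path):
        return clf.predict_proba(np.array(path))[:, 1].max()
    return min(paths, key=worst_link_risk)

path_a = [[0.9, 80, 40], [0.3, 10, 5]]   # one heavily loaded link
path_b = [[0.5, 30, 20], [0.6, 40, 25]]  # moderately loaded throughout
print("chosen path:", pick_path([path_a, path_b]))
```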
With the rapid advance of the Internet of Things (IoT), technology has entered a new era. It is changing the way smart devices relate to such fields as healthcare, smart cities, and transport. However, this expansion also brings challenges in data processing, latency, and QoS. This paper considers fog computing as a key solution for addressing these problems, with special emphasis on the function of load balancing in improving quality of service in IoT environments. In addition, we study the relationship between cloud and fog computing, highlighting why the latter acts as an intermediate layer that can not only reduce delays but also achieve efficient processing by moving computational resources closer to where they are needed. Its essence is to analyze various load balancing algorithms and their impact on fog environments and the performance of applications. Static and dynamic strategies have been tested in terms of throughput, energy efficiency, and overall system reliability. Ultimately, dynamic methods fare better than static ones in managing such scenarios, since they are sensitive to workload changes in the system. The paper discusses state-of-the-art solutions, secure and sustainable techniques for Edge Data Centers (EDCs), and how load balancing manages resource allocation and scheduling. We aim to provide a general overview of important recent developments in the literature while pointing out limitations and improvements that might be made. To this end, we set out to understand and describe load balancing and its importance in improving QoS, and we hope that an understanding of these technologies will lead us towards more resilient IoT systems.
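The survey's central empirical claim, that dynamic strategies outperform static ones under variable workloads, can be demonstrated with a toy comparison of static round-robin against dynamic least-loaded dispatch. The task-size distribution and node count below are arbitrary.

```python
import random

def round_robin(tasks, n_nodes):
    """Static policy: assignment ignores the nodes' current load."""
    loads = [0.0] * n_nodes
    for i, t in enumerate(tasks):
        loads[i % n_nodes] += t
    return loads

def least_loaded(tasks, n_nodes):
    """Dynamic policy: each task goes to the currently lightest node."""
    loads = [0.0] * n_nodes
    for t in tasks:
        loads[loads.index(min(loads))] += t
    return loads

random.seed(1)
tasks = [random.expovariate(1.0) for _ in range(1000)]  # variable task sizes
for name, policy in [("static round-robin", round_robin),
                     ("dynamic least-loaded", least_loaded)]:
    loads = policy(tasks, n_nodes=4)
    # A max/min ratio near 1.0 means the load is evenly spread.
    print(f"{name}: max/min node load = {max(loads) / min(loads):.3f}")
```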
The healthcare environment is one of the applications that require real-time monitoring so that data can be processed immediately. Fog computing works in a distributed manner and offers connected devices the ability to process data with low latency compared to the cloud model. Load balancing is an important concern in fog computing, as it avoids situations of overload and underload at the nodes. Many Quality of Service (QoS) metrics, such as cost, response time, throughput, resource utilization, and performance, can be improved by load balancing. In this paper, we proposed a mechanism called the Remind Weighted Round Robin (RWRR) algorithm to enhance QoS: tasks will be assigned to the appropriate node, based on its capabilities, by the proposed algorithm. It is applied in order to balance the system load in a fog environment. Results demonstrate that it enhances overall performance by 20.05%, with an average response time of 120.25 ms, when compared with related work.
Advances in systems analysis, software engineering, and high performance computing book series, Journal Year: 2024, Volume and Issue: unknown, P. 73 - 89. Published: March 4, 2024.
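The Remind variant is specific to the paper, but its basis, weighted round robin, gives a node with weight w a proportional share of the dispatched tasks. A minimal sketch, with hypothetical fog-node names and weights:

```python
from itertools import cycle

def weighted_round_robin(nodes):
    """Classic weighted round robin: a node with weight w appears w times
    per cycle, so more capable nodes receive proportionally more tasks."""
    schedule = [name for name, weight in nodes for _ in range(weight)]
    return cycle(schedule)

# Hypothetical fog nodes weighted by capability (illustrative values):
dispatcher = weighted_round_robin([("fog-1", 3), ("fog-2", 2), ("fog-3", 1)])
for task_id in range(6):
    print(f"task {task_id} -> {next(dispatcher)}")
```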
In the evolving landscape of distributed computing, the integration of edge devices with traditional cloud infrastructures necessitates innovative approaches to harness their combined computational prowess. Osmotic computing, a paradigm that promises such integration, has transitioned from theoretical frameworks to tangible implementations. This chapter provides a comprehensive examination of osmotic computing, tracing its journey from conceptual underpinnings to current real-world applications. Central to osmotic computing is the deployment of microservices (modular, autonomous units of computation) strategically positioned across the edge-cloud continuum based on immediate needs and resource availabilities. The review elucidates its foundational principles and distinguishing characteristics, the challenges encountered in practical adoption, and its demonstrable benefits in real-world scenarios.
Concurrency and Computation Practice and Experience, Journal Year: 2024, Volume and Issue: unknown. Published: Sept. 4, 2024.
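As a rough illustration of the placement decision at the heart of osmotic computing (positioning a microservice along the edge-cloud continuum according to resource availability and need), here is a greedy feasibility-then-latency sketch. The Site fields, the latency budget, and the concrete values are hypothetical, not drawn from the chapter.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cpu: float    # available vCPUs at this site
    latency_ms: float  # round-trip latency to the data source

def place(service_cpu: float, latency_budget_ms: float, sites: list[Site]) -> Site:
    """Among sites that satisfy the service's CPU demand and latency
    budget, prefer the one with the lowest latency."""
    feasible = [s for s in sites
                if s.free_cpu >= service_cpu and s.latency_ms <= latency_budget_ms]
    if not feasible:
        raise RuntimeError("no feasible site; service must wait or be split")
    return min(feasible, key=lambda s: s.latency_ms)

sites = [Site("edge-gw", free_cpu=1.5, latency_ms=5),
         Site("micro-dc", free_cpu=8.0, latency_ms=20),
         Site("cloud", free_cpu=64.0, latency_ms=80)]
print(place(service_cpu=2.0, latency_budget_ms=50, sites=sites).name)  # micro-dc
```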
Summary
Cloud computing is commonly utilized in remote contexts to handle user demands for resources and services. Each assignment has unique processing needs that are determined by the time it takes to complete. However, if load balancing is not properly managed, the effectiveness of the cloud may suffer dramatically. Consequently, cloud service providers have to emphasize rapid and precise load balancing as well as proper resource supply. This paper proposes a novel enhanced deep network-based predictor for cloud-fog environments. First, the workload is predicted using a deep network called the Multiple Layers Assisted LSTM (MLA-LSTM) model, which takes the capacity of the virtual machine (VM) and the task as input and predicts the target label as underload, overload, or equally balanced. According to this prediction, optimal load balancing is performed through a hybrid optimization algorithm named the Osprey Pelican Optimization Algorithm (OAPOA), while taking into account several parameters such as makespan, execution cost, resource consumption, and server load. Additionally, a process known as load migration is carried out, in which tasks assigned to overloaded machines are moved to underloaded machines. This migration is applied optimally via the OAPOA strategy under consideration of constraints including migration cost and migration efficiency.
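To make the prediction step concrete, below is a sketch of an LSTM classifier in the spirit of what the abstract describes: a window of per-VM features in, one of three load states out. The layer sizes, window length, and feature choice are assumptions; the actual MLA-LSTM architecture and the OAPOA allocation step are not reproduced here.

```python
import torch
import torch.nn as nn

class LoadStatePredictor(nn.Module):
    """Sketch of an LSTM-based load classifier: a time window of
    (VM capacity, task demand) features maps to one of three labels,
    underloaded / balanced / overloaded."""
    def __init__(self, n_features=2, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last time step

model = LoadStatePredictor()
window = torch.rand(8, 30, 2)            # 8 VMs, 30 time steps, 2 features
logits = model(window)
print(logits.argmax(dim=1))              # predicted load state per VM
```

Given such per-VM labels, a scheduler can then migrate tasks from machines predicted as overloaded to those predicted as underloaded, which is the role the paper assigns to OAPOA.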