IET Networks,
Journal year: 2024,
Issue 13(4), pp. 301-312
Published: March 12, 2024
Abstract
Industrial IoT (IIoT) applications are widely used in multiple use cases to automate the industrial environment. Industry 4.0 presents challenges in numerous areas, including heterogeneous data, efficient data sensing and collection, real-time processing, and higher request arrival rates, due to the massive amount of data. Building a time-sensitive network that supports voluminous dynamic traffic from IIoT devices is complex. Therefore, the authors provide insights into such networks and propose a strategy for enhanced network management. A multivariate forecasting model that adapts Multivariate Singular Spectrum Analysis is employed in an SDN-based IIoT network. The proposed method considers flow parameters, such as packets sent and received, bytes, source rate, round trip time, jitter, loss rate, and duration, to predict future flows. The experimental results show that the model can forecast effectively by contemplating every possible variation observed in the samples of average load, delay, and inter-packet sending time, with improved accuracy. The forecast shows reduced error estimation when compared with existing methods, with a Mean Absolute Percentage Error of 1.64%, a Mean Squared Error of 11.99, a Root Mean Squared Error of 3.46, and a Mean Absolute Error of 2.63.
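To make the forecasting idea concrete, the sketch below illustrates the core reconstruction stage of multivariate singular spectrum analysis on two correlated flow metrics. It is a minimal NumPy illustration, not the authors' implementation; the window length, rank, and toy RTT/jitter traces are assumptions for demonstration only.

```python
# Minimal sketch of multivariate SSA (MSSA) reconstruction for flow
# metrics, assuming NumPy only. Window length, rank, and data are
# illustrative; this is not the paper's implementation.
import numpy as np

def hankel(series, L):
    """Trajectory (Hankel) matrix of one series with window length L."""
    N = len(series)
    K = N - L + 1
    return np.column_stack([series[i:i + L] for i in range(K)])

def mssa_reconstruct(X, L=24, rank=4):
    """X: (n_series, N) array of flow metrics (e.g. RTT, jitter).
    Stacks per-series Hankel matrices, truncates the SVD, and
    diagonal-averages back into smoothed series."""
    n_series, N = X.shape
    H = np.vstack([hankel(X[i], L) for i in range(n_series)])  # (n_series*L, K)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]               # rank-r approximation
    out = np.empty_like(X, dtype=float)
    for i in range(n_series):
        block = H_low[i * L:(i + 1) * L]                       # this series' block
        # Diagonal averaging (Hankelization) recovers a 1-D series.
        for t in range(N):
            anti = [block[a, t - a] for a in range(max(0, t - N + L), min(L, t + 1))]
            out[i, t] = np.mean(anti)
    return out

# Toy usage: two correlated flow metrics with trend and daily seasonality.
t = np.arange(200)
rtt = 50 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 24) + np.random.randn(200)
jitter = 2 + 0.5 * np.sin(2 * np.pi * t / 24 + 0.3) + 0.2 * np.random.randn(200)
smooth = mssa_reconstruct(np.vstack([rtt, jitter]))
print(smooth.shape)  # (2, 200): denoised signals a forecaster can extrapolate
```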
Journal of Cloud Computing: Advances, Systems and Applications,
Journal year: 2023,
Issue 12(1)
Published: June 26, 2023
Abstract
Efficient resource management approaches have become a fundamental challenge for distributed systems, especially dynamic environment systems such as cloud computing data centers. These approaches aim at load-balancing or minimizing power consumption. Due to the highly dynamic nature of workloads, traditional time series and machine learning models fail to achieve accurate predictions. In this paper, we propose novel hybrid VTGAN models. Our proposed models aim not only at predicting future workloads but also the workload trend (i.e., the upward or downward direction of the workload). Trend classification could be less complex during the decision-making process in resource management approaches. Also, we study the effect of changing the sliding window size and the number of prediction steps. In addition, we investigate the impact of enhancing the features used in training using technical indicators, Fourier transforms, and wavelet transforms. We validate our models using a real dataset. The results show that our models outperform deep learning models, such as LSTM/GRU and CNN-LSTM/GRU, concerning trend classification. Our model records an accuracy ranging from 95.4% to 96.6%.
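The feature-enhancement step described above can be pictured with a short sketch: slide a window over a workload trace, then concatenate each raw window with a moving-average indicator and leading Fourier magnitudes. The window size, horizon, and feature choices below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of sliding-window construction plus feature enhancement with a
# technical indicator (moving average) and Fourier features. Parameters
# are illustrative assumptions.
import numpy as np

def make_windows(series, window=32, horizon=4):
    """Slide a window over the series; label = mean of the next `horizon`
    points, trend = 1 if that mean is above the window's last value."""
    X, y, trend = [], [], []
    for i in range(len(series) - window - horizon + 1):
        w = series[i:i + window]
        future = series[i + window:i + window + horizon].mean()
        X.append(w); y.append(future); trend.append(int(future > w[-1]))
    return np.array(X), np.array(y), np.array(trend)

def enhance(X, n_freq=8, ma_span=5):
    """Concatenate each raw window with (a) a simple moving average
    (a basic technical indicator) and (b) leading FFT magnitudes."""
    kernel = np.ones(ma_span) / ma_span
    ma = np.apply_along_axis(lambda w: np.convolve(w, kernel, mode="valid"), 1, X)
    fft_mag = np.abs(np.fft.rfft(X, axis=1))[:, :n_freq]
    return np.hstack([X, ma, fft_mag])

cpu = np.random.rand(500)            # stand-in for a real workload trace
X, y, trend = make_windows(cpu)
X_aug = enhance(X)
print(X.shape, X_aug.shape)          # features grow from 32 to 32+28+8
```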
IEEE Access,
Journal year: 2024,
Issue 12, pp. 55248-55263
Published: Jan. 1, 2024
Cloud computing has become the cornerstone of modern technology, propelling industries to unprecedented heights with its remarkable and recent advances. However, a fundamental challenge for cloud service providers is real-time workload prediction and management for optimal resource allocation. Cloud workloads are characterized by their heterogeneous, unpredictable, and fluctuating nature, making this task even more challenging. As a result of the achievements of deep learning (DL) algorithms across diverse fields, scholars have begun to embrace this approach for addressing such challenges, and it has become the de facto standard for workload prediction. Unfortunately, DL has been widely recognized for its vulnerability to adversarial examples, which poses a significant threat to DL-based forecasting models. In this study, we utilize established white-box attack generation methods from the field of computer vision to construct adversarial examples for four cutting-edge regression models: Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and 1D Convolutional Neural Network (1D-CNN), together with attention-based variants. We evaluate our study on three benchmark datasets: Google trace, Alibaba trace, and Bitbrain. The findings of our analysis unequivocally indicate that these models are highly vulnerable to adversarial attacks. To the best of our knowledge, this is the first work to conduct systematic research exploring adversarial attacks in the data center context, highlighting the inherent hazards to both the security and cost-effectiveness of data centers. By raising awareness of these vulnerabilities, we advocate for the urgent development of robust defensive mechanisms to enhance security in this constantly evolving technical landscape.
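As a concrete picture of a white-box attack on a forecaster, the sketch below adapts one well-known method, the fast gradient sign method (FGSM), to a regression setting with MSE loss. The tiny LSTM model, epsilon, and random data are illustrative assumptions; the study's exact attack configurations are not reproduced here.

```python
# Minimal FGSM-style white-box attack on a regression forecaster,
# assuming PyTorch. Architecture, epsilon, and data are illustrative.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    """Stand-in LSTM regressor: maps a window of usage to the next value."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # (batch, 1)

def fgsm_regression(model, x, y, eps=0.05):
    """One-step FGSM: perturb the input in the direction that maximizes
    the MSE between the forecast and the ground truth."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = TinyForecaster()
x = torch.rand(8, 32, 1)                     # 8 windows of 32 usage samples
y = torch.rand(8, 1)
x_adv = fgsm_regression(model, x, y)
print((model(x) - model(x_adv)).abs().mean())  # forecast shift under attack
```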
IEEE Transactions on Cloud Computing,
Journal year: 2024,
Issue 12(2), pp. 789-799
Published: April 1, 2024
Forecasting workloads and responding promptly with resource scaling and migration is critical to optimizing operations and enhancing resource management in cloud environments. However, the diverse and dynamic nature of devices within cloud environments complicates workload forecasting. These challenges often lead to service level agreement violations or inefficient resource usage. Hence, this paper proposes an Enhanced Long-Term Cloud Workload Forecasting (E-LCWF) framework designed specifically for efficient forecasting in these heterogeneous environments. The E-LCWF framework processes individual workloads as multivariate time series and enhances model performance through anomaly detection and handling. Additionally, it employs an error-based ensemble approach, using transformer-based Long-Term Time Series Forecasting (LTSF) models and linear models, each of which has demonstrated exceptional performance in LTSF. Experimental results obtained using virtual machine data from real-world information systems and manufacturing execution systems show that the E-LCWF framework outperforms state-of-the-art models in forecasting accuracy.
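The error-based ensemble idea can be sketched in a few lines: weight each member model's forecast by the inverse of its recent validation error, so the currently more accurate model dominates. The member forecasts and error values below are toy stand-ins, not the paper's transformer or linear LTSF models.

```python
# Sketch of an error-based forecast ensemble: lower recent error means
# higher weight. Member models and numbers are illustrative stand-ins.
import numpy as np

def inverse_error_weights(recent_errors, eps=1e-8):
    """Lower recent error -> higher weight; weights sum to 1."""
    inv = 1.0 / (np.asarray(recent_errors) + eps)
    return inv / inv.sum()

def ensemble_forecast(forecasts, recent_errors):
    """forecasts: (n_models, horizon); recent_errors: per-model MAE on a
    rolling validation window."""
    w = inverse_error_weights(recent_errors)
    return w @ np.asarray(forecasts)

# Toy usage: two members, one recently more accurate than the other.
f_transformer = np.array([0.61, 0.64, 0.66])   # stand-in forecast
f_linear      = np.array([0.55, 0.56, 0.58])
errors = [0.04, 0.12]                           # rolling MAE per model
print(ensemble_forecast([f_transformer, f_linear], errors))
```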
Journal of King Saud University - Computer and Information Sciences,
Journal year: 2024,
Issue 36(2), Article 101924
Published: Jan. 21, 2024
Edge computing has gained widespread adoption for time-sensitive applications by offloading a portion of IoT system workloads from the cloud to edge nodes. However, the limited resources of edge devices hinder service deployment, making auto-scaling crucial for improving resource utilization in response to dynamic workloads. Recent solutions aim to make auto-scaling proactive by predicting future workloads, overcoming the limitations of reactive approaches. These solutions often rely on time-series data analysis and machine learning techniques, especially Long Short-Term Memory (LSTM), thanks to its accuracy and prediction speed. However, existing solutions suffer from oscillation issues, even when using a cooling-down strategy. Consequently, their efficiency depends on the model's accuracy and the degree of oscillation in the scaling actions. This paper proposes a novel approach to improve prediction accuracy and deal with oscillation issues. Our approach involves an automatic featurization phase that extracts features from the workload data, improving the prediction's accuracy. The extracted features also serve as a grid for controlling the generated scaling actions. The experimental results demonstrate the effectiveness of our approach in improving prediction accuracy, mitigating oscillation phenomena, and enhancing overall performance.
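The grid idea from this abstract can be pictured as snapping noisy workload predictions onto a small set of levels derived from the data, so minor fluctuations do not trigger scale-up/scale-down oscillation. In the sketch below, the level extraction (quantiles standing in for the featurization phase) and the capacity arithmetic are illustrative assumptions.

```python
# Sketch of grid-based scaling control: snap predictions to data-derived
# levels before sizing replicas. Quantile levels and capacity values are
# illustrative assumptions, not the paper's featurization method.
import numpy as np

def extract_grid(history, n_levels=5):
    """Featurization stand-in: derive representative workload levels
    (here, quantiles of recent history) to act as the control grid."""
    qs = np.linspace(0, 1, n_levels)
    return np.quantile(history, qs)

def replicas_for(prediction, grid, capacity_per_replica=100.0):
    """Snap the prediction to the nearest grid level, then size replicas."""
    level = grid[np.argmin(np.abs(grid - prediction))]
    return int(np.ceil(level / capacity_per_replica))

history = 300 + 50 * np.random.randn(1000)     # requests/s, toy trace
grid = extract_grid(history)
for pred in [285.0, 292.0, 310.0]:             # jittery predictions...
    print(replicas_for(pred, grid))            # ...map to a stable count
```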
International Journal of Advanced Engineering, Management and Science,
Journal year: 2023,
Issue 9(10), pp. 09-26
Published: Jan. 1, 2023
Deep Neural Networks (DNNs) are currently used in a wide range of critical real-world applications as machine learning technology. Due to the high number of parameters that make up DNNs, training and prediction tasks require millions of floating-point operations (FLOPs). Implementing DNNs in a cloud computing system with centralized servers and data storage sub-systems equipped with high-speed, high-performance computing capabilities is a more effective strategy. This research presents an updated analysis of the most recent DNN deployments in cloud computing. It highlights the necessity of cloud computing while presenting and debating numerous DNN complexity issues related to various architectures. Additionally, it goes over their intricacies and offers a thorough review of several cloud platforms for DNN deployment. It also examines DNNs already running on the cloud to highlight the advantages of using cloud-based DNNs. The study discusses the difficulties associated with implementing DNNs in cloud systems and provides suggestions for improving both current and future deployments.
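The FLOPs claim above follows from simple arithmetic: a dense layer with n_in inputs and n_out outputs costs roughly 2 * n_in * n_out FLOPs per forward pass (one multiply and one add per weight). The layer sizes in the sketch below are illustrative assumptions.

```python
# Back-of-the-envelope FLOPs count for a small MLP; layer sizes are
# illustrative (roughly MNIST-sized), not tied to any specific model.
def dense_flops(n_in, n_out):
    return 2 * n_in * n_out      # one multiply-accumulate per weight

layers = [(784, 512), (512, 256), (256, 10)]
total = sum(dense_flops(i, o) for i, o in layers)
print(f"{total:,} FLOPs per inference")   # ~1.07 million already
```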