IEEE Internet of Things Journal,
Journal year: 2024,
Issue: 11(12), pp. 21895 - 21903
Published: March 18, 2024
The performance of neural networks is directly affected by the features obtained from the backbones of fault diagnosis networks. To obtain clearer features and improve such networks, this paper constructs a new block based on linear transformation. Firstly, the feature vector is divided into a decisive component and an invalid component. Then, it is worth noting that the orthogonality of these two components is beneficial to model learning. Accordingly, the two components are extracted using spaces constructed from the relationships between the four fundamental sub-spaces of a matrix. Among these sub-spaces, the row space and the null space are employed to extract the decisive component and the useless component, respectively. Both are implemented by layers designed as an encoder-decoder structure to ensure the existence of the null space. For these spaces, a constraint term is proposed to modify their weights. Lastly, cosine similarity is used to ensure that the two components describe the input entirely. When the block is incorporated into some classic classification networks, they can achieve improved accuracy. Moreover, when compared with conventional spatial attention mechanisms, the module demonstrates superior overall performance, including accuracy, antinoise ability, and generalization ability.
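
To make the decomposition concrete, the sketch below illustrates the general idea in PyTorch: a feature vector is split into a component lying in the row space of a learnable matrix and a component lying in its null space, with a cosine-similarity penalty standing in for the constraint term. The paper implements the extraction with encoder-decoder layers; this sketch uses an explicit projection matrix instead, and all names, shapes, and the penalty form are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubspaceSplitBlock(nn.Module):
    """Illustrative only: split features into a 'decisive' part in the row
    space of a learnable matrix W and an 'invalid' part in its null space."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        assert rank < dim  # rank < dim guarantees a non-trivial null space
        self.W = nn.Parameter(0.01 * torch.randn(rank, dim))

    def forward(self, x):
        # Projection onto row(W): P = W^T (W W^T)^{-1} W.
        W = self.W
        gram = W @ W.T + 1e-6 * torch.eye(W.size(0), device=W.device)
        P_row = W.T @ torch.linalg.inv(gram) @ W
        decisive = x @ P_row          # component in the row space
        invalid = x - decisive        # component in the null space (orthogonal)
        return decisive, invalid

    def constraint(self, decisive, invalid):
        # Constraint term: penalise residual alignment between the two parts.
        return F.cosine_similarity(decisive, invalid, dim=-1).abs().mean()


block = SubspaceSplitBlock(dim=64, rank=32)
x = torch.randn(8, 64)                       # a batch of backbone features
d, u = block(x)
penalty = block.constraint(d, u)             # added to the training loss
```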
Sensors,
Journal year: 2025,
Issue: 25(3), pp. 810 - 810
Published: Jan. 29, 2025
Fault diagnosis in modern industrial and information systems is critical for ensuring equipment reliability and operational safety, but traditional methods have difficulty effectively capturing the spatiotemporal dependencies and fault-sensitive features of multi-sensor data, and they rarely consider the dynamic relationships between data. To address these challenges, this study proposes DyGAT-FTNet, a novel graph neural network model tailored to fault detection. The model dynamically constructs association graphs through a learnable graph construction mechanism, enabling automatic adjacency matrix generation based on time-frequency features derived from the short-time Fourier transform (STFT). Additionally, a dynamic graph attention mechanism (DyGAT) enhances the extraction of fault-sensitive features by assigning node weights. A pooling layer further aggregates and optimizes the feature representation. Experimental evaluations on two benchmark fault detection datasets, the XJTUSuprgear dataset and the SEU dataset, show that DyGAT-FTNet significantly outperformed existing methods in classification accuracy, with accuracies of 1.0000 and 0.9995, respectively, highlighting its potential for practical applications.
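
As a rough illustration of the kind of pipeline the abstract describes (and not the authors' implementation), the PyTorch sketch below derives per-sensor node features from STFT magnitudes and builds a learnable adjacency matrix from attention-style pairwise scores before aggregating neighbour features. Layer sizes, the scoring function, and the pooling comment are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicGraphLayer(nn.Module):
    """Illustrative only: learn an adjacency matrix from node embeddings and
    aggregate neighbour features with attention-style weights."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)   # per-sensor node embedding
        self.score = nn.Linear(2 * hid_dim, 1)    # pairwise attention score

    def forward(self, x):
        # x: (batch, nodes, in_dim) -- time-frequency features per sensor.
        h = torch.tanh(self.embed(x))                              # (B, N, H)
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)                  # (B, N, N, H)
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)                  # (B, N, N, H)
        s = self.score(torch.cat([hi, hj], dim=-1)).squeeze(-1)    # (B, N, N)
        adj = torch.softmax(F.leaky_relu(s), dim=-1)               # learned adjacency
        return adj @ h, adj                                        # weighted aggregation


# Toy input: 4 samples, 8 sensor channels, 1024-sample windows.
sig = torch.randn(4 * 8, 1024)
spec = torch.stft(sig, n_fft=256, window=torch.hann_window(256),
                  return_complex=True).abs()                       # (32, 129, frames)
node_feats = spec.mean(dim=-1).view(4, 8, 129)                     # average over time
layer = DynamicGraphLayer(in_dim=129, hid_dim=64)
out, adj = layer(node_feats)   # out: (4, 8, 64); mean-pool over nodes for a graph vector
```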