Automated detection and forecasting of COVID-19 using deep learning techniques: A review
Neurocomputing,
Journal Year:
2024,
Volume and Issue:
577, P. 127317 - 127317
Published: Jan. 26, 2024
Language: English
TeutongNet: A Fine-Tuned Deep Learning Model for Improved Forest Fire Detection
Ghazi Mauer Idroes,
Aga Maulana,
Rivansyah Suhendra
et al.
Leuser Journal of Environmental Studies,
Journal Year:
2023,
Volume and Issue:
1(1), P. 1 - 8
Published: June 22, 2023
Forest fires have emerged as a significant threat to the environment, wildlife, and human lives, necessitating the development of effective early detection systems for firefighting and mitigation efforts. In this study, we introduce TeutongNet, a modified ResNet50V2 model designed to detect forest fires accurately. The model is trained on a curated dataset and evaluated using various metrics. Results show that TeutongNet achieves high accuracy (98.68%) with low false positive and false negative rates. The model's performance is further supported by ROC curve analysis, which indicates a high degree of discrimination in classifying fire and non-fire images. TeutongNet demonstrates its effectiveness in reliable forest fire detection, providing valuable insights for improved fire management strategies.
Language: English
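As an illustration of the approach this abstract describes, the sketch below fine-tunes a pretrained ResNet50V2 backbone in Keras for binary fire/non-fire classification. The directory layout (`forest_fire_dataset/train` with `fire`/`no_fire` subfolders), image size, head configuration, and training schedule are assumptions for illustration, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (224, 224)  # assumed input resolution

# Assumed folder layout: forest_fire_dataset/{train,val}/{fire,no_fire}/*.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "forest_fire_dataset/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = keras.utils.image_dataset_from_directory(
    "forest_fire_dataset/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Pretrained ResNet50V2 backbone, frozen for the first fine-tuning stage.
base = keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # ResNet50V2 expects inputs in [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)                              # illustrative regularization
outputs = layers.Dense(1, activation="sigmoid")(x)      # fire vs. non-fire

model = keras.Model(inputs, outputs)
model.compile(
    optimizer=keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy", keras.metrics.AUC(name="auc")])  # AUC supports ROC-style analysis
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

A usual second stage would unfreeze the top ResNet blocks and continue training at a lower learning rate.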
Enhanced PRIM recognition using PRI sound and deep learning techniques
PLoS ONE,
Journal Year:
2024,
Volume and Issue:
19(5), P. e0298373 - e0298373
Published: May 1, 2024
Pulse repetition interval modulation (PRIM) is integral to radar identification in modern electronic support measure (ESM) and electronic intelligence (ELINT) systems. Various distortions, including missing pulses, spurious pulses, unintended jitters, and noise from antenna scans, often hinder the accurate recognition of PRIM. This research introduces a novel three-stage approach for PRIM recognition, emphasizing the innovative use of PRI sound. A transfer learning-aided deep convolutional neural network (DCNN) is initially used for feature extraction. This is followed by an extreme learning machine (ELM) for real-time classification. Finally, a gray wolf optimizer (GWO) refines the network's robustness. To evaluate the proposed method, we develop a real experimental dataset consisting of the sound of six common PRI modulation patterns. We utilized eight pre-trained DCNN architectures for evaluation, with VGG16 and ResNet50V2 notably achieving accuracies of 97.53% and 96.92%. Integrating the ELM and GWO further optimized the accuracy rates to 98.80% and 97.58%. These advances offer an enhanced recognition method with the potential to address real-world distortions in ESM and ELINT systems.
Language: English
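The extreme learning machine stage of the pipeline above can be sketched compactly: an ELM uses a random, untrained hidden layer and learns only the output weights with a single least-squares solve. The NumPy sketch below assumes DCNN feature vectors have already been extracted (e.g. by VGG16 or ResNet50V2 from PRI-sound representations) and substitutes toy data for the experimental dataset; the GWO refinement stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Single-hidden-layer extreme learning machine for multi-class classification."""

    def __init__(self, n_hidden=512):
        self.n_hidden = n_hidden

    def fit(self, X, Y_onehot):
        # Random hidden layer: weights are drawn once and never trained.
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Output weights from one least-squares solve (Moore-Penrose pseudoinverse).
        self.beta = np.linalg.pinv(H) @ Y_onehot
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Toy stand-in for DCNN features of the six PRI modulation patterns.
n_samples, n_features, n_classes = 600, 2048, 6
X_train = rng.normal(size=(n_samples, n_features))
y_train = rng.integers(0, n_classes, size=n_samples)
Y_onehot = np.eye(n_classes)[y_train]

elm = ELM(n_hidden=512).fit(X_train, Y_onehot)
print("training accuracy:", (elm.predict(X_train) == y_train).mean())
```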
Advancing Virtual Interviews: AI-Driven Facial Emotion Recognition for Better Recruitment
Rohini Mehta,
Pulicharla Sai Pravalika,
Bellamkonda Venkata Naga Durga Sai
et al.
International Journal of Innovative Science and Research Technology (IJISRT),
Journal Year:
2024,
Volume and Issue:
unknown, P. 2288 - 2296
Published: Aug. 8, 2024
Behavior analysis involves the detailed process of identifying, modeling, and comprehending the various nuances and patterns of emotional expressions exhibited by individuals. It poses a significant challenge to accurately detect and predict facial emotions, especially in contexts like remote interviews, which have become increasingly prevalent. Notably, many participants struggle to convey their thoughts to interviewers with a happy expression and good posture, which may unfairly diminish their chances of employment despite their qualifications. To address this challenge, artificial intelligence techniques such as image classification offer promising solutions. By leveraging AI models, behavior analysis can be applied to perceive and interpret reactions, thereby paving the way to anticipate future behaviors based on patterns learned from participants. Despite existing work on facial emotion recognition (FER) using image classification, there is limited research focused on platforms for remote interviews and online courses. In this paper, our primary focus lies on the emotions and cues of happiness, sadness, anger, surprise, eye contact, neutrality, smile, confusion, and stooped posture. We curated a dataset comprising a diverse range of sample images captured through participants' video recordings and other images documenting speech during interviews. Additionally, we integrated the FER 2013 and Celebrity Emotions datasets. Through this investigation, we explore a variety of deep learning methodologies, including VGG19, ResNet50V2, ResNet152V2, Inception-ResNetV2, Xception, EfficientNet B0, and YOLO V8, to analyze facial emotions. Our results demonstrate an accuracy of 73% with the YOLO v8 model. However, we discovered that categories such as surprised and confused are not disjoint, leading to potential inaccuracies in classification. Furthermore, we considered stooped posture a non-essential class, since interviews conducted via webcam do not allow for observation of posture. By removing these overlapping categories, we achieved a remarkable increase in accuracy to around 76.88%.
Language: English
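The class-pruning step reported above (dropping the overlapping and unobservable categories before retraining) might be implemented roughly as in the sketch below, which filters an image-folder emotion dataset down to the retained classes, remaps the labels, and trains a small head on a frozen Xception backbone. The folder name, class names, and model settings are assumptions; the paper's own pipeline (including YOLO v8) is not reproduced here.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (224, 224)

# Assumed layout: fer_interview_dataset/<emotion_name>/*.jpg (class names are hypothetical).
raw = keras.utils.image_dataset_from_directory(
    "fer_interview_dataset", image_size=IMG_SIZE, batch_size=None, label_mode="int")

all_names = raw.class_names
drop = {"confused", "stooped_posture"}               # overlapping / unobservable classes
keep_names = [n for n in all_names if n not in drop]
keep_ids = tf.constant([all_names.index(n) for n in keep_names], dtype=tf.int32)

def is_kept(image, label):
    return tf.reduce_any(tf.equal(tf.cast(label, tf.int32), keep_ids))

def remap(image, label):
    # New label = position of the original label within the kept-class list.
    new_label = tf.argmax(tf.cast(tf.equal(tf.cast(label, tf.int32), keep_ids), tf.int32))
    return image, new_label

ds = raw.filter(is_kept).map(remap).batch(32).prefetch(tf.data.AUTOTUNE)

# Frozen Xception backbone with a softmax head over the remaining emotion classes.
base = keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # Xception expects [-1, 1]
x = base(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(len(keep_names), activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(ds, epochs=5)
```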
Optimizing Coronary Artery Disease Detection Using a New Triple Concatenated Convolution Neural Network
Slamet Riyadi,
Febriyanti Azahra Abidin,
Cahya Damarjati
et al.
Ingénierie des systèmes d information,
Journal Year:
2024,
Volume and Issue:
29(4), P. 1581 - 1589
Published: Aug. 21, 2024
Coronary artery disease (CAD) is a pathological condition that is often fatal and is the main cause of death throughout the world. Early detection of this disease is very important to avoid severe complications such as heart attacks and sudden death. This study employs artificial intelligence, specifically deep learning via Convolutional Neural Networks (CNNs), to enhance CAD detection. While CNN architectures like ResNet50V2 and MobileNetV2 exhibit satisfactory performance individually, they possess distinct strengths and weaknesses. ResNet50V2 requires significant computing resources, hindering its scalability, while MobileNetV2 struggles with extracting complex features from medical images. Therefore, this research aims to combine EfficientNetV2B0, ResNet50V2, and MobileNetV2 using transfer learning techniques for CAD detection. The methodology involves leveraging pre-trained models and fine-tuning them on a coronary artery dataset. The modified models, particularly EfficientNetV2B0 and MobileNetV2, achieve high accuracies of 94% and 86%, respectively, while ResNet50V2 yields 72%. However, combining the models boosts accuracy to 95%, addressing individual model limitations. The concatenated model demonstrates superior predictive capabilities, making more accurate predictions with fewer errors than the individual models.
Language: English
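A minimal sketch of the triple-concatenation idea, assuming a Keras functional model: features from frozen EfficientNetV2B0, ResNet50V2, and MobileNetV2 backbones are pooled, concatenated, and fed to a small binary head. Input size, head width, and optimizer settings are illustrative choices, not the paper's reported configuration.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import EfficientNetV2B0, ResNet50V2, MobileNetV2

IMG_SHAPE = (224, 224, 3)  # assumed input size

def frozen(base):
    base.trainable = False  # transfer learning: keep the ImageNet weights fixed
    return base

inputs = keras.Input(shape=IMG_SHAPE)                        # raw pixels in [0, 255]
scaled = layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # [-1, 1] for ResNet50V2 / MobileNetV2

# Three pretrained backbones; Keras' EfficientNetV2B0 rescales its input internally.
eff = frozen(EfficientNetV2B0(include_top=False, weights="imagenet", input_shape=IMG_SHAPE))
res = frozen(ResNet50V2(include_top=False, weights="imagenet", input_shape=IMG_SHAPE))
mob = frozen(MobileNetV2(include_top=False, weights="imagenet", input_shape=IMG_SHAPE))

# Pool each backbone's feature map, then concatenate the three feature vectors.
features = layers.Concatenate()([
    layers.GlobalAveragePooling2D()(eff(inputs, training=False)),
    layers.GlobalAveragePooling2D()(res(scaled, training=False)),
    layers.GlobalAveragePooling2D()(mob(scaled, training=False)),
])

x = layers.Dense(256, activation="relu")(features)  # fusion head (width is an assumption)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # CAD vs. non-CAD

model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```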