Artificial Intelligence Review,
Journal Year: 2024,
Volume and Issue: 57(7)
Published: June 15, 2024
Abstract
In the rapidly evolving field of Deep Learning (DL), the trustworthiness of models is essential for their effective application in critical domains like healthcare and autonomous systems. Trustworthy DL encompasses aspects such as reliability, fairness, and transparency, which are crucial for its real-world impact and acceptance. However, the development of trustworthy DL faces significant challenges, notably due to adversarial examples, a sophisticated form of evasion attack in adversarial machine learning (AML) that subtly alters inputs to deceive these models and poses a major threat to their safety and reliability. The current body of research primarily focuses on defensive measures, such as enhancing robustness or implementing explainable AI techniques; however, this approach often neglects to address the fundamental vulnerabilities that adversaries exploit. As a result, the field tends to concentrate more on counteracting measures than on gaining an in-depth understanding of adversarial strategies and the inherent vulnerabilities of DL models. This gap in comprehensive understanding impedes the formulation of effective defense mechanisms. This study aims to shift the focus from predominantly defensive measures toward an extensive comprehension of adversarial techniques and the innate vulnerabilities of DL models. We undertake this by conducting a thorough systematic literature review encompassing 49 diverse studies from the previous decade. Our findings reveal the key characteristics of adversarial examples that enable their success against image classification-based DL models. Building on these insights, we propose the Transferable Pretrained Adversarial Deep Learning framework (TPre-ADL), a conceptual model intended to rectify these deficiencies by incorporating the analyzed traits of adversarial examples, potentially strengthening defenses.
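To make the attack concrete: adversarial examples of the kind surveyed above are often generated with gradient-based methods such as the Fast Gradient Sign Method (FGSM). The sketch below is a generic illustration of that idea, not the TPre-ADL framework from the paper; the model handle and epsilon value are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Generic FGSM sketch (illustrative only; not the paper's TPre-ADL):
        # nudge the input along the sign of the loss gradient so the change
        # stays visually subtle yet can flip the model's prediction.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0, 1).detach()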
Patterns,
Journal Year: 2025,
Volume and Issue: 6(2), P. 101175 - 101175
Published: Feb. 1, 2025
Medical conditions and systemic diseases often manifest as distinct facial characteristics, making the identification of these unique features crucial for disease screening. However, detecting them using facial photography remains challenging because of the wide variability in human facial conditions. The integration of artificial intelligence (AI) into facial analysis represents a promising frontier, offering a user-friendly, non-invasive, and cost-effective screening approach. This review explores the potential of AI-assisted facial analysis in identifying subtle facial phenotypes indicative of health disorders. First, we outline the technological framework essential for its effective implementation in healthcare settings. Subsequently, we focus on its role in disease screening. We further expand our examination to include applications in disease monitoring, support for treatment decision-making, and follow-up, thereby contributing to comprehensive disease management. Despite its promise, the adoption of this technology faces several challenges, including privacy concerns, model accuracy, issues with interpretability, biases in AI algorithms, and adherence to regulatory standards. Addressing these challenges is essential to ensure fair and ethical use. By overcoming these hurdles, AI-assisted facial analysis can empower healthcare providers, improve patient care and outcomes, and enhance global health.
Journal of Magnetic Resonance Imaging,
Journal Year: 2025,
Volume and Issue: unknown
Published: Jan. 9, 2025
Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe for breast MRI, covering technical details of pipelines and segmentation models, including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL architectures such as U-Nets, emphasizing implementation with multi-parametric acquisitions. Despite these advancements, CADe systems face challenges like variable false-positive and false-negative rates, the complexity of interpreting extensive imaging data, variability in system performance, and a lack of large-scale studies and multicentric models, limiting their generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable AI to improve transparency and trust among clinicians, building multi-purpose systems, and incorporating large language models for diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing CADe use in clinical practice.
Level of Evidence: NA
Technical Efficacy: Stage 2
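Since the review singles out U-Net-style networks as a workhorse for lesion segmentation, the following minimal sketch shows the core idea of an encoder-decoder with skip connections. Channel sizes and the TinyUNet name are illustrative assumptions, not an architecture evaluated in the review.

    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        # Two 3x3 convolutions with ReLU: the basic U-Net building block
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        # Two-level U-Net sketch for lesion segmentation (illustrative only)
        def __init__(self, in_ch=1, out_ch=1):
            super().__init__()
            self.enc1 = block(in_ch, 32)
            self.enc2 = block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = block(64, 32)   # 64 = 32 skip + 32 upsampled channels
            self.head = nn.Conv2d(32, out_ch, 1)

        def forward(self, x):
            s1 = self.enc1(x)                 # encoder features kept as skip
            b = self.enc2(self.pool(s1))      # bottleneck at half resolution
            u = self.up(b)                    # upsample back to input size
            d = self.dec1(torch.cat([u, s1], dim=1))
            return self.head(d)               # per-pixel lesion logits

    # e.g. logits = TinyUNet()(torch.randn(1, 1, 128, 128))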
Computational and Structural Biotechnology Journal,
Journal Year: 2025,
Volume and Issue: 27, P. 346 - 359
Published: Jan. 1, 2025
The widespread adoption of Artificial Intelligence (AI) and machine learning (ML) tools across various domains has showcased their remarkable capabilities and performance. However, black-box AI models raise concerns about decision transparency and user confidence. Therefore, explainable AI (XAI) and explainability techniques have rapidly emerged in recent years. This paper aims to review existing works on XAI in bioinformatics, with a particular focus on omics and imaging. We seek to analyze the growing demand for XAI, identify current approaches, and highlight their limitations. Our survey emphasizes the specific needs of both bioinformatics applications and users when developing XAI methods, particularly for omics and imaging data. Our analysis reveals a significant demand for XAI, driven by the need for confidence in decision-making processes. At the end of the survey, we provide practical guidelines for system developers.
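As one concrete instance of the post-hoc explainability techniques such surveys cover, a plain gradient saliency map attributes a prediction back to input features (pixels in imaging, variables in omics). The sketch below is a generic illustration under that assumption, not a specific method from the reviewed works.

    import torch

    def gradient_saliency(model, x, target_class):
        # Generic gradient-saliency sketch (illustrative only): the magnitude
        # of d(class score)/d(input) marks which input features (pixels,
        # genes, ...) most influence the model's decision.
        model.eval()
        x = x.clone().detach().requires_grad_(True)
        score = model(x)[0, target_class]   # scalar score for the chosen class
        score.backward()
        return x.grad.abs()                 # per-feature attribution map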
Research Square (Research Square),
Journal Year: 2025,
Volume and Issue: unknown
Published: May 6, 2025
Abstract
The study compares the sensitivity/specificity of classification by pretrained image networks and traditional Machine Learning (ML) methods. One hundred seven spectra each of the benign skin conditions actinic keratosis (ACK) and seborrheic keratosis (SEK) and the skin cancer basal cell carcinoma (BCC) were downloaded from a public database. Eighty spectra per group were used for training and twenty-seven for testing. In the first strategy, spectrum intensity values served as input to Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Decision Tree (DT), TreeBagger, an Ensemble method, Naïve Bayes, Support Vector Machine (SVM), and an Artificial Neural Network (ANN). The second strategy involved using graphs of the spectra saved as images to train GoogLeNet, Places-365, ResNet-50, Inception-V3, DenseNet-201, and NasNetMobile. Strategy 2 yielded better sensitivity/specificity – 0.7/0.91 (ACK), 0.7/0.83 (BCC), 0.63/0.85 (SEK) – compared with Strategy 1 – 0.52/0.94 (ACK), 0.7/0.8 (BCC), 0.5/0.8 (SEK). Grad-CAM mapping suggested that the 1100–1200, 1350–1450, and 1600–1700 1/cm regions may be responsible for the performance of Strategy 2. When these regions were plotted as subplots for Strategy 2, the sensitivity for BCC increased to 0.78. The results suggest that using pretrained image networks to classify spectra may yield better results, give a visual understanding of the basis of classification, and provide a means to improve it further.
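For context, Grad-CAM (used above to locate the influential wavenumber regions) weights a convolutional layer's activation maps by the pooled gradients of the class score and keeps the positive part. The snippet below is a generic sketch of that computation; the study worked with MATLAB pretrained networks, so the model and layer handles here are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def grad_cam(model, x, target_class, conv_layer):
        # Generic Grad-CAM sketch (illustrative only; not the authors' setup).
        acts, grads = {}, {}
        h1 = conv_layer.register_forward_hook(
            lambda m, i, o: acts.update(a=o))
        h2 = conv_layer.register_full_backward_hook(
            lambda m, gi, go: grads.update(g=go[0]))
        score = model(x)[0, target_class]
        model.zero_grad()
        score.backward()
        h1.remove(); h2.remove()
        # Weight each activation map by its average gradient, then ReLU
        w = grads["g"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))
        return F.interpolate(cam, size=x.shape[2:],
                             mode="bilinear", align_corners=False)

    # e.g. cam = grad_cam(net, img, cls, net.layer4[-1])  # hypothetical layer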