PLOS Digital Health,
Journal year: 2024,
Issue: 3(11), pp. e0000651 - e0000651
Published: Nov. 7, 2024
Biases in medical artificial intelligence (AI) arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications that involve clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation and exacerbation of longstanding healthcare disparities. We discuss potential biases that can arise at different stages in the AI development pipeline and how they affect AI algorithms and clinical decision-making. Bias can occur in data features and labels, model development and evaluation, deployment, and publication. Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions. Missing patient findings can also produce biased model behavior, including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually or not easily captured, such as social determinants of health. Expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and diminish a model's clinical utility. When applied to data outside the training cohort, model performance can deteriorate from previous validation and can do so differentially across subgroups.
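One concrete way to surface such differential deterioration is to stratify a standard discrimination metric by subgroup. The sketch below is illustrative only, assuming a binary classifier whose labels, scores, and subgroup memberships are available in a DataFrame; the column names and toy data are invented, not from the paper.

```python
# Minimal sketch: per-subgroup AUROC to expose differential performance drift.
# Column names ("y_true", "y_score", "subgroup") are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auroc(df: pd.DataFrame, group_col: str = "subgroup") -> pd.Series:
    """Compute AUROC separately for each subgroup; a large gap between
    groups flags differential deterioration rather than uniform drift."""
    return df.groupby(group_col)[["y_true", "y_score"]].apply(
        lambda g: roc_auc_score(g["y_true"], g["y_score"])
    )

# Toy usage with placeholder data.
df = pd.DataFrame({
    "y_true":   [0, 1, 1, 0, 1, 0, 1, 0],
    "y_score":  [0.2, 0.9, 0.7, 0.1, 0.4, 0.6, 0.8, 0.3],
    "subgroup": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(subgroup_auroc(df))
```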
How end users interact with deployed solutions can introduce bias. Finally, where models are developed and published, and by whom, impacts the trajectories and priorities of future medical AI development. Solutions to mitigate bias must be implemented with care, and include the collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, emphasis on model interpretability, and standardized bias reporting and transparency requirements.
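Of these solutions, statistical debiasing is the most directly codeable. The abstract does not name a specific method, so the sketch below shows one established preprocessing technique, reweighing (Kamiran and Calders, 2012), which weights each (group, label) cell so that group membership and outcome appear statistically independent during training; all variable names are illustrative.

```python
# Minimal sketch of reweighing: each sample receives weight
# w(g, y) = P(g) * P(y) / P(g, y), so over-represented (group, label)
# combinations are down-weighted and under-represented ones up-weighted.
import numpy as np

def reweighing_weights(groups: np.ndarray, labels: np.ndarray) -> np.ndarray:
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            p_joint = mask.sum() / n
            p_expected = (groups == g).mean() * (labels == y).mean()
            weights[mask] = p_expected / p_joint if p_joint > 0 else 0.0
    return weights
```

The resulting weights can be passed to most scikit-learn estimators via `fit(..., sample_weight=weights)`.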
Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application. Addressing biases across model development stages is crucial for ensuring that all patients benefit equitably from medical AI.
The rapid integration of artificial intelligence (AI) into healthcare has raised concerns among healthcare professionals about the potential displacement of human medical workers by AI technologies. However, the apprehensions and perspectives of healthcare workers regarding their substitution with AI are unknown.
BMC Medical Informatics and Decision Making,
Journal year: 2024,
Issue: 24(1)
Published: Jan. 24, 2024
Abstract
Prostate cancer, the most common cancer in men, is influenced by age, family history, genetics, and lifestyle factors. Early detection of prostate cancer using screening methods improves outcomes, but the balance between overdiagnosis and early detection remains debated. Using Deep Learning (DL) algorithms for prostate cancer detection offers a promising solution for accurate and efficient diagnosis, particularly in cases where interpretation of medical imaging is challenging. In this paper, we propose the Prostate Cancer Detection Model (PCDM) for the automatic diagnosis of prostate cancer. It proves its clinical applicability to aid in the management of prostate cancer in real-world healthcare environments. The PCDM is a modified ResNet50-based architecture that integrates Faster R-CNN and dual optimizers to improve the performance of the detection process.
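The abstract does not reproduce the implementation, so the following is a minimal PyTorch sketch of one plausible reading of that description: torchvision's Faster R-CNN with a ResNet50 backbone and separate optimizers for the backbone and the detection heads. The optimizer choices, learning rates, and backbone/head split are assumptions, not the authors' published settings.

```python
# Sketch of a ResNet50-backed Faster R-CNN trained with dual optimizers.
# Hyperparameters and the parameter split below are illustrative assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)  # lesion vs. background

# Dual optimizers: one for the ResNet50 backbone, one for the detection heads.
backbone_params = [p for n, p in model.named_parameters() if n.startswith("backbone")]
head_params = [p for n, p in model.named_parameters() if not n.startswith("backbone")]
opt_backbone = torch.optim.SGD(backbone_params, lr=1e-4, momentum=0.9)
opt_heads = torch.optim.Adam(head_params, lr=1e-3)

def train_step(images, targets):
    """One step; in train mode the torchvision detector returns a loss dict.
    `images` is a list of CHW tensors, `targets` a list of dicts with
    "boxes" and "labels", per the torchvision detection API."""
    model.train()
    loss = sum(model(images, targets).values())
    opt_backbone.zero_grad()
    opt_heads.zero_grad()
    loss.backward()
    opt_backbone.step()
    opt_heads.step()
    return loss.item()
```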
The model is trained on a large dataset of annotated medical images, and the experimental results show that the proposed model outperforms both the ResNet50 and VGG19 architectures.
Specifically, it achieves high sensitivity, specificity, precision, and accuracy, with rates of 97.40%, 97.09%, 97.56%, and 95.24%, respectively.
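For reference, these four rates are standard functions of a binary confusion matrix. The sketch below shows the definitions; the counts are illustrative placeholders, not the paper's data.

```python
# How the four reported rates derive from binary confusion-matrix counts.
def classification_rates(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),                 # true positive rate (recall)
        "specificity": tn / (tn + fp),                 # true negative rate
        "precision":   tp / (tp + fp),                 # positive predictive value
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

# Placeholder counts for demonstration only.
print(classification_rates(tp=375, tn=400, fp=9, fn=10))
```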