Journal of Innovation and Technology, Journal year: 2024, Issue: 2024(1), Published: Dec. 1, 2024
The mistrust of AI seen in the media, industry and education reflects deep-seated cultural anxieties, often comparable to societal prejudices like racism and sexism. Throughout history, literature and media have portrayed machines as antagonists, amplifying fears of technological obsolescence and identity loss. Despite recent remarkable advancements in AI (particularly in its creative and decision-making capacities), human resistance to its adoption persists, rooted in a combination of technophobia, algorithm aversion and narratives of dystopia. This review investigates the origins of this prejudice, focusing on parallels between current attitudes toward AI and historical reactions to new technologies. Drawing examples from popular culture and research, the article reveals how AI, despite outperforming humans in some tasks, is undervalued due to bias. Evidence shows that the tool can significantly augment human creativity and productivity, yet these benefits are frequently undermined by persistent skepticism. The review argues that this prejudice represents a critical barrier to the full realization of the potential of generative technology, and calls for a reexamination of human-AI collaboration, emphasizing the importance of addressing biases both culturally and within educational and professional frameworks.
PLOS Digital Health, Journal year: 2024, Issue: 3(11), pp. e0000651, Published: Nov. 7, 2024
Biases in medical artificial intelligence (AI) arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications that involve clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation and exacerbation of longstanding healthcare disparities. We discuss potential biases that can arise at different stages of the AI development pipeline and how they affect AI algorithms and clinical decision-making. Bias can occur in data features and labels, model development and evaluation, deployment, and publication. Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions. Missing patient findings can also produce biased model behavior, including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually or easily captured, such as social determinants of health. Expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or biased care practices. Overreliance on performance metrics during model development can obscure bias and diminish a model's utility. When applied to data outside the training cohort, model performance may deteriorate from previous validation and may do so differentially across subgroups. How end users interact with deployed solutions can also introduce bias. Finally, where models are developed and published, and by whom, impacts the trajectories and priorities of future medical AI development. Solutions to mitigate bias must be implemented with care, and include the collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, an emphasis on interpretability, and standardized bias reporting and transparency requirements. Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application. Addressing these biases is crucial for ensuring that all patients benefit equitably from medical AI.
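To make the point about aggregate metrics obscuring subgroup disparities concrete, a minimal evaluation sketch is shown below. It is illustrative only and not from the article; the column names (subgroup, label, score) and the synthetic values are assumptions.

    # Minimal sketch: per-subgroup evaluation of a binary classifier,
    # illustrating how an aggregate metric can hide subgroup disparities.
    # Column names and data are hypothetical, not from the article.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    df = pd.DataFrame({
        "subgroup": ["A"] * 6 + ["B"] * 6,                    # e.g., a demographic attribute
        "label":    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],     # ground-truth outcome
        "score":    [0.9, 0.2, 0.8, 0.3, 0.7, 0.1,            # model risk scores
                     0.6, 0.5, 0.4, 0.55, 0.65, 0.45],
    })

    # Aggregate performance over the whole cohort
    print("Overall AUROC:", roc_auc_score(df["label"], df["score"]))

    # Performance stratified by subgroup: large gaps here suggest bias
    for name, grp in df.groupby("subgroup"):
        print(f"AUROC for subgroup {name}:", roc_auc_score(grp["label"], grp["score"]))

Reporting the stratified metric alongside the aggregate is one simple way to surface the differential performance described above.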
Pulmonary embolism (PE) is a clinically challenging diagnosis whose presentation varies from silent to life-threatening symptoms. Timely diagnosis of the condition is subject to clinical assessment, D-dimer testing and radiological imaging. Computed tomography pulmonary angiogram (CTPA) is considered the gold-standard imaging modality, although some cases can be missed due to reader dependency, resulting in adverse patient outcomes. Hence, it is crucial to implement faster and more precise diagnostic strategies to help clinicians diagnose and treat PE patients promptly and mitigate morbidity and mortality. Machine learning (ML) and artificial intelligence (AI) are newly emerging tools in the medical field, including imaging, with the potential to improve diagnostic efficacy. Our review of studies showed that computer-aided design (CAD) and AI displayed similar or superior sensitivity and specificity in identifying PE on CTPA compared with radiologists. Several studies demonstrated their potential for identifying minor PE on scans, showing a promising ability to help reduce missed diagnoses substantially. However, it is imperative to conduct large trials of these sophisticated tools to integrate their use into the everyday clinical setting and to establish guidelines for their ethical applicability. ML could also assist physicians in formulating individualized management plans to further enhance care.
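Because the comparison between AI/CAD readers and radiologists rests on sensitivity and specificity, the short sketch below shows how those two quantities are computed from paired binary reads; the example values are hypothetical and not taken from the reviewed studies.

    # Minimal sketch: sensitivity and specificity of a binary PE detector
    # against a reference standard. Values are illustrative only.
    from sklearn.metrics import confusion_matrix

    reference = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = PE present on reference read
    ai_read   = [1, 1, 0, 0, 0, 1, 1, 0]   # 1 = PE flagged by the AI/CAD tool

    tn, fp, fn, tp = confusion_matrix(reference, ai_read).ravel()
    sensitivity = tp / (tp + fn)   # proportion of true PEs the tool detects
    specificity = tn / (tn + fp)   # proportion of negatives correctly cleared
    print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")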
Journal of Medical Radiation Sciences, Journal year: 2025, Issue: unknown, Published: Jan. 23, 2025
ABSTRACT
Introduction
Non-small cell lung cancer (NSCLC) is the leading cause of cancer-related mortality worldwide. Despite advancements in early detection and treatment, postsurgical recurrence remains a significant challenge, occurring in 30%–55% of patients within 5 years after surgery. This review analysed existing studies on the utilisation of artificial intelligence (AI), incorporating CT, PET and clinical data, for predicting recurrence risk in early-stage NSCLCs.
Methods
A literature search was conducted across multiple databases, focusing on studies published between 2018 and 2024 that employed radiomics, machine learning or deep learning based on preoperative positron emission tomography (PET), computed tomography (CT) or PET/CT, with or without clinical data integration. Sixteen studies met the inclusion criteria and were assessed for methodological quality using the METhodological RadiomICs Score (METRICS).
Results
The reviewed studies demonstrated the potential of radiomics and AI models for predicting postoperative recurrence risk. Various approaches showed promising results, including handcrafted radiomics features, deep learning models, and multimodal approaches combining different imaging modalities and clinical data. However, several challenges and limitations were identified, such as small sample sizes, lack of external validation, interpretability issues and the need for effective data integration techniques.
Conclusions
Future research should focus on conducting larger, prospective, multicentre studies, improving data integration and interpretability, enhancing the fusion of imaging modalities and clinical data, assessing clinical utility, standardising methodologies and fostering collaboration among researchers and institutions. Addressing these aspects will advance the development of robust and generalizable AI models for NSCLC, ultimately improving patient care and outcomes.
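As a rough sketch of the shared workflow behind these studies (imaging-derived features feeding a classifier that predicts recurrence), the example below trains a simple model on a synthetic feature matrix; every feature, label and parameter in it is an assumption for illustration, not a reconstruction of any reviewed model.

    # Minimal sketch of a radiomics-style recurrence classifier.
    # A synthetic matrix stands in for handcrafted radiomics features;
    # nothing here reproduces a specific model from the reviewed studies.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))   # 200 patients, 20 radiomics-like features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # recurrence label

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    probs = model.predict_proba(X_test)[:, 1]
    print("Hold-out AUROC:", round(roc_auc_score(y_test, probs), 3))

A hold-out (and, ideally, external) evaluation of this kind is exactly what the review flags as missing in many of the included studies.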
Advances in computational intelligence and robotics book series, Journal year: 2025, Issue: unknown, pp. 473-504, Published: Jan. 10, 2025
The utilization of wearable devices (WDs) that are enhanced by artificial intelligence (AI) can have notable potential in healthcare. This chapter aimed to provide an overview of the applications of AI-driven WDs in enhancing the early detection and management of virus infections. First, we presented examples that highlight the capabilities of these devices in the very early monitoring of infections such as COVID-19. In addition, we provided an overview of the utility of machine learning algorithms to analyze large volumes of data for early signs of infection. We also overviewed how AI-driven WDs enable real-time surveillance and effective outbreak management, and showed how this can be achieved via the collection and analysis of diverse WDs' data across various populations. Finally, we discussed the challenges and ethical issues that come with AI-driven WDs in virology diagnostics, including concerns about privacy and security as well as the issue of equitable access.
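As one illustration of how such algorithms might flag early signs of infection from wearable data, the sketch below checks daily resting heart rate against a personal baseline; the series, the baseline window and the threshold are assumptions, not values from the chapter.

    # Minimal sketch: flag days whose resting heart rate deviates from a
    # personal baseline, a simple proxy for the wearable-based early-warning
    # signals discussed in the chapter. Data and threshold are illustrative.
    import numpy as np

    resting_hr = np.array([58, 60, 59, 61, 57, 60, 59, 72, 74, 71])  # beats/min per day

    baseline = resting_hr[:7]                  # first week as personal baseline
    mean, std = baseline.mean(), baseline.std(ddof=1)

    z_scores = (resting_hr - mean) / std
    flagged_days = np.where(z_scores > 3)[0]   # days elevated well above baseline
    print("Days flagged for possible early infection signs:", flagged_days.tolist())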
Review of Accounting and Finance, Journal year: 2025, Issue: unknown, Published: March 10, 2025
Purpose
This study aims to explore current approaches, challenges and practical lessons in auditing artificial intelligence (AI) systems for bias, focusing on legal compliance audits in the USA and the European Union (EU). It emphasizes the need for standardized methodologies to ensure trustworthy AI systems that align with ethical and regulatory expectations.
Design/methodology/approach
A qualitative analysis compared bias audit practices, including US audit report summaries under New York City’s Local Law 144 and the conformity assessments (CAs) required by the EU AI Act. Data was gathered from publicly available reports and guidelines to identify key lessons.
Findings
The findings revealed that AI systems are susceptible to various biases stemming from data, algorithms and human oversight. Although valuable, US bias audits lack standardization, leading to inconsistent reporting practices. The EU’s risk-based CA approach offers a comprehensive framework; however, its effectiveness depends on developing standards for consistent application.
Research limitations/implications
The study is limited by the early implementation stage of both frameworks, particularly the EU AI Act, and by restricted access to audit reports. The geographic focus on two jurisdictions may limit the generalizability of the findings. Data availability constraints across the frameworks may also affect the comparative analysis. Future research should include longitudinal studies of audit effectiveness, the development of intersectional assessment methods and the investigation of automated audit tools that can adapt to emerging technologies while maintaining feasibility across different organizational contexts.
Practical implications
The study underscores the necessity of adopting socio-technical perspectives in AI auditing. It provides actionable insights for firms, regulators and auditors into implementing robust governance and risk practices to mitigate AI biases.
Social implications
Effective bias audits promote algorithmic fairness and prevent discriminatory outcomes in critical domains like employment, health care and financial services. The findings emphasize enhanced stakeholder engagement and community representation in audit processes. Implementing such audits can help close socioeconomic gaps by identifying and mitigating biases disproportionately affecting marginalized groups. This contributes to equitable AI systems that respect diversity and promote social justice alongside technological advancement.
Originality/value
This study adds to the discourse on AI bias auditing by comparing two frameworks, US bias audits and EU CAs, at an early stage of their implementation. It highlights the role of standardization in advancing trustworthy AI in finance and accounting.
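For context on the Local Law 144 audits discussed above, which report selection rates and impact ratios by demographic category, the sketch below computes those quantities on synthetic hiring-tool outcomes; the data and column names are assumptions, and it does not reproduce any published audit.

    # Minimal sketch: selection rates and impact ratios by demographic category,
    # the core quantities reported in Local Law 144-style bias audits.
    # Data and column names are synthetic and purely illustrative.
    import pandas as pd

    df = pd.DataFrame({
        "category": ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
        "selected": [1, 1, 0, 1, 1, 0, 0, 0, 1],   # 1 = advanced by the automated tool
    })

    selection_rates = df.groupby("category")["selected"].mean()
    impact_ratios = selection_rates / selection_rates.max()   # relative to the most-selected category

    print(pd.DataFrame({"selection_rate": selection_rates, "impact_ratio": impact_ratios}))

Standardizing how such tables are produced and reported is one concrete instance of the methodological consistency the study calls for.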