Nature, Journal Year: 2023, Volume and Issue: 619(7969), P. 357 - 362, Published: June 7, 2023
Abstract
Physicians make critical time-constrained decisions every day. Clinical predictive models can help physicians and administrators by forecasting clinical and operational events. Existing structured data-based models have limited use in everyday practice owing to the complexity of data processing, as well as model development and deployment1–3. Here we show that unstructured clinical notes from the electronic health record enable the training of clinical language models, which can be used as all-purpose predictive engines with low-resistance deployment. Our approach leverages recent advances in natural language processing4,5 to train a large language model for medical language (NYUTron) and subsequently fine-tune it across a wide range of clinical and operational predictive tasks. We evaluated our approach within our health system for five such tasks: 30-day all-cause readmission prediction, in-hospital mortality prediction, comorbidity index prediction, length-of-stay prediction and insurance denial prediction. NYUTron has an area under the curve (AUC) of 78.7–94.9%, an improvement of 5.36–14.7% in AUC compared with traditional models. We additionally demonstrate the benefits of pretraining on clinical text, the potential for increasing generalizability to different sites through fine-tuning, and the full deployment of our system in a prospective, single-arm trial. These results show the potential for using clinical language models in medicine to read alongside physicians and provide guidance at the point of care.
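The abstract describes pretraining a language model on clinical notes and then fine-tuning it for prediction tasks such as 30-day readmission. As a rough, generic illustration of that kind of fine-tuning (not the NYUTron pipeline itself), the sketch below frames readmission prediction as binary sequence classification with the Hugging Face transformers API; the checkpoint name, the toy notes and labels, and the hyperparameters are all placeholders.

```python
# Hedged sketch: fine-tune a pretrained clinical language model for
# 30-day readmission prediction framed as binary sequence classification.
# "clinical-bert-placeholder" is a hypothetical checkpoint name, and the
# notes/labels below are toy stand-ins for real discharge notes.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "clinical-bert-placeholder"          # assumption: any BERT-style LM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

notes = ["Discharge summary text ...", "Another discharge summary ..."]
labels = [0, 1]                                   # 1 = readmitted within 30 days

class NoteDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="readmission-model", num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=NoteDataset(notes, labels)).train()
```

The same fine-tuned classifier pattern would apply to the other tasks listed above by swapping the labels (mortality, insurance denial) or the output head (comorbidity index, length of stay).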
IEEE Transactions on Neural Networks and Learning Systems, Journal Year: 2020, Volume and Issue: 32(11), P. 4793 - 4813, Published: Oct. 21, 2020
Recently, artificial intelligence and machine learning in general have demonstrated remarkable performances in many tasks, from image processing to natural language processing, especially with the advent of deep learning. Along with research progress, they have encroached upon many different fields and disciplines. Some of them require a high level of accountability and thus transparency, for example, the medical sector. Explanations for machine decisions and predictions are needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the black-box nature of deep learning is still unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretabilities suggested by different research works and categorize them. The categories show different dimensions of interpretability research, from approaches that provide "obviously" interpretable information to studies of complex patterns. By applying the same categorization to the medical field, it is hoped that (1) clinicians and practitioners can subsequently approach these methods with caution, (2) insights into interpretability will be born from more considerations of medical practices, and (3) initiatives to push forward data-based, mathematically- and technically-grounded education are encouraged.
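One of the dimensions such a categorization covers is attribution of a model's prediction to its inputs, which yields "obviously" interpretable information. As a generic illustration of that family (not a method proposed by the review itself), the sketch below computes an input-gradient saliency map for an image classifier; the untrained ResNet and random input are placeholders.

```python
# Hedged sketch: input-gradient saliency for an image classifier (PyTorch).
# The untrained model and random input are placeholders for illustration.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # placeholder classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()                  # score of the top class
score.backward()                               # gradients w.r.t. input pixels

# Saliency: per-pixel magnitude of the gradient, max over colour channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)
```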
The Journal of Machine Learning for Biomedical Imaging, Journal Year: 2020, Volume and Issue: 1(December 2020), P. 1 - 38, Published: Dec. 15, 2020
Across the world's coronavirus disease 2019 (COVID-19) hot spots, the need to streamline patient diagnosis and management has become more pressing than ever. As one of the main imaging tools, chest X-rays (CXRs) are common, fast, non-invasive, relatively cheap and potentially available at the bedside to monitor the progression of the disease. This paper describes the first public COVID-19 image data collection as well as a preliminary exploration of possible use cases for the data. The dataset currently contains hundreds of frontal view X-rays and is the largest public resource of COVID-19 image and prognostic data, making it a necessary resource to develop and evaluate tools to aid in the treatment of COVID-19. It was manually aggregated from publication figures and various web based repositories into a machine learning (ML) friendly format with accompanying dataloader code. We collected lateral view imagery and metadata such as the time since symptoms, intensive care unit (ICU) status, survival status, intubation status or hospital location. We present multiple possible use cases for the data, such as predicting the need for the ICU, predicting survival and understanding a patient's trajectory during treatment. Data can be accessed here: https://github.com/ieee8023/covid-chestxray-dataset
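Since the collection ships in an ML-friendly format with accompanying dataloader code, a minimal exploration might look like the sketch below. It assumes the repository has been cloned and that the metadata lives in a file named metadata.csv with columns such as "view", "finding" and "survival"; the file and column names are assumptions about the repository layout rather than documented guarantees.

```python
# Hedged sketch: explore the covid-chestxray-dataset metadata after cloning
#   git clone https://github.com/ieee8023/covid-chestxray-dataset
# The file name "metadata.csv" and the column names used below ("view",
# "finding", "survival") are assumptions about the repository layout.
import pandas as pd

meta = pd.read_csv("covid-chestxray-dataset/metadata.csv")

# Keep frontal views only (prognostic labels are attached per image).
frontal = meta[meta["view"].isin(["PA", "AP", "AP Supine"])]

print(frontal["finding"].value_counts())   # diagnosis distribution
print(frontal["survival"].value_counts())  # outcome label availability
```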
Business Process Management Journal, Journal Year: 2020, Volume and Issue: 26(7), P. 1893 - 1924, Published: May 12, 2020
Purpose
The main purpose of our study is to analyze the influence of Artificial Intelligence (AI) on firm performance, notably by building on the business value of AI-based transformation projects. This study was conducted using a four-step sequential approach: (1) analysis of AI and its concepts/technologies; (2) in-depth exploration of case studies from a great number of industrial sectors; (3) data collection from the databases (websites) of AI solution providers; and (4) a review of the literature to identify their impact on the performance of organizations while highlighting AI-enabled transformation projects within organizations.
Design/methodology/approach
This study has called on the theory of IT capabilities to seize the business value of AI (at the organizational and process levels). The research process (responding to the research question, making discussions, interpretations and comparisons, and formulating recommendations) is based on 500 case studies drawn from the IBM, AWS, Cloudera, Nvidia, Conversica and Universal Robots websites, etc. Studying these organizations and, more specifically, such organizations' AI projects required us to make an archival analysis following three steps, namely the conceptual phase, the refinement and development phase, and the assessment phase.
Findings
AI covers a wide range of technologies, including machine translation, chatbots and self-learning algorithms, all of which can allow individuals to better understand their environment and act accordingly. Organizations have been adopting such technological innovations with a view to adapting to or disrupting their ecosystem while developing and optimizing their strategic and competitive advantages. AI fully expresses its potential through its ability to optimize existing processes and improve automation, information and transformation effects, but also to detect, predict and interact with humans. Thus, the results highlighted the benefits of AI in organizations at both the organizational (financial, marketing and administrative) and process levels. By mobilizing these attributes, organizations can, therefore, enhance the business value of their transformed projects. At the same time, the results showed that organizations achieve performance only when they use AI features/technologies to reconfigure their processes.
Research limitations/implications
AI obviously influences the way businesses are done today. Therefore, practitioners and researchers need to consider AI as a valuable support or even a pilot for a new business model. For this study, we adopted a framework geared toward an inclusive and comprehensive approach so as to account for the intangible benefits of AI. In terms of scientific interest, this study nurtures scientific research by aiming at proposing a model for analyzing the influence of AI on firm performance and, at the same time, filling the associated gap in the literature. As a managerial implication, we provide managers with the elements to be reconfigured or added in order to take advantage of the full potential of AI and, therefore, improve the profitability of their investments and gain some competitive advantage. This allows them to consider not a single AI technology but a set/combination of several different technologies and configurations across various areas of the company, because multiple key elements must be brought together to ensure the success of AI: data, talent mix, domain knowledge, key decisions, external partnerships and scalable infrastructure.
Originality/value
This article analyses the reuse of secondary data from AI deployment reports and focuses mainly, though indirectly, on those occurring at the process level. The case studies being examined provide significant and tangible evidence about the impact of AI on firm performance. More specifically, the article, through these case studies, exposes the influence of AI at both the organizational and process levels, considering its use across various industries.
Future Healthcare Journal, Journal Year: 2021, Volume and Issue: 8(2), P. e188 - e194, Published: July 1, 2021
Artificial intelligence (AI) is a powerful and disruptive area of computer science, with the potential to fundamentally transform the practice of medicine and the delivery of healthcare. In this review article, we outline recent breakthroughs in the application of AI in healthcare, describe a roadmap to building effective, reliable and safe AI systems, and discuss the possible future direction of AI-augmented healthcare systems.
Nature Medicine, Journal Year: 2020, Volume and Issue: 26(9), P. 1364 - 1374, Published: Sept. 1, 2020
Abstract
The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The extension includes 14 new items that were considered sufficiently important that they should be routinely reported in addition to the core items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and the provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting and will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias.
Genome Medicine, Journal Year: 2021, Volume and Issue: 13(1), Published: Sept. 27, 2021
Abstract
Deep learning is a subdiscipline of artificial intelligence that uses a machine learning technique called neural networks to extract patterns and make predictions from large data sets. The increasing adoption of deep learning across healthcare domains, together with the availability of highly characterised cancer datasets, has accelerated research into the utility of deep learning in the analysis of the complex biology of cancer. While early results are promising, this is a rapidly evolving field with new knowledge emerging in both deep learning and cancer biology. In this review, we provide an overview of deep learning techniques and how they are being applied in oncology. We focus on applications for omics data types, including genomic, methylation and transcriptomic data, as well as histopathology-based genomic inference, and provide perspectives on how the different data types can be integrated to develop decision support tools. We provide specific examples of how deep learning may be applied in diagnosis, prognosis and treatment management. We also assess the current limitations and challenges for the application of deep learning in precision oncology, including the lack of phenotypically rich data and the need for more explainable models. Finally, we conclude with a discussion of the obstacles that must be overcome to enable future clinical utilisation of deep learning.
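As a concrete, if simplified, illustration of applying deep learning to a single omics data type (not an example taken from the review), the sketch below trains a small feed-forward classifier on a synthetic gene-expression matrix; real use would substitute normalised transcriptomic profiles and clinical labels.

```python
# Hedged sketch: a minimal feed-forward classifier on transcriptomic
# features (e.g. predicting a tumour subtype from expression values).
# The data here are random stand-ins; real use would load normalised
# gene-expression matrices and clinical labels.
import torch
import torch.nn as nn

n_genes, n_subtypes = 2000, 4
X = torch.randn(256, n_genes)                 # synthetic expression matrix
y = torch.randint(0, n_subtypes, (256,))      # synthetic subtype labels

model = nn.Sequential(
    nn.Linear(n_genes, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, n_subtypes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                       # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```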
npj Digital Medicine, Journal Year: 2021, Volume and Issue: 4(1), Published: April 7, 2021
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity between studies was high, and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms in medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
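The abstract notes that estimates were pooled using random-effects meta-analysis. As a hedged illustration of one standard way to do this (the DerSimonian-Laird estimator, not necessarily the authors' exact procedure), the sketch below pools a handful of made-up study-level estimates and standard errors.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of study-level
# estimates (e.g. AUC-like effect sizes with standard errors).
# The effect sizes and standard errors below are illustrative placeholders,
# not values from the meta-analysis.
import numpy as np

theta = np.array([0.90, 0.93, 0.87, 0.95])   # study estimates
se = np.array([0.02, 0.03, 0.04, 0.02])      # their standard errors

w = 1.0 / se**2                              # fixed-effect weights
theta_fe = np.sum(w * theta) / np.sum(w)

# Between-study variance (tau^2), DerSimonian-Laird estimator.
Q = np.sum(w * (theta - theta_fe) ** 2)
df = len(theta) - 1
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights and pooled estimate.
w_re = 1.0 / (se**2 + tau2)
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled estimate: {theta_re:.3f} (SE {se_re:.3f}, tau^2 {tau2:.4f})")
```

The between-study variance tau^2 is what distinguishes the random-effects pooled estimate from a fixed-effect one; the high heterogeneity the abstract reports would show up here as a large tau^2 and wider confidence intervals.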