Indian Journal of Radiology & Imaging,
Journal Year: 2024
Volume and Issue: unknown
Published: Nov. 4, 2024
Abstract
Background
Radiology is critical for diagnosis and patient care, relying heavily on accurate image interpretation. Recent advancements in artificial intelligence (AI) and natural language processing (NLP) have raised interest in the potential of AI models to support radiologists, although robust research on performance in this field is still emerging.
Objective
This study aimed to assess the efficacy of ChatGPT-4 in answering radiological anatomy questions similar to those in the Fellowship of the Royal College of Radiologists (FRCR) Part 1 Anatomy examination.
Methods
We used 100 mock questions from a free website patterned after the FRCR examination. ChatGPT-4 was tested under two conditions: with and without context regarding the examination instructions and question format. The main query posed was: “Identify the structure indicated by the arrow(s).” Responses were evaluated against the correct answers, and expert radiologists (with >5 to 30 years of experience in radiology diagnostics and academics) rated the explanations accompanying the answers. We calculated four scores: correctness, sidedness, modality identification, and approximation. The latter considers partial correctness if the identified structure is present in the image but is not the focus of the question.
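The four-score rubric can be sketched as a simple evaluation function. Everything below, including the field names and the structure of a ground-truth record, is a hypothetical illustration of the scoring idea, not the authors' actual protocol.

```python
# Hypothetical sketch of the four-score rubric described above.
# Field names and record structure are illustrative assumptions.
def score_response(truth, answer):
    """Return the four binary scores for one mock-exam response.

    truth:  dict with 'structure', 'side', 'modality', and
            'nearby_structures' (other structures visible near the arrow).
    answer: dict with the model's 'structure', 'side', and 'modality'.
    """
    correct = answer["structure"] == truth["structure"]
    return {
        "correctness": correct,
        "sidedness": answer["side"] == truth["side"],
        "modality": answer["modality"] == truth["modality"],
        # Approximation: credit if the named structure is present in
        # the image even when it is not the one the arrow points at.
        "approximation": correct
        or answer["structure"] in truth["nearby_structures"],
    }

truth = {"structure": "left renal artery", "side": "left",
         "modality": "CT", "nearby_structures": {"abdominal aorta"}}
answer = {"structure": "abdominal aorta", "side": "left", "modality": "CT"}
print(score_response(truth, answer))
```

In this example the response is wrong on correctness but still earns the sidedness, modality, and approximation scores, which mirrors how a model can post sub-50% correctness while scoring much higher on the softer metrics.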
Results
ChatGPT-4 underperformed under both testing conditions, with correctness scores of 4 and 7.5% in the context and no-context settings, respectively. However, it identified the imaging modality with 100% accuracy. The model scored over 50% on the approximation metric, where it named structures present near the arrow. It struggled with identifying the side of the structure, scoring approximately 42 and 40% in the two settings. Only 32% of responses received satisfactory explanation ratings across both settings.
Conclusion
Despite its ability to correctly recognize the imaging modality, ChatGPT-4 has significant limitations in interpreting normal radiological anatomy. This indicates the necessity of enhanced training before it can better interpret abnormal images. Identifying the correct side of structures in images also remains a challenge for ChatGPT-4.
Research Square,
Journal Year: 2024
Volume and Issue: unknown
Published: June 26, 2024
Abstract
Objective
This study aims to explore the application value of ChatGPT-4.0 in the ultrasonic image analysis of thyroid nodules, comparing its diagnostic efficacy and consistency with those of sonographers.
Methods
This is a prospective study based on real clinical scenarios. It included 124 patients with thyroid nodules confirmed by pathology who underwent ultrasound examinations at Fujian Medical University Affiliated Second Hospital. A physician not involved in the diagnosis collected the ultrasound images, capturing three sections for each nodule (the maximum cross-section, the longitudinal section, and the section best representing the nodular characteristics) for ChatGPT-4.0 analysis, with nodules classified according to the 2020 China Thyroid Nodule Malignancy Risk Stratification Guide (C-TIRADS). Two sonographers with different qualifications (a resident and an attending physician) independently performed the examinations, also classifying the nodules according to the C-TIRADS guidelines. Using fine needle aspiration (FNA) biopsy or surgical results as the gold standard, ChatGPT-4.0's classifications were compared with those of the sonographers.
Results
(1) ChatGPT-4.0 diagnosed thyroid nodules with a sensitivity of 86.2%, a specificity of 60.0%, and an AUC of 0.731, comparable to the resident's 85.1%, 66.7%, and 0.759 (p > 0.05), but lower than the attending physician's 97.9% and 0.889 (p < 0.05). (2) ChatGPT-4.0 showed good consistency in nodule classification (Kappa = 0.729); its consistency with the pathological diagnosis fell between the two sonographers' values (0.457 vs 0.816, respectively).
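The sensitivity, specificity, and Kappa values reported above can be illustrated with a short sketch. The confusion-matrix counts and rater labels below are hypothetical, chosen only so the example produces figures of the same order as those reported; this is not the study's actual analysis code.

```python
# Illustrative sketch: how sensitivity, specificity, and Cohen's kappa
# relate to raw counts. All counts and labels below are hypothetical.
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters' label lists."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical split of the 124 nodules: 94 malignant, 30 benign.
tp, fn = 81, 13   # malignant nodules called malignant / missed
tn, fp = 18, 12   # benign nodules called benign / overcalled
print(round(sensitivity(tp, fn), 3))  # 0.862
print(round(specificity(tn, fp), 3))  # 0.6
```

With these made-up counts the formulas reproduce the reported 86.2% sensitivity and 60.0% specificity, which shows how the headline percentages decompose into per-nodule outcomes.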
Conclusion
ChatGPT-4.0 shows certain value in the risk stratification of thyroid nodules, approaching the level of resident physicians.
BACKGROUND
Effective doctor-patient communication is essential in clinical practice, especially in oncology, where radiology reports play a crucial role. These reports are often filled with technical jargon, making them challenging for patients to understand and affecting their engagement in decision-making. Large Language Models (LLMs), such as Generative Pretrained Transformer-4 (GPT-4), offer a novel approach to simplifying these reports and potentially enhancing patient outcomes.
OBJECTIVE
To assess the feasibility and effectiveness of using GPT-4 to simplify oncological radiology reports and improve doctor-patient communication.
METHODS
In this retrospective study, approved by the Ethics Review Committees of multiple hospitals, 698 radiology reports of malignant tumors from October to December 2023 were analyzed. Seventy reports were selected to develop templates and scoring scales and to create simplified interpretative radiology reports (IRRs). Radiologists checked the consistency between the original radiology reports (ORRs) and the IRRs, while middle-aged volunteers with a high school education and no medical background assessed readability. Doctors evaluated communication efficiency through simulated consultations.
RESULTS
Transforming ORRs into IRRs resulted in clearer reports, with the word count increasing from 818.74 to 1025.82 (P<0.001), volunteers' reading time decreasing from 672.24 to 590.39 seconds, and the reading rate increasing from 72.44 to 104.62 words/min (P<0.001). Doctor-patient communication time was significantly reduced from 1117.30 to 746.84, and patients' comprehension scores improved from 5.49 to 7.82.
CONCLUSIONS
This study demonstrates the significant potential of LLMs, specifically GPT-4, to facilitate the simplification of oncological radiology reports. Simplified reports enhance patient understanding and doctor-patient interactions, suggesting a valuable application of AI in clinical practice to improve healthcare outcomes.
CLINICALTRIAL
Not applicable.
Journal of the Korean Society of Radiology,
Journal Year: 2024
Volume and Issue: 85(5), P. 861 - 861
Published: Jan. 1, 2024
Large language models (LLMs) have revolutionized the global landscape of technology beyond the field of natural language processing. Owing to their extensive pre-training on vast datasets, contemporary LLMs can handle tasks ranging from general functionalities to domain-specific areas, such as radiology, without the need for additional fine-tuning. Importantly, LLMs are on a trajectory of rapid evolution, addressing challenges such as hallucination, bias in training data, high costs, performance drift, and privacy issues, along with the inclusion of multimodal inputs. The concept of small, on-premise, open source LLMs has garnered growing interest, as fine-tuning on medical domain knowledge, efficiency, and management of performance drift and privacy can be effectively and simultaneously achieved. This review provides conceptual and actionable guidance, along with an overview of current technological trends and future directions for radiologists.