Comparative Accuracy of ChatGPT 4.0 and Google Gemini in Answering Pediatric Radiology Text-Based Questions
Mohammed Abdul Sami, Abdul Samad, Keyur Parekh et al.
Cureus, Journal Year: 2024, Volume and Issue: unknown. Published: Oct. 5, 2024
This study evaluates the accuracy of two AI language models, ChatGPT 4.0 and Google Gemini (as of August 2024), in answering a set of 79 text-based pediatric radiology questions from "Pediatric Imaging: A Core Review." Accurate interpretation of text and images is critical in radiology, making such tools valuable in medical education.
Language: English
Cultivating diagnostic clarity: The importance of reporting artificial intelligence confidence levels in radiologic diagnoses
Clinical Imaging, Journal Year: 2024, Volume and Issue: 117, P. 110356. Published: Nov. 13, 2024
Language: English
Evaluation of ChatGPT 4.0 in Thoracic Imaging and Diagnostics
Golnaz Lotfian, Keyur Parekh, Mohammed Abdul Sami et al.
Cureus, Journal Year: 2024, Volume and Issue: unknown. Published: Nov. 15, 2024
Recent advancements in natural language processing (NLP) have profoundly transformed the medical industry, enhancing large cohort data analysis, improving diagnostic capabilities, and streamlining clinical workflows. Among the leading tools in this domain is ChatGPT 4.0 (OpenAI, San Francisco, California, US), a commercial NLP model widely used across various applications. This study evaluates the performance of ChatGPT 4.0 specifically in thoracic imaging by assessing its ability to answer questions related to the field. We utilized the model to respond to multiple-choice questions derived from clinical scenarios, followed by rigorous statistical analysis to assess accuracy and variability across different subgroups. Our analysis revealed significant findings. Overall, the model achieved an impressive 84.9% accuracy in diagnosing thoracic radiology questions. It excelled in terminology and signs, achieving perfect scores, and demonstrated strong performance in the intensive care and normal anatomy categories, with accuracies of 90% and 80%, respectively. In the pathology subgroups, it averaged 89.1%, particularly excelling in infectious pneumonia and atelectasis, though it scored lower in diffuse alveolar disease (66.7%). For disease-related questions, the mean accuracy was 79.1%, with strong scores in several specific subcategories. However, it scored notably lower for vascular disease (50%) and lung cancer questions. In conclusion, while ChatGPT 4.0 shows potential in diagnosing thoracic conditions, the variability identified underscores the necessity of ongoing research and refinement of the transformer architecture. Such work will enhance its reliability and applicability in broader patient care settings.
Language: English
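A minimal sketch of the per-subgroup accuracy tally this abstract describes, written in Python; it is not the authors' code, and the sample records and category names are hypothetical, loosely echoing the abstract's subgroups.

```python
from collections import defaultdict

# Hypothetical graded responses: one record per question, with the question's
# topic subgroup and whether the model answered it correctly.
graded = [
    {"subgroup": "terminology", "correct": True},
    {"subgroup": "intensive care", "correct": True},
    {"subgroup": "diffuse alveolar disease", "correct": False},
    # ... one record per question in the full set
]

def subgroup_accuracy(records):
    """Return overall accuracy and a per-subgroup accuracy map (fractions)."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subgroup"]] += 1
        hits[r["subgroup"]] += r["correct"]  # bool counts as 0/1
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_group

overall, per_group = subgroup_accuracy(graded)
print(f"overall accuracy: {overall:.1%}")
for group, accuracy in sorted(per_group.items()):
    print(f"{group}: {accuracy:.1%}")
```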
ChatGPT-4 Turbo and Meta’s LLaMA 3.1: A Relative Analysis of Answering Radiology Text-Based Questions
Mohammed Abdul Sami, Abdul Samad, Keyur Parekh et al.
Cureus, Journal Year: 2024, Volume and Issue: unknown. Published: Nov. 24, 2024
This study aimed to compare the accuracy of two AI models - OpenAI's GPT-4 Turbo (San Francisco, CA) and Meta's LLaMA 3.1 (Menlo Park, CA) - when answering a standardized set of pediatric radiology questions. The primary objective was to evaluate the overall accuracy of each model, while the secondary objective was to assess their performance within subsections.
Language: English
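The abstract does not name a statistical test, but for two models graded on the same standardized question set, a paired comparison such as McNemar's exact test is a natural choice, since it uses only the questions on which the models disagree. The Python sketch below is a hedged illustration of that idea; the correctness vectors are entirely hypothetical.

```python
from math import comb

def exact_mcnemar(model_a: list[bool], model_b: list[bool]) -> float:
    """Two-sided exact McNemar p-value from paired correctness vectors."""
    b = sum(a and not bb for a, bb in zip(model_a, model_b))  # A right, B wrong
    c = sum(bb and not a for a, bb in zip(model_a, model_b))  # B right, A wrong
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0  # the models never disagree
    # Exact binomial tail: P(X <= k) for X ~ Binomial(n, 0.5), doubled.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical paired outcomes on the same questions (True = correct).
gpt4_turbo = [True, True, False, True, False, True, True, True]
llama_31 = [True, False, False, True, True, True, False, True]
print(f"exact McNemar p-value: {exact_mcnemar(gpt4_turbo, llama_31):.3f}")
```

Pairing matters here: comparing raw accuracies ignores that both models saw identical questions, whereas the discordant-pair counts capture exactly where one model outperformed the other.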
A Cross-Sectional Study Comparing Patient Information Guides Generated by ChatGPT and Google Gemini for Common Radiological Procedures
V.J. Phillips, Nidhi L Rao, Yashasvi H Sanghvi et al.
Cureus, Journal Year: 2024, Volume and Issue: unknown. Published: Nov. 30, 2024
Artificial intelligence (AI) plays a significant role in creating brochures on radiological procedures for patient education. Thus, this study aimed to evaluate the responses generated by ChatGPT (San Francisco, CA: OpenAI) and Google Gemini (Mountain View, CA: Google LLC) for abdominal ultrasound, CT scan, and MRI.
Language: English