Textual Proficiency and Visual Deficiency: A Comparative Study of Large Language Models and Radiologists in MRI Artifact Detection and Correction
Academic Radiology,
Journal year: 2025,
Issue: unknown
Published: Feb. 1, 2025
Language: English
Automated MRI Video Analysis for Pediatric Neuro-Oncology: An Experimental Approach
Applied Sciences,
Journal year: 2024,
Issue: 14(18), pp. 8323–8323
Published: Sep. 15, 2024
Over the past year, there has been a significant rise in interest in the application of open-source artificial intelligence models (OSAIM) in the field of medicine. An increasing number of studies focus on evaluating the capabilities of these models in image analysis, including magnetic resonance imaging (MRI). This study aimed to investigate whether the two most popular AI models, namely ChatGPT 4o and Gemini Pro, can analyze MRI video sequences with single-phase contrast in sagittal and frontal projections, depicting a posterior fossa tumor corresponding to a medulloblastoma in a child. The study utilized video files from contrast-enhanced head MRI in two planes (frontal and sagittal) of a child diagnosed with a tumor of the medulloblastoma type, confirmed by histopathological examination. Each model was separately provided with a file, first in the frontal plane, for analysis using three different sets of commands, from general to specific. The same procedure was then applied to the file in the sagittal plane. Gemini Pro did not conduct a detailed analysis of the pathological change, but correctly identified the content as a brain MRI and suggested that a specialist should perform the evaluation. Conversely, ChatGPT 4o conducted an analysis but failed to recognize the content as an MRI. Its attempts to detect the lesion were random and varied depending on the commands used. Neither model could accurately identify or indicate the area of the neoplastic change, even after applying more specific queries. These results suggest that, despite their widespread use in various fields, such models require further improvements and specialized training to effectively support medical diagnostics.
Language: English
Comparative analysis of artificial intelligence-driven assistance in diverse educational queries: ChatGPT vs. Google Bard
Frontiers in Education,
Journal year: 2024,
Issue: 9
Published: Sep. 26, 2024
Artificial intelligence tools are rapidly growing in education, highlighting the imperative need for a thorough and critical evaluation of their performance. To this aim, this study tests the effectiveness of ChatGPT and Google Bard in answering a range of questions within the engineering and health sectors. True/false, multiple-choice (MCQs), matching, short-answer, essay, and calculation questions were among the question types investigated. Findings showed that ChatGPT 4 surpasses both ChatGPT 3.5 and Bard in terms of creative problem-solving and accuracy across the various question types. The highest accuracy achieved by ChatGPT 4 was on true/false questions, reaching 97.5%, while its least accurate performance was noted at 82.5%. Prompting the models to provide short responses apparently prevented them from hallucinating unrealistic or nonsensical responses. The majority of problems for which the models provided incorrect answers nevertheless demonstrated a correct approach; however, the AI models struggled to accurately perform simple calculations. In MCQs related to the health sciences, the models seemed to have a challenge in discerning the correct answer among several plausible options. While all three models managed essay questions competently, avoiding any blatantly incorrect responses (unlike with the other question types), some nuanced differences were noticed: one model consistently adhered more closely to the prompts, providing straightforward and essential responses, while another showed superiority in adaptability. ChatGPT 4 fabricated references, creating nonexistent authors and research titles in response to prompts requesting sources. While utilizing AI in education holds promise, even the latest and most advanced versions were not able to correctly answer all questions. There remains a significant need for human cognitive skills and for further advancements in AI capabilities.
Language: English
Precision of artificial intelligence in paediatric cardiology multimodal image interpretation
Cardiology in the Young,
Journal year: 2024,
Issue: unknown, pp. 1–6
Published: Nov. 11, 2024
Multimodal imaging is crucial for diagnosis and treatment in paediatric cardiology. However, the proficiency of artificial intelligence chatbots, like ChatGPT-4, in interpreting these images has not been assessed. This cross-sectional study evaluates the precision of ChatGPT-4 in a multimodal paediatric cardiology knowledge assessment, including echocardiograms, angiograms, X-rays, and electrocardiograms. One hundred multiple-choice questions with accompanying images from a textbook
Language: English