Crucial Role of Understanding in Human-Artificial Intelligence Interaction for Successful Clinical Adoption
Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26, Published: Jan. 1, 2025
Language: English
Reflections on 2024 and Perspectives for 2025 for KJR
Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26(1), P. 1 - 1, Published: Jan. 1, 2025
Language: English
Editor’s Note 2024: The Year in Review for Radiology
Radiology, Journal Year: 2025, Volume and Issue: 314(3), Published: March 1, 2025
Language: English
The generative revolution: AI foundation models in geospatial health—applications, challenges and future research
International Journal of Health Geographics, Journal Year: 2025, Volume and Issue: 24(1), Published: April 2, 2025
Language: English
Conversion of Mixed-Language Free-Text CT Reports of Pancreatic Cancer to National Comprehensive Cancer Network Structured Reporting Templates by Using GPT-4
Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26, Published: Jan. 1, 2025
To evaluate the feasibility of generative pre-trained transformer-4 (GPT-4) in generating structured reports (SRs) from mixed-language (English and Korean) narrative-style CT reports for pancreatic ductal adenocarcinoma (PDAC) and to assess its accuracy in categorizing PDAC resectability.

This retrospective study included consecutive free-text pancreas-protocol staging CT reports of PDAC from two institutions, written in English or Korean between January 2021 and December 2023. Both the GPT-4 Turbo and GPT-4o models were provided with prompts along with the reports via an application programming interface and were tasked with generating SRs and categorizing tumor resectability according to the National Comprehensive Cancer Network (NCCN) guidelines version 2.2024. Prompts were optimized for each model using 50 reports from Institution B. The performances of the models on both tasks were evaluated using 115 reports from Institution A. Results were compared with a reference standard that was manually derived by an abdominal radiologist. Each report was consecutively processed three times, with the most frequent response selected as the final output. Error analysis was guided by the decision rationale provided by the models.

Of the narrative reports tested, 96 (83.5%) contained both English and Korean. For SR generation, GPT-4 Turbo and GPT-4o demonstrated comparable accuracies (92.3% [1592/1725] and 92.2% [1590/1725], respectively; P = 0.923). In resectability categorization, GPT-4 Turbo showed higher accuracy than GPT-4o (81.7% [94/115] vs. 67.0% [77/115], P = 0.002). In the error analysis of GPT-4 Turbo, the SR generation error rate was 7.7% (133/1725 items), primarily attributed to inaccurate data extraction (54.1% [72/133]). The categorization error rate was 18.3% (21/115), with the main cause being violation of the NCCN criteria (61.9% [13/21]).

GPT-4 showed acceptable performance in generating NCCN-based SRs of PDAC from free-text reports. However, oversight by human radiologists is essential when determining resectability based on imaging findings.
Language: English
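The abstract above describes prompting the models through an application programming interface and processing each report three times, taking the most frequent response as the final output. The sketch below illustrates that majority-vote step in Python using the OpenAI client library; the prompt wording, category labels, and the `categorize_resectability` helper are illustrative assumptions, not the study's actual prompts or code.

```python
# Hypothetical sketch of the "process three times, keep the most frequent
# response" protocol described in the abstract. Prompt text and labels are
# placeholders, not the authors' materials.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def categorize_resectability(report_text: str, model: str = "gpt-4-turbo") -> str:
    """Query the model three times and return the most frequent answer."""
    prompt = (
        "Based on the following pancreatic CT report, classify the tumor as "
        "'resectable', 'borderline resectable', or 'unresectable' according "
        "to NCCN guidelines. Answer with the category only.\n\n" + report_text
    )
    answers = []
    for _ in range(3):  # each report is consecutively processed three times
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(response.choices[0].message.content.strip().lower())
    # The most frequent of the three responses becomes the final output.
    return Counter(answers).most_common(1)[0][0]
```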
Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations
Korean Journal of Radiology, Journal Year: 2025, Volume and Issue: 26, Published: Jan. 1, 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident examinations, analyze differences across question types, and compare the results with those of residents at different training levels.

A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two sets: one originally written in Korean and the other translated into English. We evaluated GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these sets with the temperature set to zero, determining accuracy based on the majority vote of five independent trials. Accuracy was analyzed by question type (text-only vs. image-based) and benchmarked against nationwide residents' performance. The impact of input language (Korean or English) on model performance was also examined.

GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed accuracy comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, both models performed better than residents of all years (69.0% and 77.9% vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English- and Korean-version question sets showed no significant difference for either model (all P ≥ 0.275).

GPT-4o outperformed GPT-4 Turbo across question types. On image-based questions, the models' performance matched that of 1st-year but not higher-year residents. Both models demonstrated superior performance on text-only compared with image-based questions, with consistent performances across Korean and English inputs.
Language: English
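A minimal Python sketch of the scoring protocol the abstract above describes: five answers per question, a majority vote, and accuracy broken down by question type. The `Question` fields and the example data are assumptions for illustration; the study's actual question set and answer logs are not reproduced here.

```python
# Illustrative scoring of majority-vote answers by question type.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Question:
    qtype: str          # "text-only" or "image-based"
    correct: str        # ground-truth choice, e.g. "B"
    trials: list[str]   # five model answers for this question

def majority_vote(trials: list[str]) -> str:
    """Return the most frequent answer across the independent trials."""
    return Counter(trials).most_common(1)[0][0]

def accuracy_by_type(questions: list[Question]) -> dict[str, float]:
    """Score the majority-vote answer and aggregate accuracy per question type."""
    hits_by_type: dict[str, list[int]] = {}
    for q in questions:
        hit = int(majority_vote(q.trials) == q.correct)
        hits_by_type.setdefault(q.qtype, []).append(hit)
    return {qtype: sum(hits) / len(hits) for qtype, hits in hits_by_type.items()}

# Example: one image-based question answered correctly in 3 of 5 trials.
qs = [Question("image-based", "B", ["B", "B", "C", "B", "D"])]
print(accuracy_by_type(qs))  # {'image-based': 1.0}
```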
Assessing the Guidelines on the Use of Generative Artificial Intelligence Tools in Universities: A Survey of the World’s Top 50 Universities
Big Data and Cognitive Computing, Journal Year: 2024, Volume and Issue: 8(12), P. 194 - 194, Published: Dec. 18, 2024
The widespread adoption of Generative Artificial Intelligence (GenAI) tools in higher education has necessitated the development of appropriate and ethical usage guidelines. This study aims to explore and assess the publicly available guidelines covering the use of GenAI tools in universities, following a predefined checklist.

We searched for and downloaded the accessible guidelines on GenAI use from the websites of the top 50 universities globally, according to the 2025 QS university rankings. From the literature and the retrieved guidelines, we created a 24-item checklist, which was then reviewed by a panel of experts. The checklist was used to assess the characteristics of the retrieved guidelines.

Out of the 50 university websites explored, guidelines were available on the sites of 41 institutions. All of these guidelines allowed the use of GenAI tools in academic settings, provided that the specific instructions detailed in them were followed. These instructions encompassed securing instructor consent before utilization, identifying inappropriate instances of deployment, employing suitable strategies for classroom assessment, appropriately integrating GenAI results, acknowledging and crediting GenAI tools, and adhering to data privacy and security measures. However, our assessment found that only a small number of guidelines offered guidance on the AI algorithm (understanding how it works), documentation of prompts and outputs, detection mechanisms, and the reporting of misconduct.

Higher education institutions should develop comprehensive policies for the responsible use of GenAI tools. These policies must be frequently updated to stay in line with the fast-paced evolution of GenAI technologies and their applications within the academic sphere.
Language: English