Medical Teacher, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 3. Published: March 3, 2025.
Generative Artificial Intelligence (GenAI) has rapidly emerged as a potentially transformative tool in education. Faculty development (FD) programs, particularly curriculum development (CD) programs, are ideal settings for incorporating GenAI to benefit faculty and their learners. However, concerns about accuracy, bias, and ethical implications necessitate structured, responsible integration. We incorporated GenAI across five CD programs at Johns Hopkins University (JHU) in 2023-2024. We developed exercises using customizable prompts aligned with each step of the Six-Step Approach to Curriculum Development for Medical Education and encouraged learners to critically engage with GenAI during required assignments. The structured exercises supported experimentation, critical evaluation, and innovation. Participants reported increased efficiency and creativity. Role modeling, balanced messages about GenAI's capabilities and limitations, and multidisciplinary teamwork were key enablers of success. This pilot offers an example of integrating GenAI into existing FD without requiring additional time or sacrificing the rigor of CD processes. By sharing our findings globally, we hope to help democratize GenAI use and contribute to its responsible, scalable adoption across diverse educational contexts.
Global Medical Education, Journal Year: 2025, Volume and Issue: unknown. Published: Jan. 13, 2025.
Abstract
Objectives
Artificial intelligence (AI) is being increasingly used in medical education. This narrative review presents a comprehensive analysis of generative AI tools' performance in answering and generating exam questions, thereby providing a broader perspective on AI's strengths and limitations in the medical education context.
Methods
The Scopus database was searched for studies on generative AI and medical examinations from 2022 to 2024. Duplicates were removed, and relevant full texts were retrieved following inclusion and exclusion criteria. Narrative review and descriptive statistics were used to analyze the contents of the included studies.
Results
A total of 70 studies were included in the analysis. The results showed that performance varied when answering different types of questions and across specialty examinations, with the best average accuracy in psychiatry, and that performance was influenced by prompts. With well-crafted prompts, models can efficiently produce high-quality examination questions.
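To illustrate what a "well-crafted prompt" for question generation might look like, below is a minimal, hypothetical sketch using the OpenAI Python SDK. The model name, topic, and rubric are illustrative assumptions, not prompts taken from the reviewed studies.

```python
# Hypothetical sketch: generating one exam question with a structured prompt.
# Assumes OPENAI_API_KEY is set in the environment; model choice is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "You are a medical educator. Write one single-best-answer multiple-choice "
    "question on acute asthma exacerbation for preclinical students. Include a "
    "clinical vignette, five answer options (A-E), the correct answer, and a "
    "one-paragraph explanation of the key concept."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; the reviewed studies used various models
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```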
Conclusion
Generative AI possesses the ability to answer examination questions when given carefully designed prompts. Its potential use in assessment is vast, ranging from detecting question errors and aiding exam preparation to facilitating formative assessments and supporting personalized learning. However, it's crucial that educators always double-check AI responses to maintain accuracy and prevent the spread of misinformation.
JMIR Formative Research, Journal Year: 2025, Volume and Issue: 9, P. e66478 - e66478. Published: Jan. 31, 2025.
Abstract
Background
Case studies have shown that ChatGPT can run clinical simulations at the medical student level. However, no data have assessed ChatGPT's reliability in meeting desired simulation criteria such as accuracy, formatting, and robust feedback mechanisms.
Objective
This study aims to quantify ChatGPT's ability to consistently follow formatting instructions and create clinical simulations for preclinical learners according to principles of multimedia educational technology.
Methods
Using ChatGPT-4 and a prevalidated starting prompt, the authors ran 360 separate simulations of an acute asthma exacerbation. A total of 180 simulations were given correct answers and 180 were given incorrect answers. ChatGPT was evaluated for its ability to adhere to basic simulation parameters (stepwise progression, free response, interactivity), advanced simulation parameters (autonomous conclusion, delayed feedback, comprehensive feedback), and accuracy (vignette, treatment updates, feedback). Significance was determined with χ² analyses using 95% CIs and odds ratios.
Results
In total, 100% (n=360) of simulations met all criteria for medical accuracy. For basic simulation parameters, 55% (200/360) met all criteria, while the Correct-answer arm (157/180, 87%) did so significantly more often than the Incorrect-answer arm (43/180, 24%; P<.001). Overall, 79% (285/360) of simulations concluded autonomously, and there was no difference between arms in autonomous conclusion (146/180, 81% vs 139/180, 77%; P=.36). Overall, 78% (282/360) gave comprehensive feedback (137/180, 76% vs 145/180, 81%; P=.31). Simulations were not more likely to conclude autonomously (P=.34) or to provide comprehensive feedback (P=.27) when feedback was delayed compared with immediate.
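As a quick check of the reported arm comparison, the following minimal Python sketch reproduces a χ² test and odds ratio from the published counts (157/180 vs 43/180 meeting all basic parameters). It is an assumption of how such a comparison could be run, not the authors' analysis code.

```python
# Minimal sketch: 2x2 chi-square test on the reported counts.
from scipy.stats import chi2_contingency

# Rows: study arm; columns: met all basic parameters vs did not.
table = [
    [157, 180 - 157],  # Correct-answer arm
    [43, 180 - 43],    # Incorrect-answer arm
]

chi2, p, dof, expected = chi2_contingency(table)

# Odds of meeting all basic parameters, Correct arm relative to Incorrect arm.
odds_ratio = (157 / 23) / (43 / 137)

print(f"chi2={chi2:.1f}, p={p:.2e}, OR={odds_ratio:.1f}")
# Expect p < .001, consistent with the abstract.
```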
Conclusions
These findings demonstrate ChatGPT's potential to be a reliable tool for simple clinical simulations, as assessed by a novel 9-part metric. Per this metric, ChatGPT performed perfectly on the medical accuracy parameters. It also performed well on autonomous conclusion. Delayed feedback depended on user inputs, and one parameter was not consistently met. Further work must be done to ensure consistent performance across a broader range of simulation scenarios.
Internal Medicine Journal, Journal Year: 2025, Volume and Issue: unknown. Published: Feb. 21, 2025.
Abstract
Recent studies challenge the assumption that human–artificial intelligence (AI) collaboration is universally optimal, highlighting tasks where AI alone outperforms combined efforts. This viewpoint discusses the reasons behind these findings, explores influences on human–AI synergy, and emphasises the importance of identifying when clinicians add net benefit to performance. Maximising patient outcomes may require accepting AI autonomy in certain scenarios within healthcare practice.