Journal of Medical Systems, Journal year: 2025, Issue: 49(1), Published: Jan. 16, 2025

Generative Artificial Intelligence (Gen AI) has transformative potential in healthcare to enhance patient care, personalize treatment options, train professionals, and advance medical research. This paper examines various clinical and non-clinical applications of Gen AI. In clinical settings, Gen AI supports the creation of customized treatment plans, generation of synthetic data, analysis of medical images, nursing workflow management, risk prediction, pandemic preparedness, and population health management. By automating administrative tasks such as documentation, it can reduce clinician burnout, freeing more time for direct patient care. Furthermore, its application may improve surgical outcomes by providing real-time feedback and automation of certain tasks in operating rooms. The synthetic data it produces opens new avenues for model training and disease simulation, enhancing research capabilities and improving predictive accuracy. In non-clinical contexts, Gen AI improves medical education, public relations, revenue cycle management, marketing, etc. Its capacity for continuous learning and adaptation enables it to drive ongoing improvements in operational efficiencies, making healthcare delivery more proactive, predictive, and precise.
JMIR Medical Education, Journal year: 2023, Issue: 9, pp. e48785, Published: Sep. 28, 2023

Generative artificial intelligence (AI) technologies are increasingly being utilized across various fields, with considerable interest and concern regarding their potential application in medical education. These technologies, such as ChatGPT and Bard, can generate new content and have a wide range of possible applications.
Medical Teacher, Journal year: 2024, Issue: 46(4), pp. 446-470, Published: Feb. 29, 2024

Background: Artificial Intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature regarding AI applications in medical education, core areas of findings, potential candidates for formal systematic review, and gaps for future research.
Smart Learning Environments, Journal year: 2024, Issue: 11(1), Published: June 18, 2024

Abstract: The growing integration of artificial intelligence (AI) dialogue systems within educational and research settings highlights their importance as learning aids. Despite examination of the ethical concerns associated with these technologies, there is a noticeable gap in investigations on how such issues with AI contribute to students' over-reliance on these systems, and how such over-reliance affects students' cognitive abilities. Overreliance occurs when users accept AI-generated recommendations without question, leading to errors in task performance in the context of decision-making. This typically arises when individuals struggle to assess the reliability of AI or how much trust to place in its suggestions. This systematic review investigates students' over-reliance on AI dialogue systems, particularly those embedded with generative models for academic learning, and its effects on their critical cognitive capabilities, including decision-making, critical thinking, and analytical reasoning. By using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, our review evaluated the body of literature addressing the contributing factors and effects of such over-reliance in academic contexts. The comprehensive search spanned 14 articles retrieved from four distinguished databases: ProQuest, IEEE Xplore, ScienceDirect, and Web of Science. Our findings indicate that over-reliance stemming from these issues impacts students' cognitive abilities, as they increasingly favor fast and optimal solutions over slow ones constrained by practicality. This tendency explains why users prefer efficient shortcuts, or heuristics, even amidst the concerns presented by these technologies.
International Endodontic Journal, Journal year: 2023, Issue: 57(1), pp. 108-113, Published: Oct. 9, 2023

Chatbot Generative Pre-trained Transformer (ChatGPT) is a generative artificial intelligence (AI) software based on large language models (LLMs), designed to simulate human conversations and generate novel content from the training data it has been exposed to. The aim of this study was to evaluate the consistency and accuracy of ChatGPT-generated answers to clinical questions in endodontics, compared with those provided by experts. Ninety-one dichotomous (yes/no) questions were categorized into three levels of difficulty. Twenty questions were randomly selected from each difficulty level. Sixty answers were generated by ChatGPT for each question. Two endodontic experts independently answered the 60 questions. Statistical analysis was performed using the SPSS program to calculate consistency and accuracy against the experts' answers. Confidence intervals (95%) and standard deviations were used to estimate variability. The results showed high consistency (85.44%). No significant differences were found across question difficulty levels. In terms of answer accuracy, ChatGPT achieved an average of 57.33%. However, differences were observed by difficulty, with lower accuracy for easier questions. Currently, ChatGPT is not capable of replacing dentists in clinical decision-making. As ChatGPT's performance improves through deep learning, it is expected to become more useful and effective in the field of endodontics. However, careful attention and ongoing evaluation are needed to ensure its reliability and safety.
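The figures above are a consistency percentage and an accuracy percentage with 95% confidence intervals. A minimal sketch of how such figures can be derived from repeated yes/no answers is shown below; the data and variable names are hypothetical placeholders, not taken from the study.

```python
import numpy as np

# Illustrative only: per-question consistency and overall accuracy with a 95% CI.
rng = np.random.default_rng(0)

modal_share = rng.uniform(0.6, 1.0, size=60)   # share of identical repeated answers per question
correct = rng.integers(0, 2, size=60)          # 1 if the modal answer matches the expert answer

consistency = modal_share.mean() * 100
accuracy = correct.mean() * 100

# Normal-approximation 95% confidence interval for the accuracy proportion
p = correct.mean()
se = np.sqrt(p * (1 - p) / correct.size)
ci_low, ci_high = (p - 1.96 * se) * 100, (p + 1.96 * se) * 100

print(f"consistency: {consistency:.2f}%")
print(f"accuracy: {accuracy:.2f}% (95% CI {ci_low:.2f}-{ci_high:.2f})")
```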
BMC Medical Education, Journal year: 2024, Issue: 24(1), Published: Feb. 14, 2024

Abstract: Background: Large language models like ChatGPT have revolutionized the field of natural language processing with their capability to comprehend and generate textual content, showing great potential to play a role in medical education. This study aimed to quantitatively evaluate and comprehensively analyse the performance of ChatGPT on three types of national examinations in China, including the National Medical Licensing Examination (NMLE), National Pharmacist Licensing Examination (NPLE), and National Nurse Licensing Examination (NNLE). Methods: We collected questions from the Chinese NMLE, NPLE and NNLE from the years 2017 to 2021. In the NMLE and NPLE, each exam consists of 4 units, while in the NNLE, each exam consists of 2 units. Questions containing figures, tables or chemical structures were manually identified and excluded by a clinician. We applied a direct instruction strategy via multiple prompts to force a clear answer and to distinguish between single-choice and multiple-choice questions. Results: ChatGPT failed to pass the accuracy threshold of 0.6 in any unit over the five years. Specifically, the highest accuracies recorded were 0.5467 in 2018 and 0.5599 in 2017, and the most impressive result, shown in 2017, was an accuracy of 0.5897, which is also the highest in our entire evaluation. ChatGPT's performance showed no significant difference across units but did differ across question types. ChatGPT performed well in a range of subject areas, including clinical epidemiology, human parasitology, and dermatology, as well as various topics such as molecules, health management and prevention, and diagnosis and screening. Conclusions: These results, spanning five years of examinations, show a large gap to the passing standard; future training on high-quality medical data will be required to improve ChatGPT's performance.
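As a rough illustration of the evaluation described above (per-unit accuracy checked against the 0.6 passing threshold), the sketch below uses made-up scores; none of the values correspond to the study's results.

```python
# Hypothetical per-unit accuracies checked against the 0.6 passing threshold.
PASS_THRESHOLD = 0.6

unit_accuracy = {
    ("NMLE", 2017, "unit 1"): 0.52,
    ("NMLE", 2017, "unit 2"): 0.49,
    ("NNLE", 2017, "unit 1"): 0.59,
}

for (exam, year, unit), acc in unit_accuracy.items():
    verdict = "pass" if acc >= PASS_THRESHOLD else "fail"
    print(f"{exam} {year} {unit}: accuracy = {acc:.4f} -> {verdict}")
```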
Education Sciences, Journal year: 2024, Issue: 14(6), pp. 656, Published: June 17, 2024

This study addresses the significant challenge posed by the use of Large Language Models (LLMs) such as ChatGPT to the integrity of online examinations, focusing on how these models can undermine academic honesty by demonstrating their latent and advanced reasoning capabilities. An iterative self-reflective strategy was developed for invoking critical thinking and higher-order reasoning in LLMs when responding to complex multimodal exam questions involving both visual and textual data. The proposed strategy was demonstrated and evaluated on real exam questions by subject experts, and the performance of ChatGPT (GPT-4) with vision was estimated on an additional dataset of 600 text descriptions of multimodal exam questions. The results indicate that the self-reflective strategy can invoke latent multi-hop reasoning capabilities within LLMs, effectively steering them towards correct answers by integrating information from each modality into the final response. Meanwhile, the model demonstrated considerable proficiency in being able to answer exam questions across 12 subjects. These findings challenge prior assertions about the limitations of LLMs and emphasise the need for robust security measures, proctoring systems and more sophisticated assessment designs to mitigate potential academic misconduct enabled by AI technologies.
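The iterative self-reflective strategy is described only at a high level in the abstract; the sketch below shows one plausible shape of such a loop. `ask_model` is a hypothetical wrapper around any chat-style LLM API, and the prompts are illustrative rather than the paper's actual wording.

```python
from typing import Callable

def self_reflective_answer(question: str,
                           ask_model: Callable[[str], str],
                           max_rounds: int = 3) -> str:
    """Answer, critique, and revise until the critique reports no issues."""
    answer = ask_model(f"Question:\n{question}\n\nAnswer step by step.")
    for _ in range(max_rounds):
        critique = ask_model(
            "Critically review the answer below for reasoning errors and for "
            "information from either modality (text or image description) that "
            f"was ignored. Reply 'NO ISSUES' if none.\n\nQuestion:\n{question}\n\nAnswer:\n{answer}"
        )
        if "no issues" in critique.lower():
            break
        answer = ask_model(
            f"Revise the answer using this critique.\n\nQuestion:\n{question}\n\n"
            f"Previous answer:\n{answer}\n\nCritique:\n{critique}"
        )
    return answer
```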
Background: Large language models (LLMs) have emerged as powerful tools capable of processing and generating human-like text. These LLMs, such as ChatGPT (OpenAI Incorporated, Mission District, San Francisco, United States), Google Bard (Alphabet Inc., CA, US), and Microsoft Bing (Microsoft Corporation, WA, US), have been applied across various domains, demonstrating their potential to assist in solving complex tasks and improving information accessibility. However, their application to case vignettes in physiology has not been explored. This study aimed to assess the performance of the three LLMs, namely, ChatGPT (3.5; free research version), Bard (Experiment), and Bing (Precise), in answering case vignettes in Physiology. Methods: This cross-sectional study was conducted in July 2023. A total of 77 case vignettes were prepared by two physiologists and validated by other content experts. Each case was presented to each LLM, and the responses were collected. Two raters independently rated the answers provided by the LLMs based on accuracy. The ratings were measured on a scale from 0 to 4 according to the structure of observed learning outcome (pre-structural = 0, uni-structural = 1, multi-structural = 2, relational = 3, extended-abstract = 4). The scores among the LLMs were compared using Friedman's test, and inter-observer agreement was checked using the intraclass correlation coefficient (ICC). Results: The overall scores for ChatGPT, Bing, and Bard in this study, with the 77 cases, were found to be 3.19±0.3, 2.15±0.6, and 2.91±0.5, respectively, p<0.0001. Hence, ChatGPT 3.5 (free version) obtained the highest score, Bing (Precise) had the lowest, and Bard (Experiment) fell between the two in terms of performance. The average ICC values were 0.858 (95% CI: 0.777 to 0.91, p<0.0001), 0.975 (95% CI: 0.961 to 0.984), and 0.964 (95% CI: 0.944 to 0.977), respectively. Conclusion: ChatGPT outperformed Bard and Bing in answering case vignettes in physiology. Students and teachers may think about choosing among these models for educational purposes accordingly for case-based learning. Further exploration of their capabilities is needed before adopting them in medical education and to support clinical decision-making.
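A minimal sketch of the statistical comparison described above (ratings on the 0-4 SOLO-style scale for the same cases from three models, compared with Friedman's test) is shown below, assuming SciPy is available; the ratings are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n_cases = 77

# Simulated 0-4 SOLO-scale ratings for the same cases from three models
chatgpt = rng.integers(2, 5, size=n_cases)
bard = rng.integers(2, 5, size=n_cases)
bing = rng.integers(1, 4, size=n_cases)

stat, p = friedmanchisquare(chatgpt, bard, bing)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Inter-observer agreement (ICC) between two raters would be computed separately,
# e.g. with pingouin.intraclass_corr on a long-format table of ratings.
```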
Medical Education, Journal year: 2024, Issue: unknown, Published: April 19, 2024

Abstract: Introduction: In the past year, the use of large language models (LLMs) has generated significant interest and excitement because of their potential to revolutionise various fields, including medical education for aspiring physicians. Although medical students undergo a demanding educational process to become competent health care professionals, the emergence of LLMs presents a promising solution to challenges like information overload, time constraints and pressure on clinical educators. However, integrating LLMs into medical education raises critical concerns among educators, health care professionals and students. This systematic review aims to explore LLM applications in medical education, specifically their impact on students' learning experiences. Methods: A search was performed in PubMed, Web of Science and Embase for articles discussing LLM applications in medical education, using selected keywords, from ChatGPT's debut until February 2024. Only articles available in full text and in English were reviewed. The credibility of each study was critically appraised by two independent reviewers. Results: The search identified 166 studies, of which 40 were found to be relevant to this study. Among them, key themes included LLM capabilities, benefits such as personalised learning, and concerns regarding content accuracy. Importantly, 42.5% of these studies evaluated LLMs in a novel way, typically ChatGPT, in contexts such as medical exams and clinical/biomedical information, highlighting their potential for replicating human-level performance in medical knowledge. The remaining studies broadly discussed the prospective role of LLMs in medical education, reflecting a keen interest in their future use despite current constraints. Conclusions: The responsible implementation of LLMs offers an opportunity to enhance medical education, but ensuring content accuracy, emphasising skill-building and maintaining ethical safeguards are crucial. Continuous evaluation and interdisciplinary collaboration are essential for the appropriate integration of LLMs into medical education.
Communications Engineering, Journal year: 2024, Issue: 3(1), Published: Sep. 17, 2024

Computer-aided diagnosis (CAD) has advanced medical image analysis, while large language models (LLMs) have shown potential in clinical applications. However, LLMs struggle to interpret medical images, which are critical for clinical decision-making. Here we show a strategy for integrating LLMs with CAD networks. The framework uses the LLMs' medical knowledge and reasoning to enhance the outputs of CAD networks, such as diagnosis, lesion segmentation, and report generation, by summarizing the information in natural language. The generated reports are of higher quality and can improve the performance of vision-based models. In chest X-rays, an LLM strategy using ChatGPT improved performance by 16.42 percentage points compared to state-of-the-art models, and GPT-3 provided a 15.00 percentage point F1-score improvement. Our framework allows accurate report generation and creates a patient-friendly interactive system, unlike conventional CAD systems understood only by professionals. This approach may revolutionize clinical decision-making and patient communication.
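The improvement quoted above is expressed in F1-score percentage points; the toy sketch below shows how such a difference is computed with scikit-learn, using placeholder labels rather than the paper's data.

```python
from sklearn.metrics import f1_score

# Toy binary labels and predictions (placeholders, not the paper's data)
y_true        = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
baseline_pred = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
llm_enhanced  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

f1_base = f1_score(y_true, baseline_pred)
f1_llm = f1_score(y_true, llm_enhanced)

print(f"baseline F1: {f1_base:.4f}, LLM-enhanced F1: {f1_llm:.4f}")
print(f"improvement: {(f1_llm - f1_base) * 100:.2f} percentage points")
```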
Nurse Education Today, Journal year: 2024, Issue: 135, pp. 106121, Published: Feb. 6, 2024

To examine and consolidate the literature regarding the advantages and disadvantages of utilizing ChatGPT in healthcare education and research. We searched seven electronic databases (PubMed/Medline, CINAHL, Embase, PsycINFO, Scopus, ProQuest Dissertations and Theses Global, and Web of Science) from November 2022 until September 2023. This scoping review adhered to Arksey and O'Malley's framework and followed the reporting guidelines outlined in the PRISMA-ScR checklist. For the analysis, we employed Thomas and Harden's thematic synthesis framework. A total of 100 studies were included. An overarching theme, "Forging the Future: Bridging Theory and Integration of ChatGPT", emerged, accompanied by two main themes: (1) Enhancing Healthcare Education, Research, and Writing with ChatGPT, and (2) Controversies and Concerns about ChatGPT in Healthcare Education, Research and Writing, along with their subthemes. Our review underscores the importance of acknowledging legitimate concerns related to the potential misuse of ChatGPT, such as 'ChatGPT hallucinations', its limited understanding of specialized knowledge, its impact on teaching methods and assessments, confidentiality and security risks, and the controversial practice of crediting it as a co-author on scientific papers, among other considerations. Furthermore, our review also recognizes the urgency of establishing timely regulations, along with active engagement of relevant stakeholders, to ensure the responsible and safe implementation of ChatGPT's capabilities. We advocate for the use of cross-verification techniques to enhance the precision and reliability of generated content, the adaptation of higher education curricula to incorporate ChatGPT's potential, educators' need to familiarize themselves with the technology to improve their literacy and teaching approaches, and the development of innovative methods to detect ChatGPT usage. Moreover, data protection measures should be prioritized when employing ChatGPT, and transparent reporting becomes crucial when integrating it into academic writing.