Chat Generative Pre-Trained Transformer (ChatGPT) in Oral and Maxillofacial Surgery: A Narrative Review on Its Research Applications and Limitations
Sung-Woon On,
Seoung-Won Cho,
Sang‐Yoon Park
et al.
Journal of Clinical Medicine, 2025, 14(4), 1363.
Published: Feb. 18, 2025
Objectives: This review aimed to evaluate the role of ChatGPT in original research articles within the field of oral and maxillofacial surgery (OMS), focusing on its applications, limitations, and future directions.
Methods: A literature search was conducted in PubMed using predefined terms and Boolean operators to identify studies utilizing ChatGPT published up to October 2024. The selection process involved screening studies based on their relevance to OMS, with 26 studies meeting the final inclusion criteria.
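For illustration only (the predefined terms and operators are specified in the paper itself), a hypothetical Boolean PubMed strategy of this kind might look like:

    ("ChatGPT" OR "large language model") AND ("oral and maxillofacial surgery" OR "oral surgery")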
Results: ChatGPT has been applied in various OMS-related domains, including clinical decision support in real and virtual scenarios, patient and practitioner education, scientific writing and referencing, and the ability to answer licensing exam questions. As a clinical decision support tool, ChatGPT demonstrated moderate accuracy (approximately 70-80%). It showed high accuracy (up to 90%) in providing patient guidance and information. However, its reliability remains inconsistent across different domains, necessitating further evaluation.

Conclusions: While ChatGPT presents potential benefits in OMS, particularly in supporting clinical decisions and improving access to medical information, it should not be regarded as a substitute for clinicians and must be used as an adjunct tool. Further validation and technological refinements are required to enhance its effectiveness in clinical settings.
Language: English
The impact of the large language model ChatGPT in oral and maxillofacial surgery: A systematic review
British Journal of Oral and Maxillofacial Surgery, 2025 (volume and issue unknown).
Published: March 1, 2025
Language: English
Performance of artificial intelligence chatbots in responding to the frequently asked questions of patients regarding dental prostheses
BMC Oral Health, 2025, 25(1).
Published: April 15, 2025
Artificial intelligence (AI) chatbots are increasingly used in healthcare to address patient questions by providing personalized responses. Evaluating their performance is essential to ensure their reliability. This study aimed to assess the performance of three AI chatbots in responding to the frequently asked questions (FAQs) of patients regarding dental prostheses.
Thirty-one questions were collected from accredited organizations' websites and the "People Also Ask" feature of Google, focusing on removable and fixed prosthodontics.
Two board-certified prosthodontists evaluated response quality using a modified Global Quality Score (GQS) on a 5-point Likert scale.
Inter-examiner agreement was assessed using weighted kappa.
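For reference, weighted kappa is conventionally defined as follows (the abstract does not state the weighting scheme; the quadratic weights shown here are an assumption):

\[
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, o_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}},
\qquad
w_{ij} = \frac{(i-j)^2}{(k-1)^2},
\]

where o_{ij} and e_{ij} are the observed and chance-expected proportions of rating pairs (i, j), and k is the number of rating categories (here, 5).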
Readability was measured using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) indices.
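Both indices are standard functions of average sentence length and average syllables per word:

\[
\mathrm{FRE} = 206.835 - 1.015\left(\frac{\text{words}}{\text{sentences}}\right) - 84.6\left(\frac{\text{syllables}}{\text{words}}\right)
\]
\[
\mathrm{FKGL} = 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59
\]

Higher FRE scores indicate easier text, while FKGL approximates a U.S. school grade level.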
Statistical analyses were performed using repeated measures ANOVA and the Friedman test, with Bonferroni correction for pairwise comparisons (α = 0.05).
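A minimal sketch of the nonparametric part of such an analysis, assuming hypothetical GQS score arrays and the SciPy library (this is an illustrative reconstruction, not the authors' code):

    import numpy as np
    from scipy import stats

    # Hypothetical 5-point GQS ratings: one row per question,
    # columns = (ChatGPT, Google Gemini, Microsoft Copilot).
    scores = np.array([
        [4, 5, 4],
        [3, 5, 4],
        [4, 4, 3],
        [5, 5, 4],
        [4, 5, 3],
        # ... one row for each of the 31 questions
    ])

    # Friedman test across the three related (repeated-measures) samples
    stat, p = stats.friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
    print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

    # Pairwise Wilcoxon signed-rank tests with Bonferroni correction
    pairs = [(0, 1), (0, 2), (1, 2)]
    for i, j in pairs:
        _, p_raw = stats.wilcoxon(scores[:, i], scores[:, j])
        p_corr = min(p_raw * len(pairs), 1.0)
        print(f"chatbot {i} vs {j}: Bonferroni-corrected p = {p_corr:.4f}")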
The inter-examiner agreement was good. Among the chatbots, Google Gemini had the highest quality score (4.58 ± 0.50), significantly outperforming Microsoft Copilot (3.87 ± 0.89) (P = .004). Readability analysis showed that ChatGPT (10.45 ± 1.26) produced more complex responses compared with Gemini (7.82 ± 1.19) and Copilot (8.38 ± 1.59) (P < .001). FRE scores indicated that ChatGPT's responses were categorized as fairly difficult (53.05 ± 7.16), while Gemini's were plain English (64.94 ± 7.29), with a significant difference between them. AI chatbots show great potential in answering patient inquiries about dental prostheses. However, improvements are needed to enhance their effectiveness as patient education tools.
Language: English