Journal of Clinical Periodontology, Journal Year: 2024, Volume and Issue: unknown, Published: Dec. 26, 2024
Artificial intelligence (AI) has the potential to enhance healthcare practices, including periodontology, by improving diagnostics, treatment planning and patient care. This study introduces 'PerioGPT', a specialized AI model designed to provide up-to-date periodontal knowledge using GPT-4o and a novel retrieval-augmented generation (RAG) system.
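The abstract does not describe how PerioGPT's RAG system is built, so the following is only a minimal sketch of the general retrieve-then-generate pattern it names, assuming an embedding-based retriever over a toy corpus; the corpus contents, embedding model and prompt format are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal RAG sketch: embed a toy corpus, retrieve the best-matching snippet,
# and condition GPT-4o on it. All data and model choices are illustrative.
import numpy as np
from openai import OpenAI  # official OpenAI client; needs OPENAI_API_KEY set

client = OpenAI()

CORPUS = [  # hypothetical periodontal knowledge snippets
    "The 2018 AAP/EFP classification stages periodontitis from stage I to IV.",
    "Peri-implant mucositis is reversible with adequate plaque control.",
]

def embed(texts):
    """Embed texts with an OpenAI embedding model (model choice assumed)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(CORPUS)

def answer(question, k=1):
    # Retrieve: rank corpus snippets by cosine similarity to the question.
    q = embed([question])[0]
    sims = corpus_vecs @ q / (np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(CORPUS[i] for i in np.argsort(sims)[::-1][:k])
    # Generate: have GPT-4o answer using only the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How is periodontitis staged?"))
```

Retrieval over a curated, regularly refreshed literature index is what lets such a system stay up-to-date without retraining the underlying model.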
Cureus, Journal Year: 2024, Volume and Issue: unknown, Published: May 8, 2024
Introduction: ChatGPT has been tested in many disciplines, but only a few of these tests have involved hearing diagnosis and none have concerned physiology or audiology more generally. The consistency of the chatbot's responses when the same question is posed multiple times has not been well investigated either. This study aimed to assess the accuracy and repeatability of ChatGPT 3.5 and ChatGPT 4 on test questions concerning objective measures of hearing. Of particular interest was short-term repeatability, which was assessed here on four separate days extended over one week.
Methods: We used 30 single-answer, multiple-choice exam questions from a one-year course on methods of hearing testing. The questions were posed five times to both ChatGPT 3.5 (the free version) and ChatGPT 4 (the paid version) on each of four days (two in one week and two in the following week). Responses were evaluated in terms of agreement with the response key. To evaluate repeatability over time, percent agreement and Cohen's Kappa were calculated.
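As a concrete illustration of these two repeatability metrics (toy data, not the study's), the sketch below compares two hypothetical answer sets for the same 30 multiple-choice questions:

```python
# Percent agreement and Cohen's kappa for two runs of a 30-item test.
from sklearn.metrics import cohen_kappa_score

run1 = list("ABCDABCDABCDABCDABCDABCDABCDAB")  # day-1 answers (invented)
run2 = list("ABCDABCDABCDABCDABCAABCDABCDBB")  # day-2 answers (invented)

# Percent agreement: fraction of items answered identically in both runs.
agreement = sum(a == b for a, b in zip(run1, run2)) / len(run1)

# Cohen's kappa corrects that figure for agreement expected by chance alone.
kappa = cohen_kappa_score(run1, run2)

print(f"percent agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")
```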
Results: The overall accuracy of ChatGPT 3.5 was 48-49%, while that of ChatGPT 4 was 65-69%. ChatGPT 3.5 thus consistently failed to pass the threshold of 50% correct responses. Within a single day, percent agreement was 76-79% for ChatGPT 3.5 and 87-88% for ChatGPT 4 (Cohen's Kappa of 0.67-0.71 and 0.81-0.84, respectively). Agreement between different days was 75-79% and 85-88% (Cohen's Kappa of 0.65-0.69 and 0.80-0.85, respectively). Conclusion: ChatGPT 4 outperforms ChatGPT 3.5, with higher accuracy and repeatability over time. However, the great variability of the responses casts doubt on possible professional applications of both versions.
Journal of Periodontal Research, Journal Year: 2024, Volume and Issue: unknown, Published: July 18, 2024
Abstract
Introduction: The emerging rise in novel computer technologies and automated data analytics has the potential to change the course of dental education. In line with our long‐term goal of harnessing the power of AI to augment didactic teaching, the objective of this study was to quantify and compare the accuracy of responses provided by ChatGPT (GPT‐4 and GPT‐3.5) and Google Gemini, three primary large language models (LLMs), and by human graduate students (control group) to the annual in‐service examination questions posed by the American Academy of Periodontology (AAP).
Methods: Under a comparative cross‐sectional design, a corpus of 1312 questions from the AAP in‐service exams administered between 2020 and 2023 was presented to the LLMs. Their responses were analyzed using chi‐square tests, and their performance was juxtaposed with the scores of periodontal residents from the corresponding years, as the control group. Additionally, two sub‐analyses were performed: one on the performance of the LLMs in each section of the exam, and one on their performance in answering the most difficult questions.
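The abstract names chi‐square tests without giving the underlying contingency tables; a minimal sketch of this kind of comparison, with invented correct/incorrect counts, might look like:

```python
# Chi-square test on a 2x2 table of correct vs. incorrect answer counts.
from scipy.stats import chi2_contingency

#                 correct  incorrect
table = [[1044, 268],   # hypothetical: one LLM over 1312 questions
         [ 956, 356]]   # hypothetical: a comparison group, same questions

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```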
Results: ChatGPT‐4 (total average: 79.57%) outperformed all resident groups as well as GPT‐3.5 and Gemini across the exam years (p < .001). This chatbot showed an accuracy range of 78.80% to 80.98% across the various exam years. Gemini consistently recorded superior scores of 70.65% (p = .01), 73.29% (p = .02), 75.73% and 72.18% (p = .0008) for the exams compared with ChatGPT‐3.5, which achieved 62.5%, 68.24%, 69.83% and 59.27%, respectively. Gemini (total average: 72.86%) surpassed the average scores of first‐year (63.48% ± 31.67) and second‐year residents (66.25% ± 31.61) when combined. However, it could not surpass that of third‐year residents (69.06% ± 30.45).
Conclusions: Within the confines of this analysis, ChatGPT‐4 exhibited a robust capability in terms of accuracy and reliability, while ChatGPT‐3.5 and Gemini showed weaker performance. These findings underscore the potential of deploying LLMs as an educational tool in the periodontics and oral implantology domains. However, the current limitations of these models, such as the inability to effectively process image‐based inquiries, the propensity for generating inconsistent responses to the same prompts, and the achievement of high (80% for GPT‐4) but not absolute accuracy rates, should be considered. A comparison of their performance versus human capacity is required to further develop this field of study.
Brain Sciences, Journal Year: 2024, Volume and Issue: 14(5), P. 465 - 465, Published: May 7, 2024
Testing of ChatGPT has recently been performed over a diverse range of topics. However, most of these assessments have been based on broad domains of knowledge. Here, we test ChatGPT’s knowledge of tinnitus, an important but specialized aspect of audiology and otolaryngology. The test involved evaluating the chatbot's answers to a defined set of 10 questions on tinnitus. Furthermore, given that the technology is advancing quickly, we re-evaluated the responses to the same questions 3 and 6 months later. The accuracy of the responses was rated by experts (the authors) using a Likert scale ranging from 1 to 5. Most responses were rated as satisfactory or better. However, we did detect a few instances where the responses were not accurate and might be considered somewhat misleading. Over the first 3 months, the ratings generally improved, but there was no more significant improvement at 6 months. In our judgment, ChatGPT provided unexpectedly good responses, given that the questions were quite specific. Although no potentially harmful errors were identified, some of the mistakes could be seen as misleading. ChatGPT shows great potential if further developed in specific areas, but for now, it is not yet ready for serious application.
Frontiers in Dental Medicine, Journal Year: 2025, Volume and Issue: 5, Published: Jan. 6, 2025
Artificial intelligence has dramatically reshaped our interaction with digital technologies, ushering in an era where advancements in AI algorithms and Large Language Models (LLMs) have given rise to natural language processing (NLP) systems like ChatGPT. This study delves into the impact of cutting-edge LLMs, notably OpenAI's ChatGPT, on medical diagnostics, with a keen focus on the dental sector. Leveraging publicly accessible datasets, these models augment the diagnostic capabilities of professionals, streamline communication between patients and healthcare providers, and enhance the efficiency of clinical procedures. The advent of ChatGPT-4 is poised to make substantial inroads into dental practices, especially in the realm of oral surgery. This paper sheds light on the current landscape and explores potential future research directions in this burgeoning field, offering valuable insights for both practitioners and developers. Furthermore, it critically assesses the broad implications and challenges within various sectors, including academia and healthcare, thus mapping out an overview of AI's role in transforming diagnostics for enhanced patient care.
BMC Oral Health, Journal Year: 2024, Volume and Issue: 24(1), Published: May 24, 2024
Abstract
Background: The use of artificial intelligence in the field of health sciences is becoming widespread. It is known that patients benefit from artificial intelligence applications on various issues, especially after the pandemic period. One of the most important issues in this regard is the accuracy of the information provided by these applications.
Objective: The purpose of this study was to direct the frequently asked questions about dental amalgam, as determined by the United States Food and Drug Administration (FDA), which is one of these information resources, to Chat Generative Pre-trained Transformer version 4 (ChatGPT-4) and to compare the content of the answers given by the application with those of the FDA.
Methods: The questions were directed to ChatGPT-4 on May 8th and 16th, 2023, and the responses were recorded and compared at the word and meaning levels using ChatGPT. The answers on the FDA webpage were also recorded. The responses were assessed for similarity in terms of the “Main Idea”, “Quality Analysis”, “Common Ideas”, and “Inconsistent Ideas” between ChatGPT-4’s and the FDA’s responses.
Results: ChatGPT-4's responses given at one-week intervals were similar. In comparison with the FDA guidance, it gave similar answers to the questions. However, although there were some similarities in the general aspects of the recommendation regarding the amalgam removal question, the two texts are not the same, and they offered different perspectives on the replacement of fillings.
Conclusions: The findings indicate that ChatGPT-4, an artificial intelligence based application, encompasses current and accurate information regarding dental amalgam and its removal, providing a resource for individuals seeking access to such information. Nevertheless, we believe that numerous studies are required to assess the validity and reliability of ChatGPT across diverse subjects.
Dental Traumatology, Journal Year: 2024, Volume and Issue: unknown, Published: Nov. 22, 2024
ABSTRACT
Background/Aim: Artificial intelligence (AI) chatbots have become increasingly prevalent in recent years as potential sources of online healthcare information for patients when making medical/dental decisions. This study assessed the readability, quality, and accuracy of responses provided by three AI chatbots to questions related to traumatic dental injuries (TDIs), either retrieved from popular question‐answer sites or manually created based on hypothetical case scenarios.
Materials and Methods: A total of 59 traumatic dental injury queries were directed at ChatGPT 3.5, ChatGPT 4.0, and Google Gemini. Readability was evaluated using the Flesch Reading Ease (FRE) and Flesch–Kincaid Grade Level (FKGL) scores.
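Both readability indices are fixed formulas over word, sentence and syllable counts; the sketch below applies the standard definitions to an arbitrary response string (the syllable counter is a crude vowel-group heuristic for illustration, not the validated tooling a study would use):

```python
# Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) formulas.
import re

def counts(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Rough syllable estimate: count groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return len(words), sentences, syllables

def fre(text):
    w, s, syl = counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def fkgl(text):
    w, s, syl = counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

sample = "Rinse the avulsed tooth gently. Reimplant it immediately if possible."
print(f"FRE = {fre(sample):.1f}, FKGL = {fkgl(sample):.1f}")
```

Higher FRE means easier text, while FKGL maps directly to a US school grade level, which is how the college-level finding reported below is expressed.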
To assess response quality and accuracy, the DISCERN tool, the Global Quality Score (GQS), and misinformation scores were used. The understandability and actionability of the responses were analyzed using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT‐P). Statistical analysis included the Kruskal–Wallis test with Dunn's post hoc test for non‐normal variables, and one‐way ANOVA with Tukey's post hoc test for normal variables (p < 0.05).
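As a toy illustration of these two statistical pipelines (invented per-question scores, not the study's data):

```python
# Kruskal-Wallis for non-normal variables; one-way ANOVA plus Tukey's HSD
# post hoc test for normal variables. All scores below are invented.
from scipy.stats import kruskal, f_oneway, tukey_hsd

gpt35  = [2, 3, 2, 4, 3, 2, 3, 2]
gpt4   = [4, 4, 5, 4, 3, 5, 4, 4]
gemini = [3, 4, 3, 3, 4, 3, 2, 4]

h, p_kw = kruskal(gpt35, gpt4, gemini)   # Dunn's post hoc test lives in the
                                         # scikit-posthocs package (not shown)
f, p_anova = f_oneway(gpt35, gpt4, gemini)
print(f"Kruskal-Wallis p = {p_kw:.3f}; ANOVA p = {p_anova:.3f}")
print(tukey_hsd(gpt35, gpt4, gemini))    # pairwise post hoc comparisons
```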
Results: The mean FKGL and FRE scores for ChatGPT 3.5, ChatGPT 4.0, and Google Gemini were 11.2 and 49.25, 11.8 and 46.42, and 10.1 and 51.91, respectively, indicating that the responses were difficult to read and required a college‐level reading ability. ChatGPT 3.5 had the lowest PEMAT‐P scores among the three chatbots (p < 0.001). ChatGPT 4.0 and Gemini responses were rated higher in quality (GQS score of 5) compared with ChatGPT 3.5.
Conclusions: In this study, although widely used, ChatGPT 3.5 provided some misleading and inaccurate responses about TDIs. In contrast, ChatGPT 4.0 and Google Gemini generated more accurate and comprehensive answers, making them reliable auxiliary information sources. However, for complex issues like TDIs, no chatbot can replace the dentist in diagnosis, treatment, and follow‐up care.
Journal of Oral Rehabilitation, Journal Year: 2025, Volume and Issue: unknown, Published: Feb. 6, 2025
ABSTRACT
Background: Artificial Intelligence (AI) has been widely used in health research, but the effectiveness of large language models (LLMs) in providing accurate information on bruxism has not yet been evaluated.
Objectives: To assess the readability, accuracy and consistency of three LLMs in responding to frequently asked questions about bruxism.
Methods: This cross‐sectional observational study utilised the Google Trends tool to identify the 10 most frequent topics about bruxism. Thirty questions were selected, which were submitted to ChatGPT‐3.5, ChatGPT‐4 and Gemini at two different times (T1 and T2). The readability was measured using the Flesch Reading Ease (FRE) and Flesch–Kincaid Grade Level (FKG) metrics. The responses were evaluated for accuracy using a three‐point scale, and consistency was verified by comparing the responses given at T1 and T2. Statistical analysis included ANOVA, chi‐squared tests and Cohen's kappa coefficient, considering a p value of 0.05.
Results: In terms of readability, there was no difference in FRE. The Gemini model showed lower FKG scores than the Generative Pretrained Transformer (GPT)‐3.5 and GPT‐4 models. The average accuracy was 68.33% for GPT‐3.5, 65% for GPT‐4 and 55% for Gemini, with no significant differences (p = 0.290). Consistency was substantial for all models, the highest being for GPT‐3.5 (95%). All three models demonstrated substantial agreement between T1 and T2.
Conclusion: Gemini's greater readability makes its responses potentially more accessible to a broader patient population. All three models showed only moderate accuracy, indicating that these tools should not replace professional dental guidance.
International Dental Journal, Journal Year: 2025, Volume and Issue: unknown, Published: Feb. 1, 2025
In the final part of this two-part article on artificial intelligence (AI) in dentistry, we review its transformative role, focusing on AI in dental education and patient communications, the challenges of integration, strategies to overcome barriers, ethical considerations, and finally, the recently released International Dental Federation (FDI) Communique (white paper) on AI in Dentistry. AI in dental education is highlighted for its potential in enhancing both theoretical and practical dimensions, including telemonitoring and virtual training ecosystems. Challenges to integration are outlined, such as data availability, bias, and human accountability. Strategies to address these include promoting AI literacy, establishing regulations, and pursuing specific implementations. Ethical considerations within AI in dentistry, such as privacy and algorithmic bias, are emphasized. The need for clear guidelines and ongoing evaluation of AI systems is crucial. The FDI White Paper on AI in Dentistry provides insights into the significance of AI in oral care and research, along with standards and governance. It discusses AI's impact on individual patients, community health, and research. The paper also addresses biases, limited generalizability, accessibility, and the regulatory requirements of AI in practice. In conclusion, AI plays a significant role in modern dentistry, offering benefits in diagnosis, treatment planning, and decision-making. While it faces challenges, strategic initiatives and targeted implementations can help overcome barriers and maximize the benefits of AI in dentistry. These efforts are essential for ensuring the responsible, effective and efficacious deployment of AI technologies in the dental ecosystem.