International Journal of Colorectal Disease,
Journal year: 2024,
Issue 39(1)
Published: June 20, 2024
Abstract
Purpose
To examine the ability of generative artificial intelligence (GAI) to answer patients' questions regarding colorectal cancer (CRC).
Methods
Ten clinically relevant questions about CRC were selected from top-rated hospitals' websites and patient surveys and presented to three GAI tools (Chatbot Generative Pre-Trained Transformer [GPT-4], Google Bard, and CLOVA X). Their responses were compared with the answers in a patient information book. Response evaluation was performed by two groups of five evaluators each: healthcare professionals (HCPs) and patients. Each question was scored on a 1–5 Likert scale based on four criteria (maximum score, 20 points/question).
Results
In an analysis including only HCPs, the scores were as follows: book, 11.8 ± 1.2; GPT-4, 13.5 ± 1.1; Bard, 11.5 ± 0.7; and CLOVA X, 12.2 ± 1.4 (P = 0.001). The GPT-4 score was significantly higher than those of the other sources (P = 0.020). In the analysis including only patients, the corresponding scores were 14.1 ± 1.4, 15.2 ± 1.8, 15.5, and 14.4, without significant differences (P = 0.234). When both groups of evaluators were included, the scores ranged from 13.0 ± 0.9 to 13.3 ± 1.5, again without significant differences (P = 0.070).
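The mean ± SD summaries reported above can be reproduced from raw per-question scores; a minimal sketch with Python's standard library, using hypothetical Likert ratings rather than the study's actual data:

```python
from statistics import mean, stdev

# Hypothetical per-question totals (four 1-5 Likert criteria summed,
# max 20 points/question); illustrative values only, not study data.
scores = {
    "book":    [12, 11, 13, 10, 12],
    "GPT-4":   [14, 13, 15, 12, 13],
    "Bard":    [11, 12, 11, 12, 11],
    "CLOVA X": [12, 14, 11, 13, 12],
}

# Report each source's mean and sample standard deviation.
for tool, vals in scores.items():
    print(f"{tool}: {mean(vals):.1f} ± {stdev(vals):.1f}")
```

Testing whether such group means differ (the P values above) would additionally require a significance test, such as a one-way ANOVA over the per-question scores.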
Conclusion
GAIs demonstrated similar or better communicative competence than an information book for questions related to CRC surgery in Korean. If high-quality medical information, properly supervised by HCPs, is provided and published in forms such as a book, it could help patients obtain accurate information and make informed decisions.
Abstract
Background
The use of artificial intelligence (AI) in the field of health sciences is becoming widespread. It is known that patients benefit from AI applications on various issues, especially after the pandemic period. One of the most important issues in this regard is the accuracy of the information provided by these applications.
Objective
The purpose of this study was to pose frequently asked questions about dental amalgam, as determined by the United States Food and Drug Administration (FDA), to Chat Generative Pre-trained Transformer version 4 (ChatGPT-4), one of these resources, and to compare the content of the answers given by the application with that of the FDA.
Methods
The questions were directed to ChatGPT-4 on May 8th and May 16th, 2023, and the responses were recorded and compared at the word and meaning levels using ChatGPT. The answers on the FDA webpage were also recorded. The responses were evaluated for similarity in terms of "Main Idea", "Quality Analysis", "Common Ideas", and "Inconsistent Ideas" between ChatGPT-4's and the FDA's responses.
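The study itself used ChatGPT to judge word- and meaning-level similarity; as a rough stand-in for the word-level step only, a token-overlap ratio can be sketched with Python's standard library (the example answers below are invented for illustration, not quotations from ChatGPT-4 or the FDA):

```python
from difflib import SequenceMatcher

def word_similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 word-level similarity ratio between two answer texts."""
    # Comparing token lists (not raw characters) gives a word-level measure.
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Invented example answers for illustration only.
fda_answer = "Dental amalgam is a mixture of metals used to fill cavities."
gpt_answer = "Dental amalgam is a mixture of metals commonly used for filling cavities."
print(f"word-level similarity: {word_similarity(fda_answer, gpt_answer):.2f}")
```

Meaning-level ("Main Idea", "Common Ideas") comparison cannot be captured by token overlap and would require human raters or a semantic model, as in the study.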
Results
The responses given at one-week intervals were similar to each other. In comparison with the FDA guidance, ChatGPT-4 gave similar answers to most questions. However, although there were some similarities in the general aspects of the recommendation regarding the amalgam removal question, the two texts are not the same, and they offered different perspectives on the replacement of fillings.
Conclusions
These findings indicate that ChatGPT-4, an AI-based application, encompasses current and accurate information regarding dental amalgam and its removal, providing a resource for individuals seeking access to such information. Nevertheless, we believe that numerous studies are required to assess the validity and reliability of such applications across diverse subjects.
The Laryngoscope,
Journal year: 2024,
Issue 134(10), pp. 4225–4231
Published: April 26, 2024
Understanding the strengths and weaknesses of chatbots as a source of patient information is critical for providers in the rising artificial intelligence landscape. This study is the first to quantitatively analyze and compare the four most widely used available chatbots regarding treatments for common pathologies in rhinology.
Journal of Medicine Surgery and Public Health,
Journal year: 2024,
Issue 2, p. 100078
Published: Feb. 27, 2024
The integration of AI-powered ChatGPT in oral and maxillofacial surgery marks a transformative shift in healthcare, enhancing diagnostics, treatment planning, patient communication, and surgical training. Its rapid analysis of vast datasets supports precise, personalized diagnoses and treatment strategies, minimizing risks and improving outcomes. It facilitates virtual consultations, educates patients, and serves as a real-time assistant during procedures, while AI-driven simulations refine the skills of aspiring surgeons in a secure environment. Despite challenges like data privacy and algorithm validation, ongoing research promises to bolster AI's role in surgery. Overall, ChatGPT's incorporation reshapes oral and maxillofacial surgery, promising heightened precision, efficiency, and care quality, ultimately revolutionizing practices and patient well-being.