Revista da Associação Médica Brasileira, Journal Year: 2023, Volume and Issue: 69(10), Published: Jan. 1, 2023
The aim of this study was to evaluate the performance of ChatGPT-4.0 in answering the 2022 Brazilian National Examination for Medical Degree Revalidation (Revalida) and its use as a tool to provide feedback on the quality of the examination. A total of two independent physicians entered all examination questions into ChatGPT-4.0. After comparing the outputs with the test solutions, they classified the large language model's answers as adequate, inadequate, or indeterminate; in cases of disagreement, the answers were adjudicated and a consensus decision on ChatGPT accuracy was achieved. Accuracy across medical themes and between nullified and non-nullified questions was compared using chi-square statistical analysis. In the Revalida examination, ChatGPT-4.0 answered 71 questions (87.7%) correctly and 10 (12.3%) incorrectly. There was no statistically significant difference in the proportions of correct answers among the different themes (p=0.4886). The artificial intelligence had a lower accuracy of 71.4% on nullified questions, with no statistically significant difference (p=0.241) between the nullified and non-nullified groups. ChatGPT-4.0 showed satisfactory performance on the Brazilian National Examination for Medical Degree Revalidation, exhibiting worse performance on subjective questions and public healthcare themes. The results suggested that the overall quality of the examination is satisfactory, corroborating the nullified questions.
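The chi-square comparison of correct-answer proportions described above can be illustrated with a short, hedged Python sketch. The per-theme counts and theme labels below are hypothetical placeholders chosen only to be consistent with the overall 71 correct / 10 incorrect split reported in the abstract; the study's actual contingency table is not given here.

    # Hypothetical sketch of a chi-square comparison of correct vs. incorrect
    # answers across exam themes (counts below are invented for illustration;
    # only the overall 71/10 split comes from the abstract).
    from scipy.stats import chi2_contingency

    # rows = medical themes (hypothetical), columns = [correct, incorrect]
    contingency = [
        [15, 2],  # e.g., clinical medicine
        [14, 3],  # e.g., surgery
        [14, 2],  # e.g., pediatrics
        [14, 2],  # e.g., obstetrics and gynecology
        [14, 1],  # e.g., public health
    ]

    chi2, p_value, dof, expected = chi2_contingency(contingency)
    print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}, dof = {dof}")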
Applied System Innovation, Journal Year: 2023, Volume and Issue: 6(5), P. 96 - 96, Published: Oct. 23, 2023
The field of health and medical sciences has witnessed a surge in published research exploring the applications of ChatGPT. However, there remains a dearth of knowledge regarding its specific potential and limitations within the domain of nutrition. Given the increasing prevalence of nutrition-related diseases, there is a critical need to prioritize the promotion of a comprehensive understanding of nutrition. This paper examines the utility of ChatGPT as a tool for improving nutrition knowledge. Specifically, it scrutinizes its characteristics in relation to personalized meal planning, dietary advice and guidance, food intake tracking, educational materials, and other features commonly found in nutrition applications. Additionally, it explores how ChatGPT can support each stage of the Nutrition Care Process. Addressing the prevailing question of whether ChatGPT can replace healthcare professionals, this paper elucidates its substantial limitations in the context of nutrition practice and education. These encompass factors such as incorrect responses, along with limitations around coordinated services, hands-on demonstration, physical examination, verbal and non-verbal cues, emotional and psychological aspects, real-time monitoring and feedback, and wearable device integration; ethical and privacy concerns have also been highlighted. In summary, ChatGPT holds promise as a valuable tool for enhancing nutrition knowledge, but further development and research are needed to optimize its capabilities in this domain.
Journal of Computer Assisted Learning, Journal Year: 2023, Volume and Issue: 40(2), P. 919 - 930, Published: Dec. 21, 2023
Abstract
Background: The increasing prevalence of Artificial Intelligence (AI) language models, exemplified by ChatGPT, has sparked inquiries into their influence on creative writing skills in educational contexts. This study aims to quantitatively investigate whether ChatGPT's use negatively affects university students' creative writing abilities, focusing on originality, content presentation, accuracy, and elaboration in essays. The research adopts an experimental approach to shed light on this concern.
Objective: To investigate whether the utilization of the AI chatbot ChatGPT adversely affects specific dimensions of creative writing among university students, with emphasis on originality, content presentation, accuracy, and elaboration.
Method: The study involves 600 students from 10 universities, divided into a control group and an experimental group (EGp). The EGp incorporates ChatGPT into the writing process as an intervention. The study evaluates originality, content presentation, accuracy, and elaboration, utilizing the Wilcoxon Signed-Rank Test for analysis.
Results and Conclusion: The findings reveal a detrimental association between ChatGPT use and creative writing abilities. Analysing both machine-based and human-based assessments substantiates earlier qualitative observations regarding the adverse impact of ChatGPT on creative writing. The study highlights the necessity of approaching AI integration, particularly in creative disciplines, with caution. While AI tools have their merits, their integration should be thoughtful, considering potential drawbacks. These insights inform future educational practices, guiding effective AI incorporation while nurturing creative writing skills.
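The Wilcoxon Signed-Rank Test named in the Method section is a paired, non-parametric comparison; a minimal Python sketch of such a comparison is shown below. All scores are invented for illustration and do not come from the study.

    # Minimal sketch of a Wilcoxon signed-rank comparison of paired essay
    # scores (e.g., elaboration rated before and after an intervention).
    # Scores are hypothetical.
    from scipy.stats import wilcoxon

    pre  = [4.0, 3.5, 4.2, 3.8, 4.5, 3.9, 4.1, 3.7, 4.3, 4.0]
    post = [3.6, 3.4, 3.9, 3.5, 4.4, 3.6, 3.8, 3.5, 4.0, 3.7]

    statistic, p_value = wilcoxon(pre, post)
    print(f"W = {statistic:.1f}, p = {p_value:.4f}")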
Nurse Education Today, Journal Year: 2024, Volume and Issue: 135, P. 106121 - 106121, Published: Feb. 6, 2024
To examine and consolidate the literature regarding the advantages and disadvantages of utilizing ChatGPT in healthcare education and research. We searched seven electronic databases (PubMed/Medline, CINAHL, Embase, PsycINFO, Scopus, ProQuest Dissertations and Theses Global, and Web of Science) from November 2022 until September 2023. This scoping review adhered to Arksey and O'Malley's framework and followed the reporting guidelines outlined in the PRISMA-ScR checklist. For the analysis, we employed Thomas and Harden's thematic synthesis framework. A total of 100 studies were included. An overarching theme, "Forging the Future: Bridging Theory and Integration of ChatGPT," emerged, accompanied by two main themes, (1) Enhancing Healthcare Education, Research, and Writing with ChatGPT and (2) Controversies and Concerns about ChatGPT in Healthcare Education, Research, and Writing, each with subthemes. Our review underscores the importance of acknowledging legitimate concerns related to its potential misuse, such as 'ChatGPT hallucinations', its limited understanding of specialized knowledge, its impact on teaching methods and assessments, confidentiality and security risks, and the controversial practice of crediting it as a co-author on scientific papers, among other considerations. Furthermore, our review also recognizes the urgency of establishing timely regulations, along with the active engagement of relevant stakeholders, to ensure the responsible and safe implementation of ChatGPT's capabilities. We advocate for the use of cross-verification techniques to enhance the precision and reliability of generated content, the adaptation of higher education curricula to incorporate ChatGPT's potential, educators' need to familiarize themselves with the technology to improve their literacy and teaching approaches, and the development of innovative methods to detect ChatGPT usage. Data protection measures should be prioritized when employing ChatGPT, and transparent reporting becomes crucial when integrating it into academic writing.
Journal of the American Medical Informatics Association, Journal Year: 2024, Volume and Issue: 31(6), P. 1436 - 1440, Published: Jan. 25, 2024
Abstract
Purpose: This article explores the potential of large language models (LLMs) to automate administrative tasks in healthcare, alleviating the burden on clinicians caused by electronic medical records.
Potential: LLMs offer opportunities in clinical documentation, prior authorization, patient education, and access to care. They can personalize patient scheduling, improve documentation accuracy, streamline insurance authorization, increase patient engagement, and address barriers to healthcare access.
Caution: However, integrating LLMs requires careful attention to security and privacy concerns, protecting patient data, and complying with regulations like the Health Insurance Portability and Accountability Act (HIPAA). It is crucial to acknowledge that LLMs should supplement, not replace, the human connection and care provided by healthcare professionals.
Conclusion: By prudently utilizing LLMs alongside human expertise, healthcare organizations can improve patient outcomes. Implementation should be approached with caution and consideration to ensure the safe and effective use of LLMs in the healthcare setting.
BMC Medical Education, Journal Year: 2024, Volume and Issue: 24(1), Published: March 29, 2024
Abstract
Background: Writing multiple choice questions (MCQs) for the purpose of medical exams is challenging. It requires extensive knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating MCQs.
Methods: The authors searched for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. MEDLINE was used as a search database. Risk of bias was evaluated using a tailored QUADAS-2 tool.
Results: Overall, eight studies published between April 2023 and October 2023 were included. Six used Chat-GPT 3.5, while two employed GPT 4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write MCQs but did not evaluate the validity of the questions. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies presented faulty questions that were deemed inappropriate for medical exams, and some questions required additional modifications in order to qualify.
Conclusions: LLMs can be used to write MCQs for medical examinations. However, their limitations cannot be ignored. Further research in this field is essential, and more conclusive evidence is needed. Until then, LLMs may serve as a supplementary tool for writing MCQs. Two studies were at high risk of bias. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
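As a rough illustration of the kind of LLM-based MCQ drafting the reviewed studies evaluate, the sketch below prompts a chat model through the OpenAI Python client. The model identifier, prompt wording, and exam topic are assumptions chosen for illustration only and are not taken from any of the included studies.

    # Hedged sketch: ask a chat LLM to draft one single-best-answer MCQ.
    # Requires the openai package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Write one single-best-answer multiple choice question on the "
        "management of community-acquired pneumonia for a final-year "
        "medical student exam. Give five options (A-E), mark the correct "
        "answer, and add a one-sentence explanation."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

Any question produced this way would still need expert review, consistent with the review's conclusion that LLM output can contain faulty items and requires validation before exam use.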
JAMA Network Open, Journal Year: 2024, Volume and Issue: 7(4), P. e244630 - e244630, Published: April 2, 2024
Artificial intelligence (AI) large language models (LLMs) demonstrate potential in simulating human-like dialogue. Their efficacy in accurate patient-clinician communication within radiation oncology has yet to be explored.
PLOS Digital Health, Journal Year: 2024, Volume and Issue: 3(5), P. e0000503 - e0000503, Published: May 23, 2024
Generative artificial intelligence (AI) can exhibit biases, compromise data privacy, misinterpret prompts that are adversarial attacks, and produce hallucinations. Despite the potential of generative AI for many applications in digital health, practitioners must understand these tools and their limitations. This scoping review pays particular attention to the challenges of using these technologies in medical settings and surveys potential solutions. Using PubMed, we identified a total of 120 articles published by March 2024 which reference and evaluate generative AI in medicine, and from these we synthesized themes and suggestions for future work. After first discussing general background on generative AI, we focus on collecting and presenting 6 key challenges in digital health, along with specific measures that can be taken to mitigate these challenges. Overall, bias, hallucination, and regulatory compliance were frequently considered, while other concerns around use, such as overreliance on text models, misprompting, and jailbreaking, were not commonly evaluated in the current literature.