Frontiers in Psychiatry, Journal Year: 2023, Volume and Issue: 14, Published: Dec. 22, 2023
The sudden appearance and devastating effects of the COVID-19 pandemic resulted in the need for multiple adaptive changes in societies, business operations, and healthcare systems across the world. This review describes the development and increased use of digital technologies such as chatbots, electronic diaries, online questionnaires, and even video gameplay to maintain effective treatment standards for individuals with mental health conditions such as depression, anxiety, and post-traumatic stress syndrome. We describe how these approaches have been applied to help meet the challenges of delivering mental healthcare solutions. The main focus of this narrative review is on describing platforms used for diagnostics, patient monitoring, and as a treatment option for the general public as well as for frontline medical staff suffering from mental health issues.

Journal of Applied Learning & Teaching, Journal Year: 2023, Volume and Issue: 6(2), Published: Sept. 26, 2023
As the application of Artificial Intelligence (AI) continues to permeate various sectors, the educational landscape is no exception. Several AI in education (AIEd) applications, like chatbots, present an intriguing array of opportunities and challenges. This paper provides an in-depth exploration of the use and role of chatbots in education and research, focusing on the benefits (the good) and the potential pitfalls (the bad and the ugly) associated with the deployment of chatbots and other AIEds. The benefits explored include personalised learning, facilitation of administrative tasks, enriched research capabilities, and the provision of a platform for collaboration. These advantages are balanced against downsides such as job displacement, misinformation, plagiarism, and the erosion of human connection. Ethical considerations, particularly concerning data privacy, bias reinforcement, and the digital divide, are also examined. Conclusions drawn from this analysis stress the importance of striking a balance between AI capabilities and human elements in education, as well as of developing comprehensive ethical frameworks for AIEd contexts.

Journal of Multidisciplinary Healthcare, Journal Year: 2024, Volume and Issue: Volume 17, P. 461 - 471, Published: Jan. 1, 2024
Background: Artificial Intelligence (AI) applications are widely researched for their potential to effectively improve healthcare operations and disease management. However, the research trend shows that these applications can also have significant negative implications for service delivery. Purpose: To assess the use of ChatGPT for mental health support. Methods: Due to the novelty of and unfamiliarity with the technology, a quasi-experimental design was chosen for this study. Outpatients from a public hospital were included in the sample. A two-week experiment, followed by semi-structured interviews, was conducted in which participants used ChatGPT for mental health support. Semi-structured interviews were held with 24 individuals with mental health conditions. Results: Eight positive factors (psychoeducation, emotional support, goal setting and motivation, referral and resource information, self-assessment and monitoring, cognitive behavioral therapy, crisis interventions, and psychotherapeutic exercises) and four negative factors (ethical and legal considerations, accuracy and reliability, limited assessment capabilities, and cultural and linguistic considerations) were associated with using ChatGPT for mental health support. Conclusion: It is important to carefully consider the ethical, accuracy, and related challenges and to develop appropriate strategies to mitigate them in order to ensure the safe and effective use of AI-based tools like ChatGPT. Keywords: ChatGPT, artificial intelligence, mentally-ill patients, anxiety

World Psychiatry, Journal Year: 2024, Volume and Issue: 23(2), P. 176 - 190, Published: May 10, 2024
In response to the mass adoption and extensive usage of Internet-enabled devices across the world, a major review published in this journal in 2019 examined the impact of the Internet on human cognition, discussing the concepts and ideas behind the "online brain". Since then, the online world has become further entwined with the fabric of society, and the extent to which we use such technologies has continued to grow. Furthermore, the research evidence on the ways in which the Internet affects the mind has advanced considerably. In this paper, we sought to draw upon the latest data from large-scale epidemiological studies and systematic reviews, along with randomized controlled trials and qualitative research recently emerging on this topic, in order to now provide a multi-dimensional overview of the impacts of the Internet across psychological, cognitive and societal outcomes. Within this, we detail the empirical evidence on how these effects differ according to various factors such as age, gender, and usage types. We also draw on new research examining the more experiential aspects of individuals' online lives, to understand how the specifics of their interactions with the Internet, and their lifestyle, determine the benefits or drawbacks of online time. Additionally, we explore nascent but intriguing areas such as culturomics, artificial intelligence, virtual reality, and augmented reality, which are changing our understanding of how the Internet can interact with the brain and behavior. Overall, the importance of taking an individualized approach to the Internet's effects on mental health, cognition and social functioning is clear. We emphasize the need for guidelines, policies and initiatives around Internet use to make full use of the evidence available at the neuroscientific, behavioral and societal levels presented herein.

JAMA Network Open, Journal Year: 2025, Volume and Issue: 8(2), P. e2457879 - e2457879, Published: Feb. 4, 2025
Importance: There is much interest in the clinical integration of large language models (LLMs) in health care. Many studies have assessed the ability of LLMs to provide health advice, but the quality of their reporting is uncertain. Objective: To perform a systematic review to examine the reporting variability among peer-reviewed studies evaluating the performance of generative artificial intelligence (AI)–driven chatbots for summarizing evidence and providing health advice, in order to inform the development of the Chatbot Assessment Reporting Tool (CHART). Evidence Review: A search of MEDLINE via Ovid, Embase via Elsevier, and Web of Science from inception to October 27, 2023, was conducted with the help of a health sciences librarian and yielded 7752 articles. Two reviewers screened articles by title and abstract, followed by full-text review, to identify primary studies evaluating the accuracy of AI-driven chatbots (chatbot studies). The reviewers then performed data extraction for the 137 eligible studies. Findings: A total of 137 studies were included. Studies examined topics in surgery (55 [40.1%]), medicine (51 [37.2%]), and primary care (13 [9.5%]). Studies focused on treatment (91 [66.4%]), diagnosis (60 [43.8%]), or disease prevention (29 [21.2%]). Most studies (136 [99.3%]) evaluated inaccessible, closed-source LLMs and did not provide enough information to identify the version of the LLM under evaluation. All studies lacked a sufficient description of LLM characteristics, including temperature, token length, fine-tuning availability, layers, and other details. Most studies did not describe the prompt engineering phase of their study. The date of LLM querying was reported in only 54 (39.4%) of the studies. Most studies (89 [65.0%]) used subjective means to define the successful performance of the chatbot, while less than one-third addressed the ethical, regulatory, and patient safety implications of the clinical integration of LLMs.
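To make concrete what such reporting entails, the sketch below shows how a chatbot study might record the model version, sampling parameters, prompt, and query date alongside each response. It is a minimal, assumption-laden illustration only (it assumes the OpenAI Python client and an illustrative "gpt-4" model name); it is not drawn from any of the reviewed studies nor from the CHART tool itself.

```python
# Minimal sketch (not from the reviewed studies): logging the LLM
# characteristics that CHART-style reporting would require alongside
# each chatbot response. Assumes the OpenAI Python client is installed
# and an API key is configured; model name and parameters are illustrative.
import json
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()

def query_and_log(prompt: str, model: str = "gpt-4",
                  temperature: float = 0.0, max_tokens: int = 512) -> dict:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,   # sampling temperature: report it
        max_tokens=max_tokens,     # token limit: report it
    )
    return {
        "model_version": response.model,  # exact model identifier returned by the API
        "temperature": temperature,
        "max_tokens": max_tokens,
        "prompt": prompt,                 # document the prompt engineering phase
        "query_date": datetime.now(timezone.utc).isoformat(),  # date of querying
        "answer": response.choices[0].message.content,
    }

if __name__ == "__main__":
    record = query_and_log("Summarize first-line treatments for mild depression.")
    print(json.dumps(record, indent=2))
```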
Conclusions and Relevance: In this systematic review of chatbot studies, reporting was heterogeneous and may be improved by the forthcoming CHART reporting standards. Ethical, regulatory, and patient safety considerations are crucial as clinical interest in LLMs and chatbots grows.

JMIR Medical Education, Journal Year: 2023, Volume and Issue: 10, P. e51523 - e51523, Published: Oct. 30, 2023
Background: Large language models (LLMs) have revolutionized natural language processing with their ability to generate human-like text through extensive training on large data sets. These models, including Generative Pre-trained Transformers (GPT)-3.5 (OpenAI), GPT-4 (OpenAI), and Bard (Google LLC), find applications beyond natural language processing, attracting interest from both academia and industry. Students are actively leveraging LLMs to enhance their learning experiences and prepare for high-stakes exams, such as the National Eligibility cum Entrance Test (NEET) in India. Objective: This comparative analysis aims to evaluate the performance of GPT-3.5, GPT-4, and Bard in answering NEET-2023 questions. Methods: In this paper, we evaluated 3 mainstream LLMs, namely GPT-3.5, GPT-4, and Google Bard, on questions from the NEET-2023 exam. The NEET questions were provided to these artificial intelligence models, and their responses were recorded and compared against the correct answers from the official answer key. Consensus was used to measure agreement across all the models. Results: It was evident that GPT-4 passed the entrance test with flying colors (300/700, 42.9%), showcasing exceptional performance. On the other hand, GPT-3.5 managed to meet the qualifying criteria, but with a substantially lower score (145/700, 20.7%). However, Bard (115/700, 16.4%) failed to meet the criteria and did not pass the test. GPT-4 demonstrated consistent superiority over the other models across subjects. Specifically, GPT-4 achieved accuracy rates of 73% (29/40) in physics, 44% (16/36) in chemistry, and 51% (50/99) in biology. Conversely, GPT-3.5 attained an accuracy rate of 45% (18/40) in physics, 33% (13/26) in chemistry, and 34% (34/99) in biology. The consensus metric showed that the pairs of models including GPT-4 had higher incidences of their matching responses being correct, at 0.56 and 0.57, respectively, whereas the pairing of GPT-3.5 and Bard stood at 0.42. When all three models were considered together, the consensus reached its highest value of 0.59.
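As a rough illustration of how per-subject accuracy and a pairwise consensus metric of this kind could be computed, the following sketch works through invented answer vectors. The data are hypothetical and chosen only for demonstration; they do not reproduce the study's actual responses or its exact consensus definition.

```python
# Illustrative sketch only: computing accuracy and a pairwise "consensus"
# score (fraction of questions where two models give the same answer and
# that shared answer is correct) from hypothetical answer lists.
from itertools import combinations

def accuracy(answers: list[str], key: list[str]) -> float:
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def consensus_correct(a: list[str], b: list[str], key: list[str]) -> float:
    return sum(x == y == k for x, y, k in zip(a, b, key)) / len(key)

# Hypothetical data: 10 multiple-choice questions with options A-D.
key   = list("ABCDABCDAB")
gpt4  = list("ABCDABCDCC")   # toy answers, not the study's data
gpt35 = list("ABCDDBCAAB")
bard  = list("ABDDABCACB")

models = {"GPT-4": gpt4, "GPT-3.5": gpt35, "Bard": bard}

for name, answers in models.items():
    print(f"{name} accuracy: {accuracy(answers, key):.2f}")

for (n1, a1), (n2, a2) in combinations(models.items(), 2):
    print(f"Consensus ({n1} & {n2}): {consensus_correct(a1, a2, key):.2f}")
```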
Conclusions: The study's findings provide valuable insights into the performance of these LLMs on the NEET-2023 exam. GPT-4 emerged as the most accurate model, highlighting its potential for educational applications. Cross-checking responses across models may result in confusion, as the models (whether as duos or as a trio) tend to agree on only a little more than half of the responses. Using one model alone will therefore not provide a reliable consensus. The results underscore the suitability of LLMs for high-stakes exams and their potential positive impact on education. Additionally, the study establishes a benchmark for evaluating and enhancing LLMs' performance in educational tasks, promoting their responsible and informed use in diverse learning environments.

Frontiers in Artificial Intelligence, Journal Year: 2024, Volume and Issue: 7, Published: June 18, 2024
The release of GPT-4 has garnered widespread attention across various fields, signaling the impending adoption and application of Large Language Models (LLMs). However, previous research has predominantly focused on the technical principles of ChatGPT and its social impact, overlooking its effects on human–computer interaction and user psychology. This paper explores the multifaceted impacts of ChatGPT on human–computer interaction, psychology, and society through a literature review. The author investigates ChatGPT's technical foundation, including the Transformer architecture and the RLHF (Reinforcement Learning from Human Feedback) process, which enable it to generate human-like responses.
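For readers unfamiliar with RLHF, the fragment below sketches the core idea behind its reward-modeling step: a reward model is trained on pairs of responses ranked by human annotators so that preferred responses score higher. This is a simplified, assumption-laden illustration using dummy scores and PyTorch, not the actual training pipeline behind ChatGPT.

```python
# Simplified illustration of the pairwise preference loss used when
# training an RLHF reward model: the model should assign a higher score
# to the human-preferred ("chosen") response than to the "rejected" one.
# Dummy tensors stand in for reward-model outputs.
import torch
import torch.nn.functional as F

# Hypothetical reward-model scores for a batch of 4 comparison pairs.
reward_chosen = torch.tensor([1.2, 0.3, 0.8, -0.1])
reward_rejected = torch.tensor([0.4, 0.5, -0.2, -0.9])

# Bradley-Terry style loss: maximize the probability that the chosen
# response outranks the rejected one.
loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
print(f"preference loss: {loss.item():.4f}")
```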
In terms of human–computer interaction, the paper examines the significant improvements GPT models bring to conversational interfaces. The analysis extends to psychological impacts, weighing ChatGPT's potential to mimic human empathy and support learning against the risks of reduced interpersonal connections. In commercial domains, the paper discusses ChatGPT's applications in customer service and related services, highlighting efficiency gains alongside challenges such as privacy issues. Finally, the author offers predictions and recommendations for future development directions and for ChatGPT's impact on human–computer relationships.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e57400 - e57400, Published: Sept. 3, 2024
Background: Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate. Objective: This systematic review aims to critically assess the use of LLMs in mental health, specifically focusing on their applicability and efficacy in early screening, digital interventions, and clinical settings. By systematically collating and assessing the evidence from current studies, our work analyzes models, methodologies, data sources, and outcomes, thereby highlighting the challenges present and the prospects for their use. Methods: Adhering to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, this review searched 5 open-access databases: MEDLINE (accessed via PubMed), IEEE Xplore, Scopus, JMIR, and ACM Digital Library. The keywords used were (mental health OR mental illness OR mental disorder OR psychiatry) AND (large language models). The study included articles published between January 1, 2017, and April 30, 2024, and excluded articles in languages other than English.
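As an illustration of how such a keyword- and date-restricted search could be reproduced programmatically against one of the listed databases, the sketch below queries PubMed through the NCBI E-utilities esearch endpoint. The query string and date window mirror the review's stated criteria, but the code itself is a hypothetical convenience, not the authors' actual search procedure, and the other four databases require their own interfaces.

```python
# Hypothetical reproduction of the review's PubMed search via the NCBI
# E-utilities esearch endpoint; query terms and date range follow the
# stated inclusion criteria.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": "(mental health OR mental illness OR mental disorder OR psychiatry) "
            "AND (large language models)",
    "datetype": "pdat",        # restrict by publication date
    "mindate": "2017/01/01",
    "maxdate": "2024/04/30",
    "retmax": 200,             # number of PMIDs to return per request
    "retmode": "json",
}

resp = requests.get(ESEARCH_URL, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print(f"Records found: {result['count']}")
print("First PMIDs:", result["idlist"][:10])
```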
Results: In total, 40 articles were evaluated, including 15 (38%) on mental health conditions and suicidal ideation detection through text analysis, 7 (18%) on the use of LLMs as mental health conversational agents, and 18 (45%) on other applications and evaluations of LLMs in mental health. LLMs show good effectiveness in detecting mental health issues and providing accessible, destigmatized eHealth services. However, the assessments also indicate that the risks associated with LLMs might surpass their benefits. These risks include inconsistencies in generated text; the production of hallucinations; and the absence of a comprehensive, benchmarked ethical framework. Conclusions: This systematic review examines the uses of LLMs in mental health as well as their inherent risks. The review identifies several issues: the lack of multilingual datasets annotated by experts, concerns regarding the accuracy and reliability of generated content, limited interpretability due to the "black box" nature of LLMs, and ongoing ethical dilemmas. These include the absence of a clear, benchmarked ethical framework; data privacy issues; and the risk of overreliance on LLMs by both physicians and patients, which could compromise traditional medical practices. As a result, LLMs should not be considered substitutes for professional mental health services. However, their rapid development underscores their potential as valuable aids, emphasizing the need for continued research and development in this area. Trial Registration: PROSPERO CRD42024508617; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=508617

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e56569 - e56569, Published: April 27, 2024
Abstract
Large language model (LLM)–powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the case of patients with depression: the tendency to humanize LLM-enhanced CAI and its lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that addressing humanization hinges on a reflection on what it means to simulate "human-like" features with LLMs and on the role these systems should play in interactions with humans. Further, ensuring contextualization and robustness requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: Jan. 7, 2025
To explore the attitudes of healthcare professionals and the public on applying ChatGPT in clinical practice. The successful application of ChatGPT in clinical practice depends not only on its technical performance but also, critically, on the perceptions of both healthcare professionals and the non-healthcare public. This study has a qualitative design based on artificial intelligence. The study was divided into five steps: data collection, data cleaning, validation of relevance, sentiment analysis, and content analysis using the K-means algorithm. The data comprised 3130 comments amounting to 1,593,650 words. The dictionary-based sentiment method identified both positive and negative emotions, such as anger, disgust, fear, sadness, surprise, good, and happy emotions.
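A minimal sketch of what the last two analysis steps might look like in practice is given below: a dictionary (lexicon) emotion count followed by K-means clustering of TF-IDF vectors. The tiny lexicon and example comments are invented for illustration; the study's actual dictionaries, preprocessing, and cluster settings are not described here.

```python
# Illustrative sketch (invented data and lexicon): a dictionary-based
# emotion count followed by K-means content clustering of TF-IDF vectors,
# loosely mirroring steps 4 and 5 of the pipeline described above.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "ChatGPT gave me helpful and reassuring answers about my symptoms",
    "I worry about privacy and who can see my health data",
    "The responses felt generic and sometimes plain wrong",
    "It is available at 3am when no clinician is, which is a relief",
]

# Toy emotion lexicon; a real study would use a validated dictionary.
lexicon = {
    "helpful": "good", "reassuring": "happy", "relief": "happy",
    "worry": "fear", "wrong": "disgust",
}

emotion_counts = Counter(
    lexicon[word]
    for comment in comments
    for word in comment.lower().split()
    if word in lexicon
)
print("Emotion counts:", dict(emotion_counts))

# Content analysis: cluster comments into themes with K-means on TF-IDF.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for label, comment in zip(labels, comments):
    print(f"cluster {label}: {comment}")
```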
Healthcare professionals prioritized ChatGPT's efficiency but raised ethical and accountability concerns, while the public valued its accessibility and emotional support but expressed worries about privacy and misinformation. Bridging these perspectives by improving reliability, safeguarding privacy, and clearly defining ChatGPT's role is essential for its practical integration into clinical practice.