Internet Interventions,
Journal year: 2023, Issue 34, P. 100668 - 100668
Published: Sep. 9, 2023
Lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ+) youth are at higher risk of isolation and depression than their heterosexual peers. Having access to tailored mental health resources is a documented concern for rural-living LGBTQ+ youth.
Social media provides connections to a broader community of like-minded peers, but it is also a vehicle for negative interactions. We developed REALbot, an automated, social media-based educational intervention to improve self-efficacy, reduce perceived isolation, and bolster access to tailored resources for rural LGBTQ+ youth. This report presents data on the acceptability, feasibility, and utility of REALbot among its target audience of rural LGBTQ+ youth. We conducted a week-long exploratory study with a single, non-comparison group of 20 rural-living youth aged 14-19, recruited to test our Facebook- and Instagram-delivered chatbot.
We assessed pre- and post-test scores for self-efficacy (4-item Patient-Reported Outcomes Measurement Information System - PROMIS) and depressive symptoms (Patient Health Questionnaire, Adolescent Version - PHQ-A). At post-test, we assessed acceptability (User Experience Questionnaire - Short, UEQ-S), usability (Chatbot Usability Questionnaire - CUQ; Post-Study System Usability Questionnaire - PSSUQ), and satisfaction with the chatbot (Client Satisfaction Questionnaire - CSQ), along with two open-ended questions on 'likes' and 'dislikes' about the intervention.
Pre- and post-test scores were compared using standard univariate statistics. Means and standard deviations were calculated for acceptability, usability, and satisfaction. To analyze responses to the open-ended questions, we used a content analysis approach. Acceptability was high, with a mean UEQ-S score of 5.3 out of 7 (SD = 1.1). Usability received good ratings on the CUQ and PSSUQ (mean score (M) = 78.0, SD = 14.5 and M = 86.9, SD = 25.2, respectively), as did user satisfaction on the CSQ (M = 24.9, SD = 5.4).
Themes related to likes and dislikes were organized into main categories based on the feedback provided. Participants engaged with the chatbot, sending an average of 49.3 messages (SD = 43.6, median = 30).
Pre-/post-test changes in self-efficacy were not statistically significant (p's > 0.08). REALbot deployment was found to be feasible and acceptable, with good usability scores. Improvements were reported in most outcomes of interest, with small to medium effect sizes. Additional development and formal evaluation of feasibility, engagement, and behavioral targets are warranted.
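To make the pre-/post-test comparison concrete, here is a minimal sketch of a paired t test on hypothetical pre- and post-intervention self-efficacy scores. The numbers, the scipy-based implementation, and the choice of a paired t test are illustrative assumptions; the abstract specifies only "standard univariate statistics".

from scipy import stats

# Hypothetical pre- and post-test self-efficacy scores for the same 10 participants
pre  = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
post = [13, 16, 12, 15, 13, 17, 14, 15, 15, 14]

# Paired (dependent-samples) t test on the pre/post change
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")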
npj Digital Medicine,
Journal year: 2023, Issue 6(1)
Published: Dec. 19, 2023
Abstract
Conversational artificial intelligence (AI), particularly AI-based conversational agents (CAs), is gaining traction in mental health care. Despite their growing usage, there is a scarcity of comprehensive evaluations of their impact on mental health and well-being. This systematic review and meta-analysis aims to fill this gap by synthesizing evidence on the effectiveness of CAs in improving mental health and on the factors influencing user experience.
Twelve databases were searched for experimental studies of CAs' effects on mental illnesses and psychological well-being published before May 26, 2023. Out of 7834 records, 35 eligible studies were identified for the review, out of which 15 randomized controlled trials were included in the meta-analysis.
The meta-analysis revealed that CAs significantly reduce symptoms of depression (Hedges' g = 0.64 [95% CI 0.17–1.12]) and distress (Hedges' g = 0.7 [95% CI 0.18–1.22]). These effects were more pronounced in CAs that are multimodal, generative AI-based, integrated with mobile/instant messaging apps, and targeting clinical/subclinical and elderly populations. However, CA-based interventions showed no significant improvement in overall psychological well-being (Hedges' g = 0.32 [95% CI –0.13 to 0.78]).
User experience was largely shaped by the quality of human-AI therapeutic relationships, content engagement, and effective communication. The findings underscore the potential of CAs in addressing mental health issues. Future research should investigate the underlying mechanisms of their effectiveness, assess long-term effects across various mental health outcomes, and evaluate the safe integration of large language models (LLMs) into mental health care.
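As a quick reference for the effect-size metric used above, the sketch below computes Hedges' g (a standardized mean difference with a small-sample correction) for a hypothetical treatment-versus-control comparison. The input values are invented for illustration and are not taken from the review.

import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across treatment and control groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled    # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)    # small-sample correction factor
    return d * j

# Hypothetical reduction in depression scores: treatment vs. control
print(round(hedges_g(mean_t=6.1, sd_t=4.0, n_t=50, mean_c=3.5, sd_c=4.2, n_c=50), 2))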
Journal of Medical Internet Research,
Journal year: 2023, Issue 25, P. e43862 - e43862
Published: March 10, 2023
Mental health problems are a crucial global public health concern. Owing to their cost-effectiveness and accessibility, conversational agent interventions (CAIs) are promising in the field of mental health care.
Journal of Medical Internet Research,
Journal year: 2024, Issue 26, P. e56930 - e56930
Published: April 12, 2024
Background
Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field.
Objective
This review aims to describe the characteristics of health care chatbots, focusing on their diverse roles in the care pathway, user groups, benefits, and limitations.
Methods
A rapid review of published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary studies reporting chatbot benefits were included. Two reviewers dual-screened the search results. Extracted data were subjected to content analysis.
Results
The chatbot roles were categorized into 2 themes: the delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion; and the provision of administrative assistance to health care providers. User groups spanned across patients with chronic conditions as well as cancer; individuals focused on lifestyle improvements; and demographic groups such as women, families, and older adults. Professionals and students were also users, alongside individuals seeking mental health support, behavioral change, and educational enhancement. Chatbot benefits were classified as improvements in the quality, efficiency, and cost-effectiveness of health care delivery. The limitations identified encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts.
Conclusions
Health care chatbots offer a wide spectrum of applications, potentially impacting many aspects of care. While they are promising for improving care quality, their integration into the health care system must be approached with careful consideration to ensure optimal, safe, and equitable use.
Turkish Journal of Emergency Medicine,
Journal year: 2023, Issue 23(3), P. 156 - 161
Published: June 22, 2023
OBJECTIVES: Artificial intelligence companies have been increasing their initiatives recently to improve the results of chatbots, which are software programs that can converse with a human in natural language. The role of chatbots in health care is deemed worthy of research. OpenAI's ChatGPT is a supervised and empowered machine learning-based chatbot. The aim of this study was to determine the performance of ChatGPT in emergency medicine (EM) triage prediction.
METHODS: This preliminary, cross-sectional study was conducted with case scenarios generated by the researchers based on the Emergency Severity Index (ESI) handbook v4 cases. Two independent EM specialists who were experts on the ESI scale determined the triage categories for each case. A third EM specialist was consulted as an arbiter, if necessary. The consensus for each scenario was assumed to be the reference category. Subsequently, ChatGPT was queried and its answer was recorded. Inconsistent classifications between ChatGPT and the reference category were defined as over-triage (false positive) or under-triage (false negative).
RESULTS: Fifty case scenarios were assessed in the study. Reliability analysis showed fair agreement (Cohen's Kappa: 0.341). Eleven cases (22%) were over-triaged and 9 (18%) were under-triaged by ChatGPT. In 9 cases (18%), ChatGPT reported two consecutive triage categories, one of which matched the expert consensus. It had an overall sensitivity of 57.1% (95% confidence interval [CI]: 34–78.2), specificity of 34.5% (95% CI: 17.9–54.3), positive predictive value (PPV) of 38.7% (95% CI: 21.8–57.8), negative predictive value (NPV) of 52.6% (95% CI: 28.9–75.6), and an F1 score of 0.461. For high-acuity cases (ESI-1 and ESI-2), it had a sensitivity of 76.2% (95% CI: 52.8–91.8), specificity of 93.1% (95% CI: 77.2–99.2), PPV of 88.9% (95% CI: 65.3–98.6), NPV of 84.4% (95% CI: 67.2–94.7), and an F1 score of 0.821. The area under the receiver operating characteristic curve was 0.846 (95% CI: 0.724–0.969, P < 0.001).
CONCLUSION: ChatGPT performed best when predicting high-acuity cases (ESI-1 and ESI-2). It may be useful in determining cases requiring critical care. When trained with more medical knowledge, it may provide more accurate predictions for other triage categories.
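For readers less familiar with the classification metrics reported above, the sketch below computes sensitivity, specificity, PPV, NPV, and the F1 score from 2x2 confusion-matrix counts. The counts are illustrative (chosen so the outputs land near the high-acuity figures quoted above) and are not the study's raw data.

def binary_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1}

# Illustrative counts for a high-acuity (positive) vs. lower-acuity (negative) split
print(binary_metrics(tp=16, fp=2, fn=5, tn=27))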
Journal of Medical Internet Research,
Journal year: 2023, Issue 25, P. e51712 - e51712
Published: Sep. 29, 2023
Artificial intelligence chatbot research has focused on technical advances in natural language processing and on validating the effectiveness of human-machine conversations in specific settings. However, real-world chat data remain proprietary and unexplored despite their growing popularity, and new analyses of chatbot uses and their effects in mitigating negative moods are urgently needed. In this study, we investigated whether and how artificial intelligence chatbots facilitate the expression of user emotions, specifically sadness and depression. We also examined cultural differences in the expression of depressive moods among users in Western and Eastern countries. This study used SimSimi, a global open-domain social chatbot, to analyze 152,783 conversation utterances containing the terms "depress" and "sad" in 3 Western countries (Canada, the United Kingdom, and the United States) and 5 Eastern countries (Indonesia, India, Malaysia, the Philippines, and Thailand).
Study 1 reports findings on how people talk about depression with the chatbot, based on Linguistic Inquiry and Word Count (LIWC) and n-gram analyses. In Study 2, utterances were classified into predefined topics using semisupervised classification techniques to better understand the types of depression-related topics prevalent in the chats. We then identified the distinguishing features of chat-based depressive discourse and the disparity between Western and Eastern users. Our analysis revealed intriguing cultural differences.
Chatbot users in the Western and Eastern groups differed in the strength of the emotions they expressed (positive: P<.001; negative: P=.01), for example, in the use of words associated with sadness (P=.01). One group was more likely to share vulnerable topics such as mental health (P<.001), while the other group had a greater tendency to discuss sensitive topics, including swear words (P<.001) and death (P<.001).
In addition, when talking to chatbots, users expressed their depressive moods differently than on other platforms. Users were more open to expressing emotional vulnerability related to depressed or sad moods with the chatbot (74,045/148,590, 49.83%) than on social media (149/1978, 7.53%). However, they tended not to broach topics that require support from others, such as seeking advice on daily life difficulties, unlike on social media.
Users acted in anticipation that conversational agents would exhibit active listening skills and foster a safe space where they could openly share emotional states such as depression. The findings highlight the potential of chatbot-assisted mental health support, emphasizing the importance of continued policy-wise efforts to improve chatbot interactions for those in need of assistance. Our data indicate the possibility of chatbots providing helpful information about depressive moods, especially for those who have difficulty communicating with humans.
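The abstract mentions n-gram analyses of the chat logs; the sketch below shows a minimal bigram frequency count over a few invented utterances, assuming simple whitespace tokenization. It illustrates the general technique only and is not the authors' actual pipeline.

from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    """Yield successive n-grams (as tuples) from a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

# Invented utterances; the study analyzed real SimSimi chat logs
utterances = [
    "i feel so sad today",
    "i feel depressed and alone",
    "talking to you makes me feel better",
]

bigram_counts = Counter()
for utt in utterances:
    tokens = utt.lower().split()      # naive whitespace tokenization
    bigram_counts.update(ngrams(tokens, 2))

print(bigram_counts.most_common(3))   # e.g. [(('i', 'feel'), 2), ...]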
Frontiers in Psychiatry,
Journal year: 2023, Issue 14
Published: June 1, 2023
Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, triggers discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on our way to developing such systems in the future. To this end, we formulate and discuss three challenges central to this quest.
Firstly, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, assuming that psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Thirdly, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient in dealing only with relatively simple and well-delineated tasks. If this is the case, we should not expect fully fledged AI-based psychotherapy until so-called "general" or "human-like" AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensure well-balanced and steady progress on the path to AI-based psychotherapy.
npj Digital Medicine,
Journal year: 2024, Issue 7(1)
Published: March 19, 2024
Automated conversational agents (CAs) have emerged as a promising solution for mental health interventions among young people. Therefore, the objective of this scoping review is to examine the current state of research into fully automated, CA-mediated interventions for the emotional component of mental health among young people. Selected databases were searched in March 2023. Included studies were primary research, reporting on the development, feasibility/usability, or evaluation of a CA-based tool to improve the emotional component of mental health in the young population. Twenty-five studies were included (N = 1707).
Most applications were standalone prevention interventions targeting anxiety and depression. They were predominantly AI-based chatbots, using text as the main communication channel. Overall, the results showed that CA interventions for emotional problems are acceptable and engaging, with high usability. However, evidence of clinical efficacy is far less conclusive, since almost half of the studies reported no significant effect on the outcomes of interest. Based on these findings, it can be concluded that there is a pressing need to further develop existing CAs to increase their effectiveness, as well as to conduct more methodologically rigorous research in this area.
Applied Sciences,
Journal year: 2024, Issue 14(13), P. 5889 - 5889
Published: July 5, 2024
Mental health disorders are a leading cause of disability worldwide, and there is a global shortage of mental health professionals. AI chatbots have emerged as a potential solution, offering accessible and scalable interventions. This study aimed to conduct a scoping review to evaluate the effectiveness and feasibility of AI chatbots in treating mental health conditions. A literature search was conducted across multiple databases, including MEDLINE, Scopus, and PsycNet, as well as using AI-powered tools like Microsoft Copilot and Consensus. Relevant studies on chatbot interventions for mental health were selected based on predefined inclusion and exclusion criteria. Data extraction and quality assessment were performed independently by the reviewers.
The search yielded 15 eligible studies covering various application areas, such as support during COVID-19, specific conditions (e.g., depression, anxiety, substance use disorders), preventive care, health promotion, and usability assessments. The chatbots demonstrated benefits in improving emotional well-being, addressing mental health conditions, and facilitating behavior change. However, challenges related to usability, engagement, and integration with existing healthcare systems were identified. AI chatbots hold promise for mental health interventions, but their widespread adoption hinges on integration with existing healthcare systems. Enhancing personalization and context-specific adaptation will be key. Future research should focus on large-scale trials, optimal human–AI integration, and ethical and social implications.
Topoi,
Journal year: 2024, Issue 43(3), P. 795 - 807
Published: April 6, 2024
Abstract
Conversational Artificial Intelligence (CAI) systems (also known as AI "chatbots") are among the most promising examples of the use of technology in mental health care. With millions of users worldwide already, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not "digital therapists" and that using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental health care?
To answer these questions, we appeal to two well-established and widely discussed concepts: cognitive and affective artifacts. Cognitive artifacts are artificial devices that contribute functionally to the performance of a cognitive task. Affective artifacts are objects that have the capacity to alter their subjects' affective state. We argue that therapeutic CAIs are a kind of cognitive-affective artifact that contribute to positive therapeutic change by (i) simulating a (quasi-)therapeutic interaction, (ii) supporting the performance of therapeutic tasks, and (iii) altering the affective condition of their users. This sheds new light on why virtually all existing mental health CAIs implement principles and techniques of Cognitive Behavioral Therapy — a therapeutic orientation according to which affective and, ultimately, behavioral change is cognitively mediated. Simultaneously, it allows us to better conceptualize the potential and limitations of applying these technologies in therapy.