JMIR AI,
Journal Year:
2024,
Volume and Issue:
unknown
Published: Dec. 9, 2024
People with schizophrenia often present cognitive impairments that may hinder their ability to learn about their condition. Education platforms powered by Large Language Models (LLMs) have the potential to improve the accessibility of mental health information. However, the black-box nature of LLMs raises ethical and safety concerns regarding the controllability of chatbots. In particular, prompt-engineered chatbots may drift from their intended role as the conversation progresses and become more prone to hallucinations. To develop and evaluate a Critical Analysis Filter (CAF) system that ensures an LLM-powered chatbot reliably complies with its predefined instructions and scope while delivering validated information. As a proof-of-concept, we built an educational GPT-4-based chatbot that can dynamically access information from a manual written for people with schizophrenia and their caregivers. In the CAF, a team of LLM agents is used to critically analyze and refine the chatbot's responses and deliver real-time feedback to the chatbot. To assess the ability of the CAF to re-establish adherence to its instructions, we generated three conversations (by conversing with the chatbot with the CAF disabled) wherein the chatbot starts drifting towards various unintended roles. We used these checkpoint conversations to initialize automated conversations between the chatbot and adversarial chatbots designed to entice it away from its role. Conversations were repeatedly sampled with the CAF enabled and disabled, respectively. Three human raters independently rated each response according to criteria developed to measure the chatbot's integrity; specifically, its transparency (such as admitting when a statement lacks explicit support from its scripted sources) and its tendency to faithfully convey the information in the manual. In total, 36 responses (3 different conversations, 3 responses per checkpoint, and 4 queries per conversation) were rated for compliance. Activating the CAF resulted in a score considered acceptable (≥2) in 67.0% of responses, compared with only 8.7% when the CAF was deactivated. Although rigorous testing in realistic scenarios is needed, our results suggest that self-reflection mechanisms could enable LLMs to be used effectively and safely in mental health education platforms. This approach harnesses the flexibility of LLMs while constraining them to appropriate and accurate interactions.
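The critique-and-revise loop the abstract describes can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the `llm` function is a stub standing in for real GPT-4 API calls, and the agent prompts, scope string, and fallback message are invented for the sketch.

```python
def llm(prompt: str) -> str:
    """Stub standing in for a real LLM call (e.g., GPT-4 via a chat API)."""
    # A real system would send `prompt` to a chat-completion endpoint.
    if "CRITIQUE" in prompt:
        # Toy critique rule: pass only replies grounded in the manual.
        return "PASS" if "manual" in prompt.lower() else "FAIL: ground the reply in the manual"
    return "According to the manual, side effects should be discussed with your clinician."

def critical_analysis_filter(draft: str, scope: str, max_rounds: int = 3) -> str:
    """CAF-style loop: analyst agents critique the chatbot's draft reply
    against its predefined scope; the critique is fed back to the chatbot
    until a revised reply passes or the round budget is exhausted."""
    reply = draft
    for _ in range(max_rounds):
        verdict = llm(f"CRITIQUE within scope '{scope}': {reply}")
        if verdict.startswith("PASS"):
            return reply
        # Real-time feedback: the chatbot revises using the critique.
        reply = llm(f"REVISE per feedback '{verdict}': {reply}")
    return "I can only answer questions covered by the manual."  # safe fallback

filtered = critical_analysis_filter("Stopping medication is fine.",
                                    scope="schizophrenia education")
```

The key design idea is that the filter sits between the chatbot and the user, so an out-of-scope or ungrounded draft is never shown directly; only a reply that passes the analyst agents (or the safe fallback) reaches the user.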
Social anxiety (SA) has become increasingly prevalent. Traditional coping strategies often face accessibility challenges. Generative AI (GenAI) technologies, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. With the increased integration of GenAI, it is important to examine individuals' attitudes toward and trust in GenAI chatbots' support for SA. Through a mixed-method approach that involved surveys (n = 159) and interviews (n = 17), we found that individuals with severe SA symptoms tended to embrace chatbots more readily, valuing their non-judgmental nature and perceived emotional comprehension. However, those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as the chatbot's ability to generate empathetic responses and its context-sensitive limitations, which were particularly pronounced among individuals with severe symptoms. We also discuss design implications for the use of GenAI chatbots in fostering cognitive coping, along with practical considerations.
Frontiers in Digital Health,
Journal Year:
2025,
Volume and Issue:
7
Published: Feb. 4, 2025
Introduction
Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist.
Methods
To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' internal experiences (e.g., depression or hope), and DIVI (dialogic role-play-based externalization), which simulates conversations with aspects of the patient's internal content. These tools were implemented and evaluated in a clinical case under professional psychological guidance.
Results
The POC demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex psychological dynamics, offering novel therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases.
Discussion
The findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of therapeutic authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration. Future research should systematically evaluate generalizability and efficacy across diverse populations and contexts.
Frontiers in Psychology,
Journal Year:
2025,
Volume and Issue:
16
Published: March 20, 2025
Background
The rise of artificial intelligence (AI) is promising novel contributions to the treatment and prevention of mental ill health. While research on the use of conversational and embodied AI in psychotherapy practice is developing rapidly, it leaves gaps in understanding the impact that creative AI might have on art therapy (AT) specifically. A constructive dialogue between the disciplines is needed to establish the potential relevance of AI-based technologies for therapeutic practices involving artmaking and self-expression.
Methods
This integrative review set out to explore whether and how AI could enhance AT and other psychological interventions utilizing visual communication and/or artmaking. A transdisciplinary search strategy was developed to capture the latest research across diverse methodologies and stages of development, including reviews, opinion papers, prototype development and empirical studies.
Findings
Of over 550 records screened, 10 papers were included in this review. Their key characteristics are mapped in a matrix of the stakeholder groups involved, the elements belonging to the therapy domain, and the types of AI-based technologies involved. Themes of significance for AT are discussed, including cultural adaptability, inclusivity and accessibility, creativity and self-expression, and unpredictability and imperfection. A positioning diagram is proposed to describe the role of AI in AT. AI's role in the creative process oscillates along a spectrum from being a partner in co-creative activity to taking the role of a curator of personalized visuals with therapeutic intent. Another dimension indicates the level of autonomy, from a supportive tool to an autonomous agent. Examples of each of these situations are identified in the reviewed literature.
Conclusion
AI brings opportunities for new modes of self-expression and an extended reach of therapy, but over-reliance presents risks to the therapeutic process, including loss of agency for clients and therapists. Implications of the technology for the therapeutic relationship demand further investigation, as do its impacts, before its potential can be confirmed.
Advances in information security, privacy, and ethics book series,
Journal Year:
2025,
Volume and Issue:
unknown, P. 369 - 404
Published: Feb. 14, 2025
In the era of digitization, Artificial Intelligence (AI) integration in healthcare has become a necessity to ensure patient identification and privacy. With the rise of the digitalisation of health systems, it is also increasingly important to have more stringent data protection requirements. This transformation is heavily facilitated by AI-driven technologies that reinforce security, identify real-time threats, and streamline compliance with regulations. By utilizing Machine Learning (ML) algorithms to comb through big data, outliers can be pinpointed and filtered out so that unauthorized access is prevented, with the assistance of advanced forms of encryption that protect information while in transit or at rest. But the fast pace of AI development creates as many opportunities as challenges, especially when it comes to marrying security with data availability. These ethical concerns, and the need for effective regulatory frameworks, are critical in an evolving ecosystem of AI technologies.
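The outlier-pinpointing idea the chapter describes can be illustrated with a deliberately simple statistical rule. This is a hypothetical sketch, not the chapter's method: real deployments would use richer features and trained ML models, and the user IDs, counts, and z-score threshold here are invented for the example.

```python
from statistics import mean, stdev

def flag_outliers(access_counts: dict[str, int], threshold: float = 2.0) -> list[str]:
    """Flag users whose daily record-access count deviates more than
    `threshold` standard deviations from the cohort mean (z-score rule)."""
    counts = list(access_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return [user for user, n in access_counts.items()
            if sigma > 0 and abs(n - mu) / sigma > threshold]

# Invented example: typical staff access ~40 records/day; one account spikes.
logs = {"nurse_a": 42, "nurse_b": 39, "clerk_c": 41, "intruder_x": 900,
        "dr_d": 45, "dr_e": 38, "nurse_f": 40, "clerk_g": 43}
suspicious = flag_outliers(logs)  # only the anomalous account is flagged
```

A single extreme value inflates the standard deviation and can mask itself at stricter thresholds, which is one reason production systems favor robust statistics or model-based detectors (e.g., isolation forests) over a plain z-score.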
Research Square (Research Square),
Journal Year:
2025,
Volume and Issue:
unknown
Published: May 5, 2025
Abstract
Artificial intelligence (AI) technologies in mental healthcare offer promising opportunities to reduce therapists' burden and enhance care delivery, yet adoption remains challenging. This study identified key facilitators of and barriers to AI adoption in mental healthcare, more precisely in psychotherapy, by conducting six online focus groups with patients and therapists, using a semi-structured guide based on the NASSS (Nonadoption, Abandonment, Scale-up, Spread, Sustainability) framework. Data from N = 32 participants were analyzed using combined deductive and inductive thematic analysis. Across the framework's seven domains, 36 categories emerged. Sixteen were perceived as factors facilitating adoption, including useful technology elements, customization to user needs, and cost coverage. Eleven were perceived as barriers, encompassing lack of human contact, resource constraints, and dependency. A further nine, such as therapeutic approach and institutional differences, acted as facilitators or barriers depending on the context. Our findings highlight the complexity of AI adoption and emphasize the importance of addressing these factors early in the development of AI technologies.
IGI Global eBooks,
Journal Year:
2025,
Volume and Issue:
unknown, P. 123 - 150
Published: May 13, 2025
The study of brain activity has always fascinated researchers, clinicians, and innovators alike. Electroencephalography (EEG), as a widely used non-invasive technique, plays a critical role in capturing and analyzing the electrical patterns of the brain. It has historically provided invaluable insights into neurological disorders, cognitive functions, and overall brain health. However, traditional methods of EEG analysis have often faced limitations in precision, scalability, and speed. The emergence of Artificial Intelligence (AI) has revolutionized this field, enabling more advanced data interpretation and uncovering patterns that were previously inaccessible. This chapter delves into the intersection of EEG and AI, exploring the transformative impact of AI-driven EEG analysis on healthcare, neuroscience, and brain-computer interfaces (BCIs). It provides a structured exploration of foundational principles, recent advancements, practical applications, and future opportunities, while addressing the challenges inherent in integrating these two disciplines.