Editorial: Responsible Design, Integration, and Use of Generative AI in Mental Health (Preprint) DOI Creative Commons
Oren Asman, John Torous, Amir Tal, et al.

Published: Dec. 21, 2024


Language: English

The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy DOI Creative Commons
Yuval Haber, Inbar Levkovich, Dorit Hadar‐Shoval, et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e54781 - e54781

Published: April 18, 2024

This paper explores a significant shift in the field of mental health in general, and psychotherapy in particular, following generative artificial intelligence's new capabilities of processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in the therapeutic act with the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and interpersonal dynamics? (3) What remains irreplaceable at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that generative artificial intelligence can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.

Language: English

Citations

21

An Ethical Perspective on the Democratization of Mental Health With Generative AI DOI Creative Commons
Zohar Elyoseph, Tamar Gur, Yuval Haber, et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e58011 - e58011

Published: July 24, 2024

Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates the ethical considerations of using generative artificial intelligence (GenAI) in the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why GenAI technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced deliberation. Including GenAI may allow, among other things, improved accessibility to care, personalized responses, and conceptual flexibility, and could facilitate a flattening of the traditional hierarchies between care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool weighs both benefits and risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive stance, advocating active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that these advancements are not only technologically sound but also ethically grounded and patient-centered.

Language: English

Citations

9

Editorial: Responsible Design, Integration, and Use of Generative AI in Mental Health (Preprint) DOI Creative Commons
Oren Asman, John Torous, Amir Tal, et al.

JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e70439 - e70439

Published: Jan. 6, 2025

Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires ethical consideration, deliberation, and perhaps governance. This is the first published theme issue focused on GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence–assisted care. By proposing conceptual frameworks, best practices, and regulatory approaches, including an ethics of care and the preservation of socially important humanistic elements, this theme issue underscores that GenAI can complement, rather than replace, the vital human role in clinical settings. To achieve this, ongoing collaboration between researchers, clinicians, policy makers, and technologists is essential.

Language: English

Citations

1

The promise and pitfalls of generative AI DOI
Monojit Choudhury, Zohar Elyoseph, Nathanael J. Fast, et al.

Nature Reviews Psychology, Journal Year: 2025, Volume and Issue: unknown

Published: Jan. 15, 2025

Language: English

Citations

1

The impact of history of depression and access to weapons on suicide risk assessment: a comparison of ChatGPT-3.5 and ChatGPT-4 DOI Creative Commons
Shiri Shinan‐Altman, Zohar Elyoseph, Inbar Levkovich, et al.

PeerJ, Journal Year: 2024, Volume and Issue: 12, P. e17468 - e17468

Published: May 29, 2024

The aim of this study was to evaluate the effectiveness of ChatGPT-3.5 and ChatGPT-4 in incorporating critical risk factors, namely a history of depression and access to weapons, into suicide risk assessments. Both models were assessed using scenarios that featured individuals with and without a history of depression and access to weapons. The models estimated the likelihood of suicidal thoughts, suicide attempts, serious attempts, and suicide-related mortality on a Likert scale. A multivariate three-way ANOVA with Bonferroni post hoc tests was conducted to examine the impact of the aforementioned independent factors (history of depression and access to weapons) on these outcome variables. Both models identified history of depression as a significant risk factor. ChatGPT-4 demonstrated a more nuanced understanding of the relationship between depression, access to weapons, and suicide risk. In contrast, ChatGPT-3.5 displayed limited insight into this complex relationship. ChatGPT-4 consistently assigned higher severity ratings to all outcome variables than did ChatGPT-3.5. The study highlights the potential of the two models, particularly ChatGPT-4, to enhance suicide risk assessment by considering complex risk factors.
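The study design described above crosses two binary factors, giving four vignette conditions. A minimal Python sketch of that 2 (history of depression) × 2 (access to weapons) layout, with a simple main-effect contrast on placeholder Likert ratings (the rating values and model names here are invented for illustration, not data from the study):

```python
from itertools import product
from statistics import mean

# 2 x 2 factorial design: each vignette combines one level of each factor
factors = {
    "depression": ["history", "no history"],
    "weapons": ["access", "no access"],
}
conditions = list(product(factors["depression"], factors["weapons"]))
assert len(conditions) == 4

# Placeholder Likert severity ratings (1-5), one per condition, for two
# hypothetical raters -- NOT the ratings reported in the study
ratings = {
    "model_a": [4, 3, 3, 1],
    "model_b": [5, 4, 4, 2],
}

def main_effect(scores):
    """Mean rating difference: depression history vs. no history."""
    with_hist = [s for (dep, _), s in zip(conditions, scores) if dep == "history"]
    without = [s for (dep, _), s in zip(conditions, scores) if dep == "no history"]
    return mean(with_hist) - mean(without)
```

In the actual study this contrast (and the interactions) would be tested with a three-way ANOVA rather than computed descriptively as here.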

Language: English

Citations

4

The Feasibility of Large Language Models in Verbal Comprehension Assessment: A Proof-of-Concept Study (Preprint) DOI Creative Commons
Dorit Hadar‐Shoval, Maya Lvovsky, Kfir Asraf, et al.

JMIR Formative Research, Journal Year: 2025, Volume and Issue: 9, P. e68347 - e68347

Published: Jan. 6, 2025

Cognitive assessment is an important component of applied psychology, but limited access and high costs make these evaluations challenging. This study aimed to examine the feasibility of using large language models (LLMs) to create personalized artificial intelligence-based verbal comprehension tests (AI-BVCTs) for assessing verbal intelligence, in contrast with traditional methods based on standardized norms. We used a within-participants design, comparing scores obtained from AI-BVCTs with those from the Wechsler Adult Intelligence Scale (WAIS-III) verbal comprehension index (VCI). In total, 8 Hebrew-speaking participants completed both the VCI and an AI-BVCT, the latter generated by the LLM Claude. The concordance correlation coefficient (CCC) demonstrated strong agreement between AI-BVCT and VCI scores (Claude: CCC=.75, 90% CI 0.266-0.933; GPT-4: CCC=.73, 90% CI 0.170-0.935). Pearson correlations further supported these findings, showing strong associations (Claude: r=.84, P<.001; GPT-4: r=.77, P=.02). No statistically significant differences between AI-BVCT and VCI scores were found (P>.05). These findings support the potential of LLMs to assess verbal intelligence. The study attests to the promise of AI-based cognitive tests in increasing the accessibility and affordability of assessment processes, enabling personalized testing. The research also raises ethical concerns regarding privacy and overreliance on AI in clinical work. Further research with larger and more diverse samples is needed to establish the validity and reliability of this approach and to develop more accurate scoring procedures.
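The agreement statistic reported above, the concordance correlation coefficient, can be computed directly from paired score lists. A minimal pure-Python sketch of Lin's CCC (the function name and example values are illustrative, not taken from the study):

```python
from statistics import mean

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired score lists.

    Combines precision (correlation) and accuracy (closeness to the 45-degree
    line): 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using biased (1/n) variance and covariance as in Lin (1989).
    """
    n = len(x)
    mx, my = mean(x), mean(y)
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike the Pearson r also reported in the abstract, the CCC is penalized when one test systematically scores higher than the other, which is why it is the preferred agreement measure when comparing a new instrument against an established one.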

Language: English

Citations

0

The externalization of internal experiences in psychotherapy through generative artificial intelligence: a theoretical, clinical, and ethical analysis DOI Creative Commons
Yuval Haber, Dorit Hadar Shoval, Inbar Levkovich, et al.

Frontiers in Digital Health, Journal Year: 2025, Volume and Issue: 7

Published: Feb. 4, 2025

Introduction: Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core psychotherapy practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist. Methods: To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' internal experiences (e.g., depression or hope), and DIVI (dialogic role-play-based externalization), which simulates conversations with aspects of patients' internal content. These tools were implemented and evaluated in a clinical case under professional psychological guidance. Results: The study demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex dynamics, offering new therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases. Discussion: The findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of clinical authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration. Future research should systematically evaluate generalizability and efficacy across diverse populations and contexts.

Language: English

Citations

0

Value Promotion Scheme Elicitation Using Natural Language Processing: A Model for Value-Based Agent Architecture DOI

Sara García-Rodríguez, Marcelo Karanik, Alicia Pina-Zapata, et al.

Lecture notes in computer science, Journal Year: 2025, Volume and Issue: unknown, P. 104 - 120

Published: Jan. 1, 2025

Language: English

Citations

0

What does AI consider praiseworthy? DOI
Andrew Jerel Peterson

AI and Ethics, Journal Year: 2025, Volume and Issue: unknown

Published: March 21, 2025

Language: English

Citations

0

Embedded values-like shape ethical reasoning of large language models on primary care ethical dilemmas DOI Creative Commons
Dorit Hadar‐Shoval, Kfir Asraf, Shiri Shinan‐Altman, et al.

Heliyon, Journal Year: 2024, Volume and Issue: 10(18), P. e38056 - e38056

Published: Sept. 1, 2024

Language: English

Citations

2