Is the Algorithm Good in a Bad World, or Has It Learned to be Bad? The Ethical Challenges of “Locked” Versus “Continuously Learning” and “Autonomous” Versus “Assistive” AI Tools in Healthcare
Alaa Youssef, Michael D. Abràmoff, Danton Char

et al.

The American Journal of Bioethics, Journal Year: 2023, Volume and Issue: 23(5), P. 43 - 45

Published: May 2, 2023

This article refers to: Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?

Funding: Funding for this commentary was provided by a Stanford Human-Centered Seed Grant.

Language: English

Considering the Role of Human Empathy in AI-Driven Therapy
Matan Rubin, Hadar Arnon, Jonathan D. Huppert

et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e56529 - e56529

Published: April 23, 2024

Recent breakthroughs in artificial intelligence (AI) language models have elevated the vision of using conversational AI to support mental health, with a growing body of literature indicating varying degrees of efficacy. In this paper, we ask when, in therapy, it will be easier to replace humans and, conversely, in what instances human connection will still be more valued. We suggest that empathy lies at the heart of the answer to this question. First, we define different aspects of empathy and outline the potential empathic capabilities of humans versus AI. Next, we consider what determines when these capabilities are needed most, both from the perspective of therapeutic methodology and of patient objectives. Ultimately, our goal is to prompt further investigation and dialogue, urging practitioners and scholars engaged in AI-mediated therapy to keep these questions and considerations in mind when investigating AI implementation in mental health.

Language: English

Citations: 15

The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis
Andrea Ferrario, Jana Sedláková, Manuel Trachsel

et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e56569 - e56569

Published: April 27, 2024

Abstract: Large language model (LLM)–powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the case of patients with depression: the tendency to humanize these systems and their lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that humanization hinges on reflection about what it means to simulate “human-like” features with LLMs and what role these systems should play in interactions with humans. Further, ensuring contextualized robustness requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.

Language: English

Citations: 9

The ethics of personalised digital duplicates: a minimally viable permissibility principle
John Danaher, Sven Nyholm

AI and Ethics, Journal Year: 2024, Volume and Issue: unknown

Published: July 15, 2024

Abstract: With recent technological advances, it is possible to create personalised digital duplicates. These are partial, at least semi-autonomous, recreations of real people in digital form. Should such duplicates be created? When can they be used? This article develops a general framework for thinking about the ethics of digital duplicates. It starts by clarifying the object of inquiry, digital duplicates themselves, by defining them, giving examples, and justifying the focus on them rather than other kinds of artificial being. It then identifies a set of generic harms and benefits associated with digital duplicates and uses this as the basis for formulating a minimally viable permissibility principle (MVPP) that stipulates widely agreeable conditions that should be met in order for the creation and use of digital duplicates to be ethically permissible. It concludes by assessing whether those conditions can be met in practice, making their creation and use more or less permissible.

Language: English

Citations: 9

Regulating AI in Mental Health: Ethics of Care Perspective
Tamar Tavory

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e58493 - e58493

Published: July 20, 2024

This article contends that the responsible artificial intelligence (AI) approach, which is the dominant ethics approach ruling most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of the accountability and responsibility of companies developing AI. The article proposes that applying an ethics of care approach to AI regulation can offer a more comprehensive framework that addresses AI's impact on human relationships. This dual approach is essential for effective regulation in the domain of mental health care. The article delves into the emergence of a new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. It highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing an ethics of care approach can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations, grounded in the ethics of care, for the developmental process of AI-powered therapeutic tools.

Language: English

Citations: 9

Introducing CounseLLMe: A dataset of simulated mental health dialogues for comparing LLMs like Haiku, LLaMAntino and ChatGPT against humans
Edoardo Sebastiano De Duro, Riccardo Improta, Massimo Stella

et al.

Emerging Trends in Drugs, Addictions, and Health, Journal Year: 2025, Volume and Issue: unknown, P. 100170 - 100170

Published: Jan. 1, 2025

Language: English

Citations: 1

The Integration of Artificial Intelligence-Powered Psychotherapy Chatbots in Pediatric Care: Scaffold or Substitute?
Bryanna Moore, Jonathan Herington, Şerife Tekin

et al.

The Journal of Pediatrics, Journal Year: 2025, Volume and Issue: unknown, P. 114509 - 114509

Published: Feb. 1, 2025

Language: English

Citations: 1

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review
Mehrdad Rahsepar Meadi, Tomas Sillekens, Suzanne Metselaar

et al.

JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e60432 - e60432

Published: Feb. 21, 2025

Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.

Objective: We aimed to provide a comprehensive overview of the ethical considerations surrounding CAI as a therapist for individuals with mental health issues.

Methods: We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, Philosopher’s Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist, and we added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on expected ethical themes and was revised and complemented during the charting process. The ethical challenges were divided into themes; when a concern occurred in more than 2 articles, we identified it as a distinct theme.

Results: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%), followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles), where the most common topics were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including the effects of “black box” algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns about health care workers’ jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the articles.

Conclusions: This scoping review has comprehensively covered the ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders’ perspectives are insufficiently represented, the study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison with human therapists, determining its appropriate roles in therapeutic contexts and its impact on access to care, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.

Language: English

Citations: 1

AI as the Therapist: Student Insights on the Challenges of Using Generative AI for School Mental Health Frameworks
Cecilia Ka Yuk Chan

Behavioral Sciences, Journal Year: 2025, Volume and Issue: 15(3), P. 287 - 287

Published: Feb. 28, 2025

The integration of generative AI (GenAI) in school-based mental health services presents new opportunities and challenges. This study focuses on the challenges of using GenAI chatbots as therapeutic tools by exploring secondary school students’ perceptions of such applications. Data were collected from students who had both theoretical and practical experience with GenAI. Based on Grodniewicz and Hohol’s framework highlighting the “Problem of a Confused Therapist” and the “Problem of a Non-human, Narrowly Intelligent Therapist”, qualitative student reflections were examined through thematic analysis. The findings revealed that while students acknowledged AI’s benefits, such as accessibility and non-judgemental feedback, they expressed significant concerns about its lack of empathy, trust, and adaptability. The implications underscore the need for chatbot use to be complemented by in-person counselling, emphasising the importance of human oversight in AI-augmented care. The study contributes to a deeper understanding of how advanced AI can be ethically and effectively incorporated into school mental health frameworks, balancing technological potential with essential human interaction.

Language: English

Citations: 1

The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems
Liana Spytska

BMC Psychology, Journal Year: 2025, Volume and Issue: 13(1)

Published: Feb. 28, 2025

The increasing demand for psychotherapy and limited access to specialists underscore the potential of artificial intelligence (AI) in mental health care. This study evaluates the effectiveness of the AI-powered Friend chatbot in providing psychological support during crisis situations, compared with traditional psychotherapy. A randomized controlled trial was conducted with 104 women diagnosed with anxiety disorders in active war zones. Participants were randomly assigned to two groups: the experimental group used the chatbot for daily support, while the control group received 60-minute psychotherapy sessions three times a week. Anxiety levels were assessed using the Hamilton Anxiety Rating Scale and the Beck Anxiety Inventory, and t-tests were used to analyze the results. Both groups showed significant reductions in anxiety levels: the group receiving traditional therapy had a 45% reduction on the Hamilton scale and 50% on the Beck scale, compared with 30% and 35%, respectively, in the chatbot group. While the chatbot provided accessible, immediate support, traditional therapy proved more effective due to the emotional depth and adaptability offered by human therapists. The chatbot was particularly beneficial in settings where access to therapists was limited, proving its value in scalability and availability; however, engagement was notably lower than with in-person therapy. The chatbot offers a scalable, cost-effective solution for crisis situations where traditional therapy may not be accessible. Although human-delivered therapy remains more effective in reducing anxiety, a hybrid model combining AI accessibility with human interaction could optimize mental health care, especially in underserved areas or during emergencies. Further research is needed to improve AI's responsiveness and adaptability.

Language: English

Citations: 1

Therapeutic Chatbots as Cognitive-Affective Artifacts
J. P. Grodniewicz, Mateusz Hohol

Topoi, Journal Year: 2024, Volume and Issue: 43(3), P. 795 - 807

Published: April 6, 2024

Abstract: Conversational Artificial Intelligence (CAI) systems (also known as AI “chatbots”) are among the most promising examples of the use of technology in mental health care. With already millions of users worldwide, CAI is likely to change the landscape of psychological help. Most researchers agree that existing CAIs are not “digital therapists” and that using them is not a substitute for psychotherapy delivered by a human. But if they are not therapists, what are they, and what role can they play in mental health care? To answer these questions, we appeal to two well-established and widely discussed concepts: cognitive and affective artifacts. Cognitive artifacts are artificial devices that contribute functionally to the performance of a cognitive task. Affective artifacts are objects that have the capacity to alter subjects’ affective state. We argue that therapeutic chatbots are a kind of cognitive-affective artifact that contributes to positive therapeutic change by (i) simulating a (quasi-)therapeutic interaction, (ii) supporting the performance of cognitive tasks, and (iii) altering the affective condition of their users. This sheds new light on why virtually all existing therapeutic chatbots implement principles and techniques of Cognitive Behavioral Therapy, a psychotherapeutic orientation according to which affective change and, ultimately, therapeutic change are cognitively mediated. Simultaneously, it allows us to better conceptualize the potential and limitations of applying these technologies in therapy.

Language: English

Citations: 8