Is the Algorithm Good in a Bad World, or Has It Learned to be Bad? The Ethical Challenges of “Locked” Versus “Continuously Learning” and “Autonomous” Versus “Assistive” AI Tools in Healthcare
Alaa Youssef, Michael D. Abràmoff, Danton Char, et al.

The American Journal of Bioethics, 2023, 23(5), pp. 43–45

Published: May 2, 2023

This article refers to: Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?

Funding: Funding for this commentary was provided by a Stanford Human-Centered Seed Grant.

Language: English

Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation (Creative Commons)
Elizabeth C. Stade, Shannon Wiltsey Stirman, Lyle Ungar, et al.

npj Mental Health Research, 2024, 3(1)

Published: April 2, 2024

Large language models (LLMs) such as OpenAI’s GPT-4 (which powers ChatGPT) and Google’s Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address insufficient mental healthcare system capacity and to scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed, highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, along with areas of risk given the complex nature of psychotherapy. Fourth, recommendations for responsible development and evaluation are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.

Language: English

Citations: 66

Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots (Creative Commons)
Zoha Khawaja, Jean‐Christophe Bélisle‐Pipon

Frontiers in Digital Health, 2023, 5

Published: Nov. 8, 2023

Artificial intelligence (AI)-powered chatbots have the potential to substantially increase access to affordable and effective mental health services by supplementing the work of clinicians. Their 24/7 availability and accessibility through a mobile phone allow individuals to obtain help whenever and wherever needed, overcoming financial and logistical barriers. Although psychological AI chatbots have the ability to make significant improvements in providing mental health care services, they do not come without ethical and technical challenges. Some major concerns include providing inadequate or harmful support, exploiting vulnerable populations, and potentially producing discriminatory advice due to algorithmic bias. Moreover, it is not always obvious to users what the exact nature of their relationship with a chatbot is. There can be misunderstandings about the chatbot’s exact purpose, particularly in terms of care expectations, its ability to adapt to users’ particularities, and its responsiveness to their needs and to the resources/treatments that can be offered. Hence, it is imperative that users are aware of the limited therapeutic relationship they can enjoy when interacting with mental health chatbots. Ignorance or misunderstanding of such limitations or of the role of these chatbots may lead to a therapeutic misconception (TM), where the user underestimates the restrictions of such technologies and overestimates their ability to provide actual therapeutic support and guidance. TM raises major ethical concerns that can exacerbate one’s mental health problems, contributing to the global mental health crisis. This paper explores the various ways in which TM can occur, particularly through inaccurate marketing of these chatbots, forming a digital therapeutic alliance with them, receiving harmful advice due to bias in the design and algorithm, and the chatbots’ inability to foster autonomy in patients.

Language: English

Citations: 50

The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy (Creative Commons)
Yuval Haber, Inbar Levkovich, Dorit Hadar‐Shoval, et al.

JMIR Mental Health, 2024, 11, e54781

Published: April 18, 2024

This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence’s new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the “fourth narcissistic blow” that science inflicts on humanity. We argue that this blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect changes in the therapeutic act following the emergence of what we term the artificial third in psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new presence in therapy relationships? (2) How does it reshape our perception of ourselves and of interpersonal dynamics? (3) What remains irreplaceable at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.

Language: English

Citations: 24

Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care
Meghan E. Hurley, Benjamin Lang, Kristin M. Kostick, et al.

The American Journal of Bioethics, 2024, volume and issue unknown, pp. 1–13

Published: Sept. 17, 2024

Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights, which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal the right to notice and explanation serves, and what the moral importance of patient-level disclosure is. We propose three normative functions of this right: (1) to notify patients about their care, (2) to educate patients and promote trust, and (3) to meet standards for informed consent. Additional clarity is needed to guide practices that respect this right while providing meaningful benefits to patients.

Language: English

Citations: 23

The usefulness of ChatGPT for psychotherapists and patients (Creative Commons)
Paolo Raile

Humanities and Social Sciences Communications, 2024, 11(1)

Published: Jan. 4, 2024

ChatGPT is a chatbot based on a large language model. Its application possibilities are extensive, and it is freely accessible to all people, including psychotherapists and individuals with mental illnesses. Some blog posts about the possible use of ChatGPT as a psychotherapist or as a supplement to psychotherapy already exist. Based on three detailed chats, the author analyzed the chatbot’s responses to psychotherapists seeking assistance, to patients looking for support between sessions or during their psychotherapists’ vacations, and to people suffering from mental illnesses who are not yet in psychotherapy. The results suggest that ChatGPT offers an interesting complement to psychotherapy and an easily accessible, good (and currently free) place to go for people with mental-health problems who have not yet sought professional help and have no psychotherapeutic experience. The information provided is, however, one-sided, which any future regulation of AI must make clear: ChatGPT’s proposals are only an insufficient substitute for psychotherapy, and its bias favors certain methods while not even mentioning other approaches that may be more helpful for some people.

Language: English

Citations: 20

The Double-Edged Sword of Anthropomorphism in LLMs (Creative Commons)
Madeline G. Reinecke, Fransisca Ting, Julian Savulescu, et al.

Published: Feb. 26, 2025

Humans may have evolved to be “hyperactive agency detectors”. Upon hearing a rustle in a pile of leaves, it would be safer to assume that an agent, like a lion, hides beneath (even if there is ultimately nothing there). Can this evolutionary cognitive mechanism, and related mechanisms of anthropomorphism, explain some people’s contemporary experience with using chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of large language-based chatbots. We then explore the implications of this tendency within the educational context. Specifically, we argue that the tendency to perceive a “mind in the machine” is a double-edged sword for educational progress: though it can facilitate motivation and learning, it can also lead students to trust, and potentially over-trust, content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate, avoiding the pitfalls caused by perceiving humanlike mental states in chatbots.

Language: English

Citations: 2

Evidence, ethics and the promise of artificial intelligence in psychiatry (Creative Commons)
Melissa D. McCradden, Katrina Hui, Daniel Z. Buchman, et al.

Journal of Medical Ethics, 2022, 49(8), pp. 573–579

Published: Dec. 29, 2022

Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.

Language: English

Citations: 48

Using Artificial Intelligence to Enhance Ongoing Psychological Interventions for Emotional Problems in Real- or Close to Real-Time: A Systematic Review (Open Access)
Patricia Gual-Montolio, Irene Jaén, Verónica Martínez‐Borba, et al.

International Journal of Environmental Research and Public Health, 2022, 19(13), article 7737

Published: June 24, 2022

Emotional disorders are the most common mental disorders globally. Psychological treatments have been found to be useful for a significant number of cases, but up to 40% of patients do not respond to psychotherapy as expected. Artificial intelligence (AI) methods might enhance psychotherapy by providing therapists and patients with real- or close to real-time recommendations according to the patient’s response to treatment. The goal of this investigation is to systematically review the evidence on the use of AI-based methods to enhance outcomes in psychological interventions in real-time or close to real-time. The search included studies indexed in the electronic databases Scopus, Pubmed, Web of Science, and Cochrane Library. The terms used included variations of the words “psychotherapy”, “artificial intelligence”, and “emotional disorders”. From the 85 full texts assessed, only 10 studies met our eligibility criteria. In these studies, the most frequently used AI technique was conversational agents, which are chatbots based on software that can be accessed online with a computer or a smartphone. Overall, the reviewed investigations indicated positive consequences of using AI to enhance psychotherapy and reduce clinical symptomatology. Additionally, most studies reported high satisfaction, engagement, and retention rates when implementing AI in real- or close to real-time. Despite the potential of AI to make interventions more flexible and tailored to patients’ needs, more methodologically robust studies are needed.

Language: English

Citations: 39

Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence (Creative Commons)
J. P. Grodniewicz, Mateusz Hohol

Frontiers in Psychiatry, 2023, 14

Published: June 1, 2023

Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, triggers discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on our way to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. Firstly, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, assuming that psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Thirdly, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient in dealing with only relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to deliver fully-fledged psychotherapy until so-called “general” or “human-like” AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensure well-balanced and steady progress on the path to AI-based psychotherapy.

Language: English

Citations: 37

Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz’s Theory of Basic Values (Creative Commons)
Dorit Hadar‐Shoval, Kfir Asraf, Yonathan Mizrachi, et al.

JMIR Mental Health, 2024, 11, e55988

Published: March 8, 2024

Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making therefore has ethical importance. Schwartz’s theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics.

Language: English

Citations: 15