
Published: Dec. 21, 2024
Language: English
JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e58011 - e58011
Published: July 24, 2024
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates the ethical considerations of using generative artificial intelligence (GenAI) in the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why GenAI represents a new phase of the democratization movement, offering unparalleled access to highly advanced technology as well as to information. In the realm of mental health, this requires delicate and nuanced deliberation. Including GenAI in mental health care may allow, among other things, improved accessibility of care, personalized responses, and conceptual flexibility, and it could facilitate a flattening of the traditional hierarchies between care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both benefits and risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive approach, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.
Language: English
Citations: 10
JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e70439 - e70439
Published: Jan. 6, 2025
Abstract: Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires ethical consideration, deliberation, and perhaps governance. This is the first published theme issue focused on GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence-assisted care. By proposing conceptual frameworks, best practices, and regulatory approaches, including an ethics of care and the preservation of socially important humanistic elements, this theme issue underscores that GenAI can complement, rather than replace, the vital role of human professionals in clinical settings. To achieve this, ongoing collaboration between researchers, clinicians, policy makers, and technologists is essential.
Language: English
Citations: 1
JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e60432 - e60432
Published: Feb. 21, 2025
Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns. Objective: We aimed to provide a comprehensive overview of the ethical considerations surrounding CAI as a therapist for individuals with mental health issues. Methods: We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist and added additional articles through snowball searching. Articles in English or Dutch were included. All article types were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM, TS, and AvB). An initial charting form was created based on expected concerns and was revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in multiple articles, we identified it as a distinct theme. Results: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%), followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles), where the most common topics were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns about care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the articles. Conclusions: This scoping review has comprehensively covered the ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, the study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison with human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for the responsible use of CAI in mental health care.
Language: English
Citations: 1
Frontiers in Digital Health, Journal Year: 2025, Volume and Issue: 7
Published: Feb. 4, 2025
Introduction: Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core psychotherapy practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist. Methods: To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' inner experiences (e.g., depression or hope), and DIVI (dialogic role-play-based externalization), which simulates conversations with aspects of the patient's internal content. These tools were implemented and evaluated in a clinical case under professional psychological guidance. Results: The tools demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex dynamics, offering new therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases. Discussion: The findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of therapeutic authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration in therapy. Future research should systematically evaluate the generalizability, efficacy, and ethical implications of these tools across diverse populations and therapeutic contexts.
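To make the described setup concrete, here is a minimal sketch of what a VIVI-style visual externalization call could look like. It assumes the OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY environment variable set; the function name, prompt wording, and parameters are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of a VIVI-style visual externalization call (assumed
# prompt and function name; not the authors' actual system).
from openai import OpenAI

client = OpenAI()

def externalize_emotion(emotion_description: str) -> str:
    """Ask an image model to render a described inner state as imagery."""
    prompt = (
        "Create a metaphorical, non-literal image that externalizes "
        f"the following inner experience: {emotion_description}. "
        "Avoid depicting real people; use abstract, symbolic imagery."
    )
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    return response.data[0].url  # URL of the generated image

if __name__ == "__main__":
    print(externalize_emotion("a heavy fog of depression with a faint light of hope"))

In a clinical workflow like the one described, the returned image would be reviewed jointly by therapist and patient rather than delivered to the patient unsupervised.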
Language: English
Citations: 0
Applied Psychology Health and Well-Being, Journal Year: 2024, Volume and Issue: unknown
Published: Nov. 4, 2024
In recent years, artificial intelligence (AI) chatbots have made significant strides in generating human-like conversations. With AI's expanding capabilities in mimicking human interactions, its affordability and accessibility underscore the potential of AI to facilitate negative emotional disclosure, or venting. The study's primary objective was to highlight the benefits of AI-assisted venting by comparing its effectiveness against venting through a traditional journaling platform in reducing negative affect and increasing perceived social support. We conducted a pre-registered within-subject experiment involving 150 participants who completed both conditions, with counterbalancing and a wash-out period of 1 week between conditions. Results from frequentist and Bayesian dependent samples t-tests revealed that AI-assisted venting effectively reduced negative emotions of high and medium arousal, such as anger, frustration, and fear. However, participants in the AI-assisted venting condition did not experience an increase in perceived social support or a reduction in loneliness, suggesting that they did not perceive AI as an effective source of social assistance. This study demonstrates the promising role of AI in improving individuals' well-being, serving as a catalyst for a broader discussion on the evolving psychological implications of AI.
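For readers unfamiliar with the dependent samples t-test mentioned above, the following is a minimal sketch of the frequentist side of such an analysis in Python (a Bayesian analogue would add a Bayes factor, for example via the pingouin package). The data arrays are simulated placeholders, not the study's data, and the means and scales are arbitrary assumptions.

# Paired (dependent-samples) t-test sketch with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150  # participants completed both conditions (within-subject design)

# Simulated post-task negative-affect scores, one array per condition.
affect_journaling = rng.normal(loc=3.2, scale=0.8, size=n)
affect_ai_venting = rng.normal(loc=2.9, scale=0.8, size=n)

# Same participants in both conditions, so the samples are paired.
t_stat, p_value = stats.ttest_rel(affect_journaling, affect_ai_venting)

# Cohen's d for paired data: mean difference / SD of the differences.
diff = affect_journaling - affect_ai_venting
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")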
Language: English
Citations: 3
Journal of Economic Surveys, Journal Year: 2025, Volume and Issue: unknown
Published: April 4, 2025
ABSTRACT: Over the past decade, demand for medical services has increased, with implications for the levels of care provided. Healthcare organizations have sought to improve their response to users' needs and questions by making use of chatbots that leverage artificial intelligence (AI), while paying little attention to building an empathic relationship that can emotionally match the chatbot's responses to the questions asked (prompts). This article provides a systematic review of the marketing literature on prompts in healthcare chatbots and their responsiveness in relation to emotional aspects. In accordance with the guidelines recommended by the PRISMA framework, a five-step review was conducted, starting with a focus group to identify some key terms. Based on the scientific articles published in the last five years, limitations were identified and a series of propositions theorized. The study identifies benefits and directions for the future development of conversation support strategies for more effective and empathetic healthcare.
Language: English
Citations: 0
Information Processing & Management, Journal Year: 2025, Volume and Issue: 62(5), P. 104152 - 104152
Published: April 6, 2025
Language: English
Citations: 0
Published: Nov. 11, 2024
The study explored the potential of large language models (LLMs) (advanced AI trained to understand and generate human-like language), such as ChatGPT, to create personalized narratives about perinatal mental health concerns and wellbeing. With perinatal mental health concerns affecting many women and technology playing a growing role in addressing their challenges, the study investigated LLMs' ability to craft relatable stories while considering emotional nuances, cultural sensitivity, and narrative variation. Of the 45 narratives generated, 85% (n=38) adhered to the prompts. Qualitative analysis identified challenges with relatability, such as repetitive content and diminished empathy compared with user-generated stories. However, despite these limitations, the AI-generated narratives showcased potential for raising awareness of perinatal mental health.
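As an illustration of the kind of narrative-generation prompt the study describes, the following is a minimal sketch using a chat-completion API. The model name, system prompt, and parameters are illustrative assumptions, not the authors' protocol; it assumes the OpenAI Python SDK with OPENAI_API_KEY set.

# Sketch of generating one perinatal mental health narrative with an LLM.
from openai import OpenAI

client = OpenAI()

def generate_narrative(concern: str, cultural_context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model, for illustration only
        messages=[
            {"role": "system",
             "content": "You write short, first-person, emotionally "
                        "nuanced stories about perinatal mental health, "
                        "sensitive to the stated cultural context."},
            {"role": "user",
             "content": f"Concern: {concern}. Cultural context: "
                        f"{cultural_context}. Write a relatable 200-word "
                        "story; vary the voice and avoid clinical jargon."},
        ],
        temperature=0.9,  # higher temperature encourages narrative variation
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_narrative("postpartum anxiety",
                             "first-time mother in a rural community"))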
Language: English
Citations: 1
Published: Dec. 21, 2024
Language: English
Citations: 0