
Published: Dec. 21, 2024
Language: English
BMC Psychiatry, Journal Year: 2025, Volume and Issue: 25(1)
Published: Feb. 14, 2025
The integration of artificial intelligence in mental health care represents a transformative shift in the identification, treatment, and management of mental health disorders. This systematic review explores diverse applications of artificial intelligence, emphasizing both its benefits and its associated challenges. A comprehensive literature search was conducted across multiple databases in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, including ProQuest, PubMed, Scopus, and Persian databases, resulting in 2,638 initial records. After removing duplicates and applying strict selection criteria, 15 articles were included in the analysis. The findings indicate that AI enhances early detection of and intervention for mental health conditions. Various studies highlighted the effectiveness of AI-driven tools, such as chatbots and predictive modeling, in improving patient engagement and tailoring interventions. Notably, tools like the Wysa app demonstrated significant improvements in user-reported symptoms. However, ethical considerations regarding data privacy and algorithm transparency emerged as critical concerns. While the reviewed studies showed a generally positive trend for AI applications, some methodologies exhibited moderate quality, suggesting room for improvement. Involving stakeholders in the creation of these technologies is essential for building trust and tackling ethical issues. Future research should aim to enhance methods and investigate their applicability across various populations. This review underscores AI's potential to revolutionize mental health care through enhanced accessibility and personalized interventions, with careful consideration of ethical implications and methodological rigor needed to ensure responsible deployment in this sensitive field.
Language: English
Citations: 2
JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e58011 - e58011
Published: July 24, 2024
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates the ethical considerations of using generative artificial intelligence (GenAI) in the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why these technologies represent a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as to information. In the realm of mental health, this requires delicate and nuanced deliberation. Including GenAI may allow, among other things, improved accessibility to care, personalized responses, and conceptual flexibility, and could facilitate a flattening of traditional hierarchies between health care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool weighs both benefits and risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive stance, advocating the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that these advancements are not only technologically sound but also ethically grounded and patient-centered.
Language: English
Citations: 10
JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e70439 - e70439
Published: Jan. 6, 2025
Abstract Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires ethical consideration, deliberation, and perhaps governance. This is the first published theme issue focused on GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence-assisted care. By proposing conceptual frameworks, best practices, and regulatory approaches, including an ethics of care and the preservation of socially important humanistic elements, this theme issue underscores that GenAI can complement, rather than replace, the vital human role in clinical settings. To achieve this, ongoing collaboration between researchers, clinicians, policy makers, and technologists is essential.
Language: English
Citations: 1
JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e69294 - e69294
Published: Jan. 17, 2025
Abstract Background: Mental health disorders significantly impact global populations, prompting the rise of digital mental health interventions, such as artificial intelligence (AI)-powered chatbots, to address gaps in access to care. This review explores the potential for a "digital therapeutic alliance (DTA)," emphasizing empathy, engagement, and alignment with traditional therapeutic principles to enhance user outcomes. Objective: The primary objective of this review was to identify key concepts underlying the DTA in AI-driven psychotherapeutic interventions for mental health. The secondary objective was to propose an initial definition of the DTA based on these identified concepts. Methods: The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) extension for scoping reviews and Tavares de Souza's integrative review methodology were followed, encompassing systematic literature searches in Medline, Web of Science, PsycNet, and Google Scholar. Data from eligible studies were extracted and analyzed using Horvath et al's conceptual framework of the therapeutic alliance, focusing on goal alignment, task agreement, and bond, with quality assessed via the Newcastle-Ottawa Scale and the Cochrane Risk of Bias Tool. Results: A total of 28 studies were selected from a pool of 1294 articles after excluding duplicates and ineligible studies. These informed the development of a definition of the DTA, including elements, facilitators, and barriers affecting it; the studies primarily focused on AI-powered psychotherapy and other digital tools. Conclusions: The findings provide a foundational concept of the DTA and report on its potential to replicate mechanisms of trust and collaboration. While the DTA shows promise for enhancing accessibility and engagement in mental health care, further research and innovation are needed to address challenges of personalization, ethical concerns, and long-term impact.
Language: English
Citations: 1
Published: April 24, 2025
Social anxiety (SA) has become increasingly prevalent. Traditional coping strategies often face accessibility challenges. Generative AI (GenAI) chatbots, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. With the increased integration of GenAI, it is important to examine individuals' attitudes toward and trust in GenAI chatbots' support for SA. Through a mixed-method approach that involved surveys (n = 159) and interviews (n = 17), we found that individuals with severe SA symptoms tended to embrace chatbots more readily, valuing their non-judgmental support and perceived emotional comprehension. However, those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as the chatbot's ability to generate empathetic responses and its context-sensitive limitations, which were particularly salient among individuals with severe symptoms. We also discuss design implications for GenAI use in fostering cognitive trust and practical considerations.
Language: English
Citations: 0
Frontiers in Digital Health, Journal Year: 2025, Volume and Issue: 7
Published: Feb. 4, 2025
Introduction: Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core psychotherapy practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist. Methods: To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' internal experiences (e.g., depression or hope), and DIVI (dialogic role-play-based externalization), which simulates conversations with aspects of patients' internal content. These tools were implemented and evaluated in a clinical case under professional psychological guidance. Results: The POC demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex dynamics, offering new therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases. Discussion: The findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of therapeutic authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration. Future research should systematically evaluate generalizability and efficacy across diverse populations and contexts.
Language: English
Citations: 0
Published: April 3, 2025
Mental healthcare in a range of countries faces challenges, including rapidly increasing demand at a time of restricted access to services, insufficient mental health professionals, and limited funding. This can result in long delays and late diagnosis. The use of artificial intelligence (AI) technology to help address these shortcomings is therefore being explored in many countries, including the UK. The recent increase in reported studies provides an opportunity to review the potential, benefits, and drawbacks of this technology. Studies have included AI-based chatbots for patients with depression and anxiety symptoms; AI-facilitated approaches, including virtual reality applications for mental disorders; avatar therapy for psychosis; AI humanoid robot-enhanced therapy, both for children and for isolated elderly people in care settings; animal-like robots for dementia; and digital game interventions for young people with mental health conditions. Overall, the studies showed positive effects, and none reported any adverse side effects. However, the quality of the data was low, mainly due to a lack of studies, high risk of bias, and heterogeneity across studies. Importantly also, longer-term effects were often not evident. This suggests that translating small-scale, short-term trials into effective large-scale, real-world applications may be a particular challenge. While AI appears to have potential, its use also raises important ethical and privacy concerns, potential bias, and unintended consequences such as over-diagnosis or unnecessary treatment of normal emotional experiences. More robust research with larger patient populations, together with clear regulatory frameworks and guidelines to ensure patients' rights and well-being are protected, is needed.
Language: English
Citations: 0
SSM - Mental Health, Journal Year: 2024, Volume and Issue: 6, P. 100373 - 100373
Published: Nov. 16, 2024
Language: English
Citations: 1
Published: Dec. 21, 2024
Language: English
Citations: 0