
Published: Dec. 21, 2024
Language: English
JMIR Mental Health, Journal year: 2024, Issue 11, Pages: e58011
Published: July 24, 2024
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates the ethical considerations of using generative artificial intelligence (GenAI) in the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why GenAI represents a new phase in the democratization movement, offering unparalleled access to highly advanced technology as well as to information. In the realm of mental health, this requires delicate and nuanced deliberation. Including GenAI in mental health care may allow, among other things, improved accessibility of care, personalized responses, and conceptual flexibility, and it could facilitate the flattening of traditional hierarchies between care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool evaluates both the benefits and the risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive stance, advocating for the active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that GenAI advancements are not only technologically sound but also ethically grounded and patient-centered.
Language: English
Cited by: 12
JMIR Mental Health, Journal year: 2025, Issue 12, Pages: e70439
Published: Jan. 6, 2025
Abstract: Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires ethical consideration and deliberation, and perhaps governance. This is the first published theme issue focused on GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence-assisted care. By proposing conceptual frameworks; best practices; and regulatory approaches, including an ethics of care and the preservation of socially important humanistic elements, this theme issue underscores that GenAI can complement, rather than replace, the vital role of human clinicians in clinical settings. To achieve this, ongoing collaboration between researchers, clinicians, policy makers, and technologists is essential.
Language: English
Cited by: 2
JMIR Mental Health, Journal year: 2025, Issue 12, Pages: e60432
Published: Feb. 21, 2025
Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns. Objective: We aimed to provide a comprehensive overview of the ethical considerations surrounding CAI as a therapist for individuals with mental health issues. Methods: We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist and added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on expected concerns and was revised and complemented during the charting process. The ethical challenges were divided into themes; when a concern occurred in more than 2 articles, we identified it as a distinct theme. Results: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles), where the most common topics were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of articles. Conclusions: This scoping review has comprehensively covered the ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, the study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of guidelines for responsible CAI use.
Language: English
Cited by: 1
Applied Psychology: Health and Well-Being, Journal year: 2024, Issue: unknown
Published: Nov. 4, 2024
In recent years, artificial intelligence (AI) chatbots have made significant strides in generating human-like conversations. With AI's expanding capabilities in mimicking human interactions, its affordability and accessibility underscore the potential of AI to facilitate negative emotional disclosure or venting. The study's primary objective is to highlight the benefits of AI-assisted venting by comparing its effectiveness against venting through a traditional journaling platform in reducing negative affect and increasing perceived social support. We conducted a pre-registered within-subject experiment involving 150 participants who completed both conditions, with counterbalancing and a 1-week wash-out period between conditions. Results from frequentist and Bayesian dependent samples t-tests revealed that AI-assisted venting effectively reduced negative affect of high and medium arousal, such as anger, frustration, and fear. However, participants in the AI-assisted venting condition did not experience a significant increase in perceived social support or decrease in loneliness, suggesting that they did not perceive AI as providing effective social assistance. This study demonstrates the promising role of AI in improving individuals' well-being, serving as a catalyst for a broader discussion on the evolving psychological implications of human-AI interaction.
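The paired design described above lends itself to a dependent-samples analysis. Below is a minimal sketch of how such a comparison could be run in Python, assuming hypothetical per-participant negative-affect scores (the study's actual data are not reproduced here); the frequentist test uses scipy, and a Bayesian counterpart is noted in a comment.

```python
# Minimal sketch of a dependent-samples (paired) comparison as described
# in the abstract. The score arrays are hypothetical stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 150  # participants completing both conditions (within-subject design)

# Hypothetical post-task negative-affect scores (lower = less negative affect)
ai_venting = rng.normal(loc=2.8, scale=0.9, size=n)
journaling = rng.normal(loc=3.2, scale=0.9, size=n)

# Frequentist dependent-samples t-test
result = stats.ttest_rel(ai_venting, journaling)

# Cohen's d for paired samples: mean difference / SD of the differences
diff = ai_venting - journaling
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({n - 1}) = {result.statistic:.2f}, "
      f"p = {result.pvalue:.4f}, d = {cohens_d:.2f}")
# A Bayesian counterpart (e.g., pingouin.ttest(..., paired=True), which also
# reports a Bayes factor) would additionally quantify evidence for the null.
```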
Language: English
Cited by: 3
Frontiers in Digital Health, Journal year: 2025, Issue 7
Published: Feb. 4, 2025
Introduction: Externalization techniques are well established in psychotherapy approaches, including narrative therapy and cognitive behavioral therapy. These methods elicit internal experiences such as emotions and make them tangible through external representations. Recent advances in generative artificial intelligence (GenAI), specifically large language models (LLMs), present new possibilities for therapeutic interventions; however, their integration into core psychotherapy practices remains largely unexplored. This study aimed to examine the clinical, ethical, and theoretical implications of integrating GenAI into the therapeutic space through a proof-of-concept (POC) of AI-driven externalization techniques, while emphasizing the essential role of the human therapist. Methods: To this end, we developed two customized GPT agents: VIVI (visual externalization), which uses DALL-E 3 to create images reflecting patients' internal experiences (e.g., depression or hope), and DIVI (dialogic, role-play-based externalization), which simulates conversations with aspects of the patient's internal content. These tools were implemented and evaluated in a clinical case under professional psychological guidance. Results: The study demonstrated that GenAI can serve as an "artificial third", creating a Winnicottian playful space that enhances, rather than supplants, the dyadic therapist-patient relationship. The tools successfully externalized complex dynamics, offering new therapeutic avenues, while also revealing challenges such as empathic failures and cultural biases. Discussion: The findings highlight both the promise and the ethical complexities of AI-enhanced therapy, including concerns about data security, representation accuracy, and the balance of therapeutic authority. To address these challenges, we propose the SAFE-AI protocol, offering clinicians structured guidelines for responsible AI integration. Future research should systematically evaluate generalizability, efficacy, and ethics across diverse populations and contexts.
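As a rough illustration of the two agents' roles, the sketch below approximates a visual-externalization call and a dialogic role-play call using the OpenAI Python SDK. This is not the authors' implementation (VIVI and DIVI were built as customized GPTs inside ChatGPT); the model choices, prompts, and function names here are illustrative assumptions only.

```python
# Illustrative sketch: approximating VIVI-style visual externalization and
# DIVI-style dialogic role play with the OpenAI Python SDK. Prompts, model
# names, and function names are assumptions, not the authors' implementation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def vivi_externalize(feeling: str) -> str:
    """Visual externalization: render an internal experience as an image."""
    response = client.images.generate(
        model="dall-e-3",
        prompt=(f"A symbolic, non-literal artistic image externalizing the "
                f"feeling of {feeling}, suitable for therapeutic reflection."),
        n=1,
        size="1024x1024",
    )
    return response.data[0].url  # URL of the generated image

def divi_roleplay(internal_voice: str, patient_message: str) -> str:
    """Dialogic externalization: role-play an aspect of internal content."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": (f"Role-play the patient's {internal_voice} so it can "
                         "be examined from the outside. Stay in character; a "
                         "human therapist supervises the session.")},
            {"role": "user", "content": patient_message},
        ],
    )
    return completion.choices[0].message.content

# Hypothetical usage: externalize 'depression' visually, then in dialogue.
# print(vivi_externalize("depression"))
# print(divi_roleplay("inner critic", "Why do you say I always fail?"))
```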
Language: English
Cited by: 0
Journal of Economic Surveys, Journal year: 2025, Issue: unknown
Published: April 4, 2025
ABSTRACT: Over the past decade, demand for medical services has increased, with implications for the levels of care provided. Healthcare organizations have sought to improve their response to users' needs and questions by making use of chatbots that leverage artificial intelligence (AI), while paying little attention to building an empathic relationship in which the chatbot's responses emotionally match the questions asked (prompts). This article provides a systematic review of the marketing literature on prompts in healthcare and on chatbot responsiveness in relation to emotional aspects. In accordance with the guidelines recommended by the PRISMA framework, a five-step review was conducted, starting with a focus group to identify some key terms. Based on scientific articles published in the last five years, limitations were identified and a series of propositions theorized. The study identifies benefits and future developments of conversation support strategies for more effective and empathetic healthcare.
Language: English
Cited by: 0
Information Processing & Management, Journal year: 2025, Issue 62(5), Pages: 104152
Published: April 6, 2025
Language: English
Cited by: 0
JMIR Medical Informatics, Journal year: 2025, Issue 13, Pages: e65127
Published: May 30, 2025
Abstract: Background: Cardiovascular and cerebrovascular diseases significantly contribute to global mortality and disability. The shift to outpatient postoperative care, accelerated by the COVID-19 pandemic, emphasizes the need for effective management of postoperative outcomes. The high rates of cardiovascular disease in Korea necessitate focused transitional care during patient discharge periods. However, limited research exists on the experiences of discharged patients, underscoring the necessity of establishing evidence-based services to optimize care. Objective: The objective of this paper was to analyze the emotional experiences of patients who underwent cardiovascular or cerebrovascular surgeries, using data from Naver, a major South Korean web portal. Methods: Posts were collected using specific keywords and processed with a Korean Bidirectional Encoder Representations from Transformers (KoBERT) model, based on the Transformer architecture, which classified sentiments into positive, neutral, and negative categories. Model performance was validated according to precision, recall, F1-score, and support. Sentiment analysis was conducted within the Transitional Care Model (TCM) framework, divided into 5 domains: health status, resources, demand, interaction, and mental state. Results: The KoBERT model demonstrated strong classification performance, achieving a precision of 96%, recall of 94%, and an F1-score of 94%. The analysis revealed differences between the surgery groups: patients who underwent one type of surgery experienced more negative emotions regarding their condition, whereas patients in the other group expressed more care demands. Conclusions: Different patient groups experience distinct practical challenges postdischarge. In particular, the results within the TCM framework highlight that patients require robust rehabilitation and caregiver support, as well as better cost management. These findings underscore the importance of personalized transitional care strategies tailored to cardiovascular and cerebrovascular diseases. The insights derived from this study can guide policymakers in designing targeted, patient-centered interventions to improve postdischarge outcomes and ensure continuous care.
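The validation metrics reported above (precision, recall, F1-score, and support) correspond to a standard multiclass classification report. A minimal sketch of that step, assuming hypothetical gold labels and model predictions rather than the study's data:

```python
# Sketch of the reported validation step: precision, recall, F1-score, and
# support for a 3-class sentiment classifier. Labels here are hypothetical;
# in the study they would come from a fine-tuned KoBERT model's predictions
# evaluated against human-annotated posts.
from sklearn.metrics import classification_report

labels = ["negative", "neutral", "positive"]

# Hypothetical gold annotations and model predictions for a handful of posts
y_true = ["negative", "negative", "neutral", "positive", "positive", "neutral"]
y_pred = ["negative", "negative", "neutral", "positive", "negative", "neutral"]

# classification_report yields per-class precision, recall, F1-score, and
# support (number of true instances per class), plus macro/weighted averages.
print(classification_report(y_true, y_pred, labels=labels, digits=2))
```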
Language: English
Cited by: 0
Published: Nov. 11, 2024
The study explored the potential of large language models (LLMs) (advanced AI trained to understand and generate human-like language), like ChatGPT, to generate personalized narratives about perinatal mental health concerns and wellbeing. With perinatal mental health concerns affecting many women and technology playing a role in addressing their challenges, the study investigated LLMs' ability to craft relatable stories considering emotional nuances, cultural sensitivity, and narrative variation. Of the 45 narratives generated, 85% (n=38) adhered to the prompts. Qualitative analysis identified challenges with relatability, such as repetitive content and diminished empathy compared with user-generated stories. However, despite these limitations, the AI-generated narratives showcased potential for raising awareness of perinatal mental health.
Language: English
Cited by: 1
Published: Dec. 21, 2024
Language: English
Cited by: 0