Published: July 10, 2024
Language: English
JMIR Mental Health, Journal year: 2024, Issue: 11, pp. e54781 - e54781
Published: April 18, 2024
This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as a "fourth narcissistic blow" that science inflicts on humanity. We argue that this blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect changes in the therapeutic act and the emergence of what we term the third psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new presence in therapy relationships? (2) How does it reshape our perception of ourselves and of interpersonal dynamics? (3) What remains irreplaceable at the core of therapy? Given the ethical implications that arise from these questions, the paper proposes that generative artificial intelligence can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.
Language: English
Cited: 27
Discover Internet of Things, Journal year: 2025, Issue: 5(1)
Published: Jan. 13, 2025
Language: English
Cited: 4
JMIR Mental Health, Journal year: 2024, Issue: 11, pp. e57400 - e57400
Published: Sep. 3, 2024
Background: Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate. Objective: This systematic review aims to critically assess the use of LLMs in mental health, specifically focusing on their applicability and efficacy in early screening, digital interventions, and clinical settings. By systematically collating and assessing the evidence from current studies, our work analyzes models, methodologies, data sources, and outcomes, thereby highlighting the challenges present and the prospects for their use. Methods: Adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, this review searched 5 open-access databases: MEDLINE (accessed by PubMed), IEEE Xplore, Scopus, JMIR, and ACM Digital Library. Keywords used were (mental health OR mental illness OR mental disorder OR psychiatry) AND (large language models). The study included articles published between January 1, 2017, and April 30, 2024, and excluded articles in languages other than English. Results: In total, 40 articles were evaluated, including 15 (38%) on detecting mental health conditions and suicidal ideation through text analysis, 7 (18%) on LLMs as mental health conversational agents, and 18 (45%) on other applications and evaluations of LLMs in mental health. LLMs show good effectiveness in detecting mental health issues and providing accessible, destigmatized eHealth services. However, assessments also indicate that the current risks associated with their use might surpass the benefits. These risks include inconsistencies in generated text; the production of hallucinations; and the absence of a comprehensive, benchmarked ethical framework. Conclusions: This systematic review examines the clinical applications of LLMs in mental health together with their inherent risks. The study identifies several issues: the lack of multilingual datasets annotated by experts, concerns regarding the accuracy and reliability of generated content, limited interpretability due to the "black box" nature of LLMs, and ongoing ethical dilemmas. These include the absence of a clear, benchmarked ethical framework; data privacy issues; and the potential for overreliance on LLMs by both physicians and patients, which could compromise traditional medical practices. As a result, LLMs should not be considered substitutes for professional mental health services. Nevertheless, their rapid development underscores their potential as valuable clinical aids, emphasizing the need for continued research and development in this area. Trial Registration: PROSPERO CRD42024508617; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=508617
Language: English
Cited: 18
JMIR Mental Health, Journal year: 2024, Issue: 11, pp. e58493 - e58493
Published: July 20, 2024
This article contends that the responsible artificial intelligence (AI) approach, which is the dominant ethics approach ruling most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on principles reinforces a narrow concept of accountability and responsibility for companies developing AI. The article proposes that applying an ethics of care approach to AI regulation can offer a more comprehensive framework, one that addresses AI's dual impact on individuals and on human relationships, which is essential for effective regulation in the domain of mental health care. The article delves into the emergence of a new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. It highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing an ethics of care approach can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.
Language: English
Cited: 11
Frontiers in Digital Health, Journal year: 2025, Issue: 7
Published: Feb. 25, 2025
Pediatric and adolescent/young adult (AYA) cancer patients face profound psychological challenges, exacerbated by limited access to continuous mental health support. While conventional therapeutic interventions often follow structured protocols, the potential of generative artificial intelligence (AI) chatbots to provide conversational support remains unexplored. This study evaluates the feasibility and impact of AI chatbots in alleviating psychological distress and enhancing treatment engagement in this vulnerable population. Two age-appropriate chatbots, leveraging GPT-4, were developed to provide natural, empathetic conversations without structured protocols. Five pediatric and AYA patients participated in a two-week intervention, engaging with the chatbots via a messaging platform. Pre- and post-intervention anxiety and stress levels were self-reported, and usage patterns were analyzed to assess the chatbots' effectiveness. Four out of five participants reported significant reductions in anxiety and stress post-intervention. Participants engaged with the chatbot every 2-3 days, with sessions lasting approximately 10 min. All participants noted improved treatment motivation, with 80% disclosing personal concerns they had not shared with healthcare providers. The 24/7 availability particularly benefited participants experiencing nighttime anxiety. This pilot study demonstrates that generative AI chatbots can complement traditional mental health services by addressing unmet needs of pediatric and AYA cancer patients. The findings suggest these tools can serve as accessible support systems. Further large-scale studies are warranted to validate these promising results.
Language: English
Cited: 1
JMIR Neurotechnology, Journal year: 2024, Issue: unknown
Published: July 10, 2024
Language: English
Cited: 2
medRxiv (Cold Spring Harbor Laboratory), Journal year: 2024, Issue: unknown
Published: July 17, 2024
Background: Suicide risk assessment is a critical skill for mental health professionals (MHPs), yet traditional training in this area is often limited. This study examined the potential of a generative artificial intelligence (GenAI)-based simulator to enhance self-efficacy in suicide risk assessment among MHPs. Method: A quasi-experimental study was conducted with 43 MHPs from Israel. Participants attended an online seminar and interacted with a GenAI-powered suicide risk assessment simulator. They completed pre- and post-intervention questionnaires measuring self-efficacy and willingness to treat suicidal patients. Qualitative data on user experience were also collected. Results: We found a significant increase in self-efficacy scores following the intervention. Willingness to treat patients presenting with suicide risk increased slightly but did not reach statistical significance. Qualitative feedback indicated that participants found the simulator engaging and valuable for professional development. However, they also raised concerns about over-reliance on AI and the need for human supervision during training. Conclusion: This preliminary study suggests that GenAI-based simulators hold promise as a tool for developing MHPs' competence in suicide risk assessment. However, further research with larger samples and control groups is needed to confirm these findings and to address the ethical considerations surrounding the use of AI-powered simulation tools. Such tools have the potential to democratize access to high-quality training in mental health, potentially contributing to global suicide prevention efforts. However, their implementation should be carefully considered to ensure they complement rather than replace human expertise.
Language: English
Cited: 1
Sustainability, Journal year: 2024, Issue: 16(19), pp. 8371 - 8371
Published: Sep. 26, 2024
Environmental entrepreneurship has a vital role in addressing our planet's critical environmental state by implementing innovative solutions to combat escalating threats. These ventures, however, face numerous challenges, including securing initial funding, navigating technical difficulties, and gaining market acceptance, which are magnified by the pioneering nature of green innovations. Social capital is a key facilitator, enabling entrepreneurs to overcome these obstacles through smart network management, trust, and strategic partnerships. This study investigates the role of social capital in mitigating the challenges faced by environmental entrepreneurs. We conducted semi-structured interviews with environmental entrepreneurs. Our findings reveal how social capital not only assists in navigating the complexities ingrained in environmental entrepreneurship but is also an inherent part of venture creation. These insights emphasize the importance of social capital in advancing green innovation. Theoretical and practical implications are discussed.
Language: English
Cited: 0
Published: Nov. 11, 2024
Language: English
Cited: 0
Published: Dec. 21, 2024
Language: English
Cited: 0