
Published: Dec. 21, 2024
Language: English
JMIR Mental Health, Journal Year: 2023, Volume and Issue: 11, P. e54369 - e54369
Published: Dec. 25, 2023
Mentalization, which is integral to human cognitive processes, pertains to the interpretation of one's own and others' mental states, including emotions, beliefs, and intentions. With the advent of artificial intelligence (AI) and the prominence of large language models in mental health applications, questions persist about their aptitude for emotional comprehension. The prior iteration of the model from OpenAI, ChatGPT-3.5, demonstrated an advanced capacity to interpret emotions in textual data, surpassing established benchmarks. Given the introduction of ChatGPT-4, with its enhanced visual processing capabilities, and considering Google Bard's existing functionalities, a rigorous assessment of their proficiency in mentalizing is warranted.
Language: English
Citations: 59
BMC Psychology, Journal Year: 2025, Volume and Issue: 13(1)
Published: Feb. 28, 2025
The increasing demand for psychotherapy and limited access to specialists underscore the potential of artificial intelligence (AI) in mental health care. This study evaluates the effectiveness of the AI-powered Friend chatbot in providing psychological support during crisis situations, compared with traditional psychotherapy. A randomized controlled trial was conducted with 104 women diagnosed with anxiety disorders in active war zones. Participants were randomly assigned to two groups: the experimental group used the chatbot for daily support, while the control group received 60-minute psychotherapy sessions three times a week. Anxiety levels were assessed using the Hamilton Anxiety Rating Scale and the Beck Anxiety Inventory, and t-tests were used to analyze the results. Both groups showed significant reductions in anxiety levels. The group receiving traditional therapy had a 45% reduction on the Hamilton scale and 50% on the Beck scale, compared with 30% and 35% in the chatbot group. While the chatbot provided accessible, immediate support, traditional psychotherapy proved more effective due to the emotional depth and adaptability of human therapists. The chatbot was particularly beneficial in settings where access to therapists was limited, proving its value in scalability and availability; however, engagement was notably lower than with in-person therapy. The chatbot offers a scalable, cost-effective solution in crisis situations where traditional therapy may not be accessible. Although traditional psychotherapy remains more effective in reducing anxiety, a hybrid model combining AI and human interaction could optimize mental health care, especially in underserved areas or emergencies. Further research is needed to improve AI's responsiveness and adaptability.
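To make the reported comparison concrete, here is a minimal sketch, with entirely invented numbers, of how an independent-samples t-test like the one the trial describes could compare anxiety-score reductions between the two groups; group sizes, means, and spreads below are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch of the trial's t-test comparison (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented per-participant percentage reductions on the Hamilton scale,
# centered on the reduction levels the abstract reports (45% vs 30%).
therapy_reduction = rng.normal(loc=45, scale=10, size=52)
chatbot_reduction = rng.normal(loc=30, scale=10, size=52)

# Independent-samples t-test on the two groups' reductions.
t_stat, p_value = stats.ttest_ind(therapy_reduction, chatbot_reduction)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```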
Language: English
Citations: 7
JMIR Mental Health, Journal Year: 2024, Volume and Issue: 11, P. e58011 - e58011
Published: July 24, 2024
Knowledge has become more open and accessible to a large audience with the "democratization of information" facilitated by technology. This paper provides a sociohistorical perspective for the theme issue "Responsible Design, Integration, and Use of Generative AI in Mental Health." It evaluates the ethical considerations of using generative artificial intelligence (GenAI) in the democratization of mental health knowledge and practice. It explores the historical context of democratizing information, transitioning from restricted access to widespread availability due to the internet, open-source movements, and, most recently, GenAI technologies such as large language models. The paper highlights why these technologies represent a new phase of the democratization movement, offering unparalleled access to highly advanced technology as well as information. In the realm of mental health, this requires delicate and nuanced deliberation. Including GenAI may allow, among other things, improved accessibility of care, personalized responses, and conceptual flexibility, and could facilitate the flattening of traditional hierarchies between care providers and patients. At the same time, it also entails significant risks and challenges that must be carefully addressed. To navigate these complexities, the paper proposes a strategic questionnaire for assessing artificial intelligence-based mental health applications. This tool weighs both benefits and risks, emphasizing the need for a balanced approach to GenAI integration in mental health. The paper calls for a cautious yet positive stance, advocating active engagement of mental health professionals in guiding GenAI development. It emphasizes the importance of ensuring that these advancements are not only technologically sound but also ethically grounded and patient-centered.
Language: English
Citations: 12
JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e70439 - e70439
Published: Jan. 6, 2025
Abstract Generative artificial intelligence (GenAI) shows potential for personalized care, psychoeducation, and even crisis prediction in mental health, yet responsible use requires ethical consideration, deliberation, and perhaps governance. This is the first published theme issue focused on GenAI in mental health. It brings together evidence and insights on GenAI's capabilities, such as emotion recognition, therapy-session summarization, and risk assessment, while highlighting the sensitive nature of mental health data and the need for rigorous validation. Contributors discuss how bias, alignment with human values, transparency, and empathy must be carefully addressed to ensure ethically grounded, artificial intelligence-assisted care. By proposing conceptual frameworks, best practices, and regulatory approaches, including an ethics of care and the preservation of socially important humanistic elements, this issue underscores that GenAI can complement, rather than replace, the vital human role in clinical settings. To achieve this, ongoing collaboration between researchers, clinicians, policy makers, and technologists is essential.
Language: English
Citations: 2
Medical Sciences, Journal Year: 2025, Volume and Issue: 13(1), P. 8 - 8
Published: Jan. 11, 2025
Depression poses significant challenges to global healthcare systems and impacts the quality of life of individuals and their family members. Recent advancements in artificial intelligence (AI) have had a transformative impact on the diagnosis and treatment of depression. These innovations have the potential to significantly enhance clinical decision-making processes and improve patient outcomes in clinical settings. AI-powered tools can analyze extensive data (including medical records, genetic information, and behavioral patterns) to identify early warning signs of depression, thereby enhancing diagnostic accuracy. By recognizing subtle indicators that traditional assessments may overlook, these tools enable providers to make the timely, precise decisions that are crucial for preventing the onset or escalation of depressive episodes. In terms of treatment, AI algorithms can assist in personalizing therapeutic interventions by predicting the effectiveness of various approaches for individual patients based on their unique characteristics and history. This includes recommending tailored treatment plans that consider the patient's specific symptoms. Such personalized strategies aim to optimize outcomes and the overall efficiency of healthcare. This theoretical review uniquely synthesizes current evidence on AI applications in primary care depression management, offering a comprehensive analysis of both diagnostic and treatment personalization capabilities. Alongside these advancements, we also address conflicting findings in the field and the presence of biases that necessitate attention to important limitations.
Language: English
Citations: 2
European Journal of Investigation in Health Psychology and Education, Journal Year: 2025, Volume and Issue: 15(1), P. 9 - 9
Published: Jan. 18, 2025
Large language models (LLMs) offer promising possibilities in mental health, yet their ability to assess disorders and recommend treatments remains underexplored. This quantitative cross-sectional study evaluated four LLMs (Gemini (2.0 Flash Experimental), Claude (3.5 Sonnet), ChatGPT-3.5, and ChatGPT-4) using text vignettes representing conditions such as depression, suicidal ideation, early and chronic schizophrenia, social phobia, and PTSD. Each model's diagnostic accuracy, treatment recommendations, and predicted outcomes were compared with norms established by mental health professionals. Findings indicated that for certain conditions, including depression and PTSD, models like ChatGPT-4 achieved higher diagnostic accuracy than human professionals. However, in more complex cases, LLM performance varied, with some models achieving only 55% accuracy while other models and professionals performed better. LLMs tended to suggest a broader range of proactive treatments, whereas professionals recommended targeted psychiatric consultations and specific medications. In terms of outcome predictions, professionals were generally optimistic regarding full recovery, especially with treatment, while LLMs predicted lower full recovery rates and higher partial recovery rates, particularly in untreated cases. While LLMs recommended a broader treatment range, their more conservative predictions highlight the need for professional oversight. LLMs can provide valuable support in diagnostics and treatment planning but cannot replace professional discretion.
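As a toy illustration of the kind of accuracy comparison the study reports, the sketch below scores hypothetical model diagnoses against hypothetical clinician consensus labels; the vignette IDs, gold labels, and model answers are all invented placeholders, not the study's materials.

```python
# Invented vignette labels (clinician consensus) and invented model outputs.
gold = {"v1": "depression", "v2": "ptsd", "v3": "social phobia",
        "v4": "early schizophrenia", "v5": "chronic schizophrenia"}
model_answers = {"v1": "depression", "v2": "ptsd", "v3": "social phobia",
                 "v4": "depression", "v5": "chronic schizophrenia"}

# Diagnostic accuracy = share of vignettes where the model matches consensus.
correct = sum(model_answers[v] == gold[v] for v in gold)
print(f"diagnostic accuracy: {correct / len(gold):.0%}")  # 80% here
```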
Language: English
Citations: 2
Family Relations, Journal Year: 2025, Volume and Issue: unknown
Published: April 3, 2025
Abstract Objective Although still in its infancy, research shows promise that artificial intelligence (AI) models can be integrated into relationship interventions, and the potential benefits are substantial. This article articulates the challenges and opportunities of developing relationship interventions that integrate AI. Background After defining AI and differentiating machine learning from deep learning, we review key concepts and strategies related to AI, specifically natural language processing, interpretability, and human-in-the-loop strategies, as approaches needed to develop such interventions. Method We explore how AI is currently used in the family life literature and how this work has served as a foundation for further integrating AI. The use of AI in therapy contexts is examined, and we identify ethical considerations that need to be addressed as this technology develops. Results We examine the potential of using AI by focusing on four areas: diagnosis of relationship problems, providing autonomous treatment, predicting successful treatment outcomes (prognosis), and using biomarkers to monitor client reactions. Opportunities explored include the development of data-efficient training methods, creating interpretable models focused on relationships, the integration of clinical expertise during model development, and combining biomarker data with other modalities. Conclusion Despite obstacles, AI can provide families with personalized support to strengthen bonds and overcome relational challenges. Implications The emerging intersection of AI and family science can pioneer innovative solutions for diverse family needs.
Language: English
Citations: 1
JMIR Mental Health, Journal Year: 2025, Volume and Issue: 12, P. e60432 - e60432
Published: Feb. 21, 2025
Background Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns. Objective We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues. Methods We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist and added additional articles through snowball searching. Articles in English or Dutch were included; all article types were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM, TS, and AvB). An initial charting form was created based on expected themes and was revised and complemented during the charting process. The ethical challenges were divided into themes; when a concern occurred in multiple articles, we identified it as a distinct theme. Results We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%), followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles), with the most common topics within this theme being suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns about care workers' jobs (n=16, 15.8%). Other concerns were discussed in 9.9% (n=10) of the articles. Conclusions This scoping review has comprehensively covered the ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, the study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison with human therapists, determining its appropriate roles in therapeutic contexts and its impact on access to care, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of guidelines for responsible CAI use in mental health care.
Language: English
Citations: 1
PeerJ, Journal Year: 2024, Volume and Issue: 12, P. e17468 - e17468
Published: May 29, 2024
The aim of this study was to evaluate the effectiveness of ChatGPT-3.5 and ChatGPT-4 in incorporating critical risk factors, namely a history of depression and access to weapons, into suicide risk assessments. Both models assessed scenarios that featured individuals with and without a history of depression and access to weapons. The models estimated the likelihood of suicidal thoughts, suicide attempts, serious suicide attempts, and suicide-related mortality on a Likert scale. A multivariate three-way ANOVA with Bonferroni post hoc tests was conducted to examine the impact of the aforementioned independent factors (history of depression and access to weapons) on these outcome variables. Both models identified history of depression as a significant risk factor. ChatGPT-4 demonstrated a more nuanced understanding of the relationship between depression, access to weapons, and suicide risk; in contrast, ChatGPT-3.5 displayed limited insight into this complex relationship. ChatGPT-4 consistently assigned higher severity ratings to all outcome variables than did ChatGPT-3.5. The study highlights the potential of the two models, particularly ChatGPT-4, to enhance suicide risk assessment by considering complex risk factors.
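A rough sketch of this kind of factorial analysis: a three-way ANOVA over model version, depression history, and weapons access, fit here with statsmodels on invented data. The column names, factor coding, and ratings below are illustrative assumptions, not the study's materials; Bonferroni-corrected post hoc comparisons would follow the omnibus test.

```python
# Hypothetical three-way factorial ANOVA sketch (invented data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "model": np.repeat(["gpt35", "gpt4"], 40),               # model version
    "depression": np.tile(np.repeat(["yes", "no"], 20), 2),  # depression history
    "weapons": np.tile(np.repeat(["yes", "no"], 10), 4),     # weapons access
    "risk_rating": rng.integers(1, 8, size=80).astype(float) # Likert-style rating
})

# Full factorial model with all main effects and interactions.
fit = smf.ols("risk_rating ~ model * depression * weapons", data=df).fit()
print(anova_lm(fit, typ=2))
```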
Language: English
Citations: 5
JMIR Formative Research, Journal Year: 2025, Volume and Issue: 9, P. e68347 - e68347
Published: Jan. 6, 2025
Cognitive assessment is an important component of applied psychology, but limited access and high costs make these evaluations challenging. This study aimed to examine the feasibility of using large language models (LLMs) to create personalized artificial intelligence-based verbal comprehension tests (AI-BVCTs) for assessing verbal intelligence, in contrast with traditional methods based on standardized norms. We used a within-participants design, comparing scores obtained from AI-BVCTs with those from the Wechsler Adult Intelligence Scale (WAIS-III) verbal comprehension index (VCI). In total, 8 Hebrew-speaking participants completed both the VCI and an AI-BVCT, the latter generated by the LLM Claude. The concordance correlation coefficient (CCC) demonstrated strong agreement between AI-BVCT and VCI scores (Claude: CCC=.75, 90% CI 0.266-0.933; GPT-4: CCC=.73, 90% CI 0.170-0.935). Pearson correlations further supported these findings, showing strong associations between the scores (r=.84, P<.001; r=.77, P=.02). No statistically significant differences were found between AI-BVCT and VCI scores (P>.05). These findings support the potential of LLMs to assess verbal intelligence and attest to the promise of AI-based cognitive tests for increasing the accessibility and affordability of assessment processes, enabling personalized testing. The research also raises ethical concerns regarding privacy and overreliance on AI in clinical work. Further research with larger and more diverse samples is needed to establish the validity and reliability of this approach and to develop more accurate scoring procedures.
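For readers unfamiliar with the agreement statistic reported here, this is a minimal sketch of Lin's concordance correlation coefficient, which penalizes both low correlation and systematic offset between two score sets. The paired scores below are hypothetical, not the participants' data, and this sample-variance variant is one common formulation.

```python
# Sketch of Lin's concordance correlation coefficient (hypothetical scores).
import numpy as np

def concordance_ccc(x, y):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Invented paired scores for 8 participants (traditional test vs AI-based test).
vci     = np.array([95, 102, 110, 88, 120, 105, 99, 115])
ai_bvct = np.array([97, 100, 108, 91, 118, 109, 96, 117])
print(f"CCC = {concordance_ccc(vci, ai_bvct):.2f}")
```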
Language: English
Citations: 0