Conversational AI in Pediatric Mental Health: A Narrative Review
Masab Mansoor, Ali Hamide, Tyler Tran

et al.

Children, Journal year: 2025, Issue: 12(3), pp. 359–359

Published: March 14, 2025

Background/Objectives: Mental health disorders among children and adolescents represent a significant global challenge, with approximately 50% of conditions emerging before age 14. Despite substantial investment in services, persistent barriers such as provider shortages, stigma, and accessibility issues continue to limit effective care delivery. This narrative review examines the application of conversational artificial intelligence (AI) in pediatric mental health contexts, mapping the current evidence base, identifying therapeutic mechanisms, and exploring the unique developmental considerations required for implementation. Methods: We searched multiple electronic databases (PubMed/MEDLINE, PsycINFO, ACM Digital Library, IEEE Xplore, Scopus) for literature published between January 2010 and February 2025 that addressed conversational AI applications relevant to pediatric mental health. We employed a narrative synthesis approach with thematic analysis to organize findings across technological approaches, clinical applications, developmental considerations, implementation factors, and ethical frameworks. Results: The review identified promising applications of conversational AI in pediatric mental health, particularly for common conditions such as anxiety and depression, for psychoeducation and skills practice, and for bridging gaps in traditional care. However, most robust empirical research has focused on adult populations, with pediatric populations only beginning to receive dedicated investigation. Key therapeutic mechanisms include reduced barriers to self-disclosure, cognitive change, emotional validation, and behavioral activation. Developmental considerations emerged as fundamental challenges, necessitating age-appropriate adaptations across cognitive, emotional, and linguistic dimensions rather than simple modifications of adult-oriented systems. Conclusions: Conversational AI has the potential to address unmet needs as a complement to, rather than a replacement for, human-delivered care. Future research should prioritize longitudinal outcomes, implementation science, safety monitoring, and equity-focused design. Interdisciplinary collaboration involving families is essential to ensure these technologies effectively serve young people while mitigating risks.

Language: English

Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions – A Narrative Review for a Comprehensive Insight
Ahmed M. Alhuwaydi

Risk Management and Healthcare Policy, Journal year: 2024, Issue: Volume 17, pp. 1339–1348

Published: May 1, 2024

Abstract: Mental health is an essential component of the health and well-being of a person and community, and it is critical for the individual, societal, and socio-economic development of any country. Mental healthcare is currently in an era of sector transformation, with emerging technologies such as artificial intelligence (AI) reshaping the screening, diagnosis, and treatment modalities of psychiatric illnesses. The present narrative review aimed at discussing the current landscape and the role of AI in mental healthcare, including screening, diagnosis, and treatment. Furthermore, this review attempted to highlight the key challenges, limitations, and prospects of providing AI-based mental healthcare based on existing works of literature. The literature was obtained from PubMed, the Saudi Digital Library (SDL), Google Scholar, Web of Science, and IEEE Xplore, and we included only English-language articles published in the last five years. Keywords used in combination with the Boolean operators ("AND" and "OR") were the following: "Artificial intelligence", "Machine learning", "Deep learning", "Early diagnosis", "Treatment", "interventions", "ethical consideration", and "mental Healthcare". Our review revealed that AI, equipped with predictive analytics capabilities, can improve treatment planning by predicting an individual's response to various interventions. Predictive analytics, which uses historical data to formulate preventative interventions, aligns with the move toward individualized and preventive healthcare. In the screening and diagnostic domains, a subset of AI methods, machine learning and deep learning, has been proven to analyze large data sets and predict patterns associated with mental health problems. However, limited studies have evaluated the collaboration between AI and mental health professionals in delivering care, as these sensitive problems require empathy, human connection, and holistic, personalized, multidisciplinary approaches. Ethical issues, cybersecurity, lack of diversity, cultural sensitivity, and language barriers remain concerns in implementing this futuristic approach in mental healthcare. Considering these approaches and concerns, it is imperative to explore these aspects further. Therefore, future comparative trials with larger sample sizes are warranted to evaluate different AI models across regions and fill the existing knowledge gaps. Keywords: artificial intelligence, machine learning, deep learning, early diagnosis, treatment, interventions, ethical consideration, mental healthcare

Language: English

Cited by

24

Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Well-Being: Ensuring Responsible Implementation and Impact
Hamid Reza Saeidnia, Seyed Ghasem Hashemi Fotami, Brady Lund

et al.

Social Sciences, Journal year: 2024, Issue: 13(7), pp. 381–381

Published: July 22, 2024

AI has the potential to revolutionize mental health services by providing personalized support and improving accessibility. However, it is crucial to address ethical concerns to ensure responsible implementation and beneficial outcomes for individuals. This systematic review examines the ethical considerations surrounding the implementation and impact of artificial intelligence (AI) interventions in the field of mental health and well-being. To enable a comprehensive analysis, we employed a structured search strategy across top academic databases, including PubMed, PsycINFO, Web of Science, and Scopus. The scope encompassed articles published from 2014 to 2024, resulting in 51 relevant articles. The review identifies 18 key ethical considerations: 6 ethical considerations associated with using AI for mental health and wellbeing (privacy and confidentiality, informed consent, bias and fairness, transparency and accountability, autonomy and human agency, and safety and efficacy); 5 ethical principles for the development of AI technologies in mental health settings to ensure ethical practice and positive impact (ethical framework, stakeholder engagement, ethical review, bias mitigation, and continuous evaluation and improvement); and 7 practices, guidelines, and recommendations for promoting ethical use (adhere to ethical guidelines, ensure transparency, prioritize data privacy and security, mitigate bias, involve stakeholders, conduct regular ethical reviews, and monitor and evaluate outcomes). The review highlights the importance of these ethical considerations. By addressing privacy, bias, human oversight, and continuous evaluation, we can ensure that AI interventions like chatbots and AI-enabled medical devices are developed and deployed in an ethically sound manner, respecting individual rights and maximizing benefits while minimizing harm.

Language: English

Cited by

19

Utilizing natural language processing and large language models in the diagnosis and prediction of infectious diseases: A systematic review
Mahmud Omar, Dana Brin, Benjamin S. Glicksberg

et al.

American Journal of Infection Control, Journal year: 2024, Issue: 52(9), pp. 992–1001

Published: April 6, 2024

Language: English

Cited by

12

Large Language Models in Mental Health Care: A Systematic Scoping Review (Preprint)
Yining Hua, Fenglin Liu, Kailai Yang

et al.

Published: July 8, 2024

BACKGROUND: The integration of large language models (LLMs) in mental health care is an emerging field. There is a need to systematically review the application outcomes and delineate the advantages and limitations in clinical settings. OBJECTIVE: This review aims to provide a comprehensive overview of the use of LLMs in mental health care, assessing their efficacy, challenges, and potential for future applications. METHODS: A systematic search was conducted across multiple databases including PubMed, Web of Science, Google Scholar, arXiv, medRxiv, and PsyArXiv in November 2023. All forms of original research, peer-reviewed or not, published and disseminated between October 1, 2019, and December 2, 2023, were included without restrictions if they used LLMs developed after T5 and directly addressed the research questions. RESULTS: From an initial pool of 313 articles, 34 met the inclusion criteria based on their relevance to LLM use in mental health care and the robustness of reported outcomes. Diverse applications were identified, including diagnosis, therapy, patient engagement enhancement, etc. Key challenges include data availability and reliability, the nuanced handling of mental states, and effective evaluation methods. Despite successes in accuracy and accessibility improvement, gaps in clinical applicability and ethical considerations were evident, pointing to the need for robust data, standardized evaluations, and interdisciplinary collaboration. CONCLUSIONS: LLMs hold substantial promise for enhancing mental health care. For their full potential to be realized, emphasis must be placed on developing robust datasets, evaluation frameworks, ethical guidelines, and interdisciplinary collaborations to address current limitations.

Language: English

Cited by

12

Applications of large language models in psychiatry: a systematic review
Mahmud Omar, Shelly Soffer, Alexander W. Charney

et al.

Frontiers in Psychiatry, Journal year: 2024, Issue: 15

Published: June 24, 2024

Background: With their unmatched ability to interpret and engage with human language and context, large language models (LLMs) hint at the potential to bridge AI and human cognitive processes. This review explores the current application of LLMs, such as ChatGPT, in the field of psychiatry. Methods: We followed PRISMA guidelines and searched through PubMed, Embase, Web of Science, and Scopus, up until March 2024. Results: From 771 retrieved articles, we included 16 that directly examine LLMs’ use in psychiatry. LLMs, particularly ChatGPT and GPT-4, showed diverse applications in clinical reasoning, social media, and education within psychiatry. They can assist in diagnosing mental health issues, managing depression, evaluating suicide risk, and supporting education in the field. However, our review also points out their limitations, such as difficulties with complex cases and underestimation of suicide risks. Conclusion: Early research on LLMs in psychiatry reveals versatile applications, from diagnostic support to educational roles. Given the rapid pace of advancement, future investigations are poised to explore the extent to which these models might redefine traditional roles in mental health care.

Language: English

Cited by

11

Regulating AI in Mental Health: Ethics of Care Perspective

Tamar Tavory

JMIR Mental Health, Journal year: 2024, Issue: 11, pp. e58493–e58493

Published: July 20, 2024

This article contends that the responsible artificial intelligence (AI) approach, which is the dominant ethics approach ruling most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of accountability and responsibility of the companies developing AI. The article proposes applying the ethics of care approach to AI regulation, which can offer a more comprehensive framework that addresses AI's dual impact on individuals and on human relationships and is essential for effective regulation in the domain of mental health care. The article delves into the emergence of a new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. It highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing an ethics of care approach can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.

Language: English

Cited by

8

Assessing ChatGPT’s Accuracy and Reliability in Asthma General Knowledge: Implications for Artificial Intelligence Use in Public Health Education
Muhammad Thesa Ghozali

Journal of Asthma, Journal year: 2025, Issue: unknown, pp. 1–9

Published: January 8, 2025

Integrating Artificial Intelligence (AI) into public health education represents a pivotal advancement in medical knowledge dissemination, particularly for chronic diseases such as asthma. This study assesses the accuracy and comprehensiveness of ChatGPT, a conversational AI model, in providing asthma-related information. Employing a rigorous mixed-methods approach, healthcare professionals evaluated ChatGPT's responses to the Asthma General Knowledge Questionnaire for Adults (AGKQA), a standardized instrument covering various asthma topics. Responses were graded for accuracy and completeness and analyzed using statistical tests to assess reproducibility and consistency. ChatGPT showed notable proficiency in conveying asthma knowledge, with a flawless success rate in the etiology and pathophysiology categories and substantial accuracy for medication information (70%). However, limitations were noted in medication-related responses, where mixed grades (30%) highlight the need for further refinement of its capabilities to ensure reliability in critical areas of asthma education. Reproducibility analysis demonstrated a consistent 100% rate across all categories, affirming ChatGPT's reliability in delivering uniform information. Statistical analyses further underscored this stability and reliability. These findings underscore ChatGPT's promise as a valuable educational tool while emphasizing the necessity of ongoing improvements to address the observed limitations, particularly regarding medication-related information.

Language: English

Cited by

1

AI in the Classroom: Insights from Educators on Usage, Challenges, and Mental Health
Julie A. Delello, Woonhee Sung, Kouider Mokhtari

et al.

Education Sciences, Journal year: 2025, Issue: 15(2), pp. 113–113

Published: January 21, 2025

This study examines educators’ perceptions of artificial intelligence (AI) in educational settings, focusing on their familiarity with AI tools, integration into teaching practices, professional development needs, the influence of institutional policies, and impacts on mental health. Survey responses from 353 educators across various educational levels and countries revealed that 92% of respondents are familiar with AI, utilizing it to enhance efficiency and streamline administrative tasks. Notably, many reported students using AI tools like ChatGPT for assignments, prompting adaptations of teaching methods to promote critical thinking and reduce dependency. Some educators saw AI’s potential to reduce stress through automation, but others raised concerns about increased anxiety, social isolation, and reduced interpersonal interactions. The study highlights a gap in institutional policies, leading some educators to establish their own guidelines, particularly on matters such as data privacy and plagiarism. Furthermore, respondents identified a significant need for focused professional development on AI literacy and ethical considerations. The study’s findings suggest the necessity of longitudinal studies to explore the long-term effects of AI on educational outcomes and mental health and underscore the importance of incorporating student perspectives for a thorough understanding of AI’s role in education.

Language: English

Cited by

1

“It happened to be the perfect thing”: experiences of generative AI chatbots for mental health

Steven Siddals, John Torous, Astrid Coxon

et al.

npj Mental Health Research, Journal year: 2024, Issue: 3(1)

Published: October 27, 2024

Abstract: The global mental health crisis underscores the need for accessible, effective interventions. Chatbots based on generative artificial intelligence (AI), like ChatGPT, are emerging as novel solutions, but research on real-life usage is limited. We interviewed nineteen individuals about their experiences of using generative AI chatbots for mental health. Participants reported high engagement and positive impacts, including better relationships and healing from trauma and loss. We developed four themes: (1) a sense of ‘emotional sanctuary’, (2) ‘insightful guidance’, particularly about relationships, (3) the ‘joy of connection’, and (4) comparisons between the ‘AI therapist’ and human therapy. Some themes echoed prior research on rule-based chatbots, while others seemed novel to generative AI. Participants emphasised the need for safety guardrails, human-like memory, and the ability to lead the therapeutic process. Generative AI chatbots may offer mental health support that feels meaningful to users, but further research is needed into their effectiveness.

Language: English

Cited by

7

Assessing the Accuracy of Generative Conversational Artificial Intelligence in Debunking Sleep Health Myths: Mixed Methods Comparative Study With Expert Analysis
Nicola Luigi Bragazzi, Sergio Garbarino

JMIR Formative Research, Journal year: 2024, Issue: 8, pp. e55762–e55762

Published: March 14, 2024

Adequate sleep is essential for maintaining individual and public health, positively affecting cognition and well-being, and reducing chronic disease risks. It plays a significant role in driving the economy, in safety, and in managing health care costs. Digital tools, including websites, sleep trackers, and apps, are key in promoting sleep health education. Conversational artificial intelligence (AI) such as ChatGPT (OpenAI, Microsoft Corp) offers accessible, personalized advice on sleep health but raises concerns about potential misinformation. This underscores the importance of ensuring that AI-driven information is accurate, given its impact on the spread of sleep-related myths.

Language: English

Cited by

6