Data Security and Privacy Considerations in Mental Health Settings
Liangshun Wu

Advances in Medical Technologies and Clinical Practice Book Series, Journal Year: 2024, Volume and Issue: unknown, P. 237 - 266

Published: Dec. 27, 2024

The integration of chatbots in mental health services necessitates a close examination of data security and privacy issues. This chapter explores these concerns in psychology and psychiatry, highlighting the unique challenges and opportunities posed by digital tools in protecting sensitive patient information. The authors analyze current vulnerabilities and threats, drawing on case studies of breaches to emphasize the need for robust protection measures. Legal and ethical considerations, including HIPAA and GDPR, are reviewed to underscore the importance of compliance. The chapter proposes strategies for enhancing security, such as secure communication protocols and regular audits. The goal is to provide mental health professionals, cybersecurity experts, policymakers, and researchers with practical guidelines to ensure the secure use of chatbots in these services, balancing innovation with confidentiality.
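One widely used building block of the kind of protection the chapter calls for is encrypting sensitive records before they are stored. The sketch below is not taken from the chapter; the transcript fields and the choice of the Python cryptography library's Fernet recipe are illustrative assumptions, shown only to make "robust protection measures" concrete.

```python
# Minimal illustrative sketch (not from the chapter): authenticated symmetric
# encryption of a mental health chat transcript before it is written to storage.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would come from a managed secret store, never from code.
key = Fernet.generate_key()
cipher = Fernet(key)

transcript = '{"patient_id": "anon-123", "messages": ["I have been feeling low."]}'

# Encrypt before persisting; Fernet also authenticates, so tampering is detected.
ciphertext = cipher.encrypt(transcript.encode("utf-8"))

# Decrypt only inside an audited, access-controlled service.
restored = cipher.decrypt(ciphertext).decode("utf-8")
assert restored == transcript
```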

Language: English

AI Chatbots and Cognitive Control: Enhancing Executive Functions Through Chatbot Interactions: A Systematic Review
Pantelis Pergantis, Victoria Bamicha, Charalampos Skianis, et al.

Brain Sciences, Journal Year: 2025, Volume and Issue: 15(1), P. 47 - 47

Published: Jan. 6, 2025

Background/Objectives: The evolution of digital technology enhances and broadens a person's intellectual growth. Research points out that implementing innovative applications in the digital world improves human social, cognitive, and metacognitive behavior. Artificial intelligence chatbots are yet another human-made construct: forms of software that simulate conversation, understand and process user input, and provide personalized responses. Executive function includes the set of higher mental processes necessary for formulating, planning, and achieving a goal. The present study aims to investigate executive function reinforcement through artificial intelligence chatbots, outlining their potential, limitations, and future research suggestions. Specifically, it examined three questions: the use of conversational agents in executive functioning training, their impact on executive-cognitive skills, and the duration of any improvements. Methods: The assessment of the existing literature was implemented using the systematic review method, according to the PRISMA 2020 principles. The avalanche search method was employed to conduct the source search in the following databases: Scopus, Web of Science, and PubMed, complemented by Google Scholar. The review included studies from 2021 onward using experimental, observational, or mixed methods. It covered AI-based conversationalists that support executive functions in relation to anxiety, stress, depression, memory, attention, cognitive load, and behavioral changes. In addition, it considered both general populations and populations with specific neurological conditions, all peer-reviewed, written in English, with full-text access. It excluded studies published before 2021, reviews, non-AI-based conversationalists, studies not targeting the range of executive skills and abilities, and studies that did not meet criteria aligned with the objectives, ensuring a focus on AI agents and executive function. The initial collection totaled n = 115 articles; however, the eligibility requirements led to a final selection of 10 studies. Results: The findings suggested positive effects of AI chatbots in enhancing and improving executive skills. However, several limitations were identified, making it still difficult to generalize or reproduce the effects. Conclusions: An AI chatbot can be an assistant in learning and in expanding executive skills, contributing to the metacognitive, cognitive, and social development of the individual, although its use in executive function training is at a primary stage. The review highlighted the need for a unified framework of reference for future studies, better designs, more diverse populations, larger sample sizes of participants, and longitudinal studies to observe the long-term effects of use.
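The eligibility screening described in the Methods (2021 onward, peer-reviewed, English, full-text access, AI-based conversational agents targeting executive skills) can be expressed as a simple record filter. The sketch below is not the authors' code; the Record fields and their names are assumptions used only to make the stated inclusion and exclusion criteria concrete.

```python
from dataclasses import dataclass

@dataclass
class Record:
    year: int
    language: str
    peer_reviewed: bool
    full_text: bool
    ai_based_agent: bool          # excludes non-AI-based conversationalists
    targets_executive_skills: bool

def is_eligible(r: Record) -> bool:
    """Screening filter mirroring the review's stated eligibility criteria."""
    return (
        r.year >= 2021
        and r.language.lower() == "english"
        and r.peer_reviewed
        and r.full_text
        and r.ai_based_agent
        and r.targets_executive_skills
    )

# Toy records: the second is excluded because it predates 2021.
records = [Record(2022, "English", True, True, True, True),
           Record(2019, "English", True, True, True, True)]
included = [r for r in records if is_eligible(r)]
```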

Language: English

Citations: 5

The good, the bad, and the GPT: Reviewing the impact of generative artificial intelligence on psychology
Mohammed Salah, Fadi Abdelfattah, Hussam Al Halbusi, et al.

Current Opinion in Psychology, Journal Year: 2024, Volume and Issue: 59, P. 101872 - 101872

Published: Aug. 23, 2024

Language: English

Citations: 5

Balancing Ethics and Opportunities: The Role of AI in Psychotherapy and Counselling
Alexandra Bloch‐Atefi

Deleted Journal, Journal Year: 2025, Volume and Issue: unknown

Published: March 2, 2025

The integration of artificial intelligence (AI) into psychotherapy and counselling practices presents both significant opportunities and ethical challenges. AI applications, such as automated scheduling, AI-assisted note taking, client engagement tools, and predictive analytics for client needs, are transforming the delivery of mental health care. However, the ethical considerations surrounding AI’s role in care are complex and multifaceted, necessitating careful scrutiny. This paper discusses what is required for the responsible implementation of AI technologies in counselling, focusing on data protection, informed consent, and the preservation of the therapeutic relationship. It argues that professional bodies and educational institutions must collaborate to develop dynamic, adaptable guidelines that ensure the safe and effective use of AI tools. Furthermore, it emphasises the need for robust data protection mechanisms to safeguard sensitive information and proposes strategies to balance the benefits of AI with human connection.

Language: English

Citations: 0

Too Perfect to Be Like a Human? Ethical Challenges of Emotional AI in Health care
Andrew Tsz Wan Hung

Fudan Journal of the Humanities and Social Sciences, Journal Year: 2025, Volume and Issue: unknown

Published: April 2, 2025

Language: English

Citations: 0

The Efficacy of Conversational Artificial Intelligence in Rectifying the Theory of Mind and Autonomy Biases: Comparative Analysis (Preprint)
Marcin Rządeczka, Anna Sterna, Julia Stolińska, et al.

JMIR Mental Health, Journal Year: 2024, Volume and Issue: 12, P. e64396 - e64396

Published: Oct. 29, 2024

The increasing deployment of conversational artificial intelligence (AI) in mental health interventions necessitates an evaluation of their efficacy in rectifying cognitive biases and recognizing affect in human-AI interactions. These biases are particularly relevant in mental health contexts, as they can exacerbate conditions such as depression and anxiety by reinforcing maladaptive thought patterns or unrealistic expectations. This study aimed to assess the effectiveness of therapeutic chatbots (Wysa and Youper) versus general-purpose language models (GPT-3.5, GPT-4, and Gemini Pro) in identifying and rectifying user cognitive biases. The study used constructed case scenarios simulating typical user-bot interactions to examine how effectively the chatbots address selected biases. The biases assessed included theory-of-mind biases (anthropomorphism, overtrust, and attribution) and autonomy biases (illusion of control, fundamental attribution error, and just-world hypothesis). Each chatbot response was evaluated based on accuracy, quality, and adherence to cognitive behavioral therapy principles using an ordinal scale to ensure consistency in scoring. To enhance reliability, responses underwent a double review process by 2 scientists, followed by a secondary review by a clinical psychologist specializing in cognitive behavioral therapy, ensuring a robust assessment across interdisciplinary perspectives. The results revealed that the general-purpose language models outperformed the therapeutic chatbots in rectifying cognitive biases, particularly the overtrust bias and the just-world hypothesis. GPT-4 achieved the highest scores across all biases, whereas the therapeutic bot Wysa scored the lowest. Notably, the general-purpose bots showed more consistent accuracy and adaptability in addressing bias-related cues across different contexts, suggesting broader flexibility in handling complex cognitive patterns. In addition, in affect recognition tasks, the general-purpose models not only excelled but also demonstrated quicker adaptation to subtle emotional nuances, outperforming the therapeutic chatbots in 67% (4/6) of the tested cases. This shows that, while therapeutic chatbots hold promise for mental health support and cognitive bias intervention, their current capabilities are limited. Addressing cognitive biases in AI-human interactions requires systems that can both rectify and analyze biases as an integral part of human cognition, promoting precision and empathy. The findings reveal the need for improved simulated emotional intelligence in chatbot design to provide adaptive, personalized responses that reduce overreliance and encourage independent coping skills. Future research should focus on enhancing affective recognition mechanisms and addressing ethical concerns, including bias mitigation and data privacy, to enable safe, effective AI-based mental health support.
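The scoring procedure summarized above (ordinal ratings per chatbot response, reviewed by multiple raters, then compared across chatbots) can be made concrete with a small aggregation sketch. This is not the authors' pipeline: the dictionary layout, the bias labels, the number of ratings per response, and all score values are placeholders, not the study's data.

```python
from statistics import mean

# scores[chatbot][bias] = list of ordinal ratings from the independent reviewers.
# All numbers below are invented placeholders for illustration only.
scores = {
    "GPT-4": {"overtrust": [4, 5], "just_world": [5, 4]},
    "Wysa":  {"overtrust": [2, 2], "just_world": [1, 2]},
}

def summarize(scores: dict) -> dict:
    """Average the reviewers' ordinal ratings per bias, then per chatbot."""
    summary = {}
    for bot, biases in scores.items():
        per_bias = {bias: mean(ratings) for bias, ratings in biases.items()}
        summary[bot] = {"per_bias": per_bias, "overall": mean(per_bias.values())}
    return summary

for bot, result in summarize(scores).items():
    print(bot, result["overall"], result["per_bias"])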

Language: English

Citations: 1

Integrating Artificial Intelligence (AI) Chatbots for Depression Management: A New Frontier in Primary Care
Haroon Latif Khan, Syed Faqeer Hussain Bokhari

Cureus, Journal Year: 2024, Volume and Issue: unknown

Published: Aug. 14, 2024

Depression is a prevalent mental health disorder that significantly impacts primary care settings. This editorial explores the potential of artificial intelligence (AI)-powered chatbots in managing depression within primary care environments. AI chatbots offer innovative solutions to challenges faced by healthcare providers, including limited appointment times, delayed access to specialists, and the stigma associated with mental health issues. These digital tools can provide continuous support, personalized interactions, and early symptom detection, potentially improving accessibility and outcomes in depression management. Their integration presents opportunities for round-the-clock patient support, early interventions, and a reduction in stigma. However, challenges persist, including concerns about assessment accuracy, data privacy, and integration with existing systems. Successful implementation requires systematic approaches, stakeholder engagement, and comprehensive training for providers. Ethical considerations, such as ensuring informed consent, addressing algorithmic biases, and maintaining the human element of care, are crucial for responsible deployment. As the technology evolves, future directions may include enhanced natural language processing, multimodal integration, and AI-augmented clinical decision support. The editorial emphasizes the need for a balanced approach that leverages AI chatbots while acknowledging their limitations and the irreplaceable value of human clinical judgment in depression management.
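As one concrete illustration of the "early symptom detection" such chatbots could support, a minimal PHQ-9 scoring sketch is shown below. The PHQ-9 questionnaire and its severity bands are standard; the function name and the idea of a chatbot collecting the answers are assumptions for illustration, and any real deployment would require clinical oversight rather than automated decisions.

```python
def phq9_severity(item_scores: list[int]) -> tuple[int, str]:
    """Score a PHQ-9 questionnaire: nine items, each rated 0-3, total 0-27."""
    if len(item_scores) != 9 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores in the range 0-3")
    total = sum(item_scores)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

# Example: answers a chatbot might have collected during a routine check-in.
print(phq9_severity([1, 2, 1, 0, 2, 1, 1, 0, 0]))   # -> (8, 'mild')
```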

Language: English

Citations: 0

Disrupted self, therapy, and the limits of conversational AI
Dina Babushkina, Bas de Boer

Philosophical Psychology, Journal Year: 2024, Volume and Issue: unknown, P. 1 - 27

Published: Sept. 3, 2024

Conversational agents (CAs) are thought to be promising for psychotherapy because they give the impression of being able to engage in conversations with human users. However, given the high risk that therapy poses to patients who are already in a vulnerable situation, there is a need to investigate the extent to which CAs can contribute to therapeutic goals and to discuss CAs' limitations, especially in complex cases. In this paper, we understand therapy as a way of dealing with existential situations and position CAs in the context of the therapeutic experience of patients, which is determined by a patient's unique personal situation and specific goals. We suggest that therapy is a fundamentally dialogical activity and that it crucially involves work on the self and one's self-narrative. This brings us to our central question: is a productive therapeutic dialogue with CAs possible, given their limitations as epistemic agents? We discuss several of those limitations, show how they undermine the possibility of engaging in such a dialogue, and illustrate this through discussions of cases of grief and abuse.

Language: English

Citations: 0
