Transforming Perceptions: Exploring the Multifaceted Potential of Generative AI for People with Cognitive Disabilities (Preprint)

Dorit Hadar Souval,

Yuval Haber, Amir Tal

et al.

Published: July 10, 2024

BACKGROUND: The emergence of generative artificial intelligence (GenAI) presents unprecedented opportunities to redefine conceptions of personhood and cognitive disability, potentially enhancing the inclusion and participation of individuals with cognitive disabilities in society. OBJECTIVE: To explore the transformative potential of GenAI in reshaping perceptions, dismantling societal barriers, and promoting social inclusion for people with cognitive disabilities. METHODS: A critical review of the current literature in disability studies, artificial intelligence (AI) ethics, and computer science, integrating insights from theories in the philosophy of technology. The analysis focused on two key aspects: GenAI as a mirror reflecting societal values and biases, and GenAI as a partner in daily life. RESULTS: The article proposes a theoretical framework for understanding the impact of GenAI on cognitive disability. It introduces the concept of a "social mirror" that reflects and amplifies societal attitudes, and of a "cognitive co-pilot" providing personalized assistance with daily tasks, social interactions, and environmental navigation. It also presents a novel protocol for developing AI systems tailored to the needs of people with cognitive disabilities, emphasizing user involvement, ethical considerations, and the need to address both the opportunities and challenges posed by GenAI. CONCLUSIONS: Although GenAI has great potential for empowerment, realizing this potential requires a change in societal attitudes and development practices. The article calls for interdisciplinary collaboration and close partnership with the disability community in the implementation of these technologies. Implications: Realizing this potential requires a multifaceted approach, involving a shift in societal attitudes, inclusive development practices that prioritize the perspectives of the disability community, and ongoing collaboration. The article emphasizes the importance of proceeding with caution, recognizing the complexities and risks alongside the possibilities.

Language: English

The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy
Yuval Haber, Inbar Levkovich, Dorit Hadar‐Shoval

et al.

JMIR Mental Health, Journal Year: 2024, Number 11, pp. e54781 - e54781

Published: April 18, 2024

This paper explores a significant shift in the field of mental health in general, and psychotherapy in particular, following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect changes in the therapeutic act with the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? (3) What remains irreplaceable at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.

Language: English

Cited by

27

Generative AI, IoT, and blockchain in healthcare: application, issues, and solutions
Tehseen Mazhar,

Sunawar Khan,

Tariq Shahzad

et al.

Discover Internet of Things, Journal Year: 2025, Number 5(1)

Published: January 13, 2025

Language: English

Cited by

4

Large Language Models for Mental Health Applications: A Systematic Review (Preprint)
Zhijun Guo, Alvina G. Lai, Johan H. Thygesen

et al.

JMIR Mental Health, Journal Year: 2024, Number 11, pp. e57400 - e57400

Published: September 3, 2024

Background: Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate. Objective: This systematic review aims to critically assess the use of LLMs in mental health, specifically focusing on their applicability and efficacy in early screening, digital interventions, and clinical settings. By systematically collating and assessing the evidence from current studies, our work analyzes models, methodologies, data sources, and outcomes, thereby highlighting the challenges present and the prospects for clinical use. Methods: Adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, this review searched 5 open-access databases: MEDLINE (accessed by PubMed), IEEE Xplore, Scopus, JMIR, and ACM Digital Library. Keywords used were (mental health OR illness OR disorder OR psychiatry) AND (large language models). The study included articles published between January 1, 2017, and April 30, 2024, and excluded articles in languages other than English. Results: In total, 40 articles were evaluated, including 15 (38%) on mental health conditions and suicidal ideation detection through text analysis, 7 (18%) on LLMs as conversational agents, and 18 (45%) on other applications and evaluations of LLMs in mental health. LLMs show good effectiveness in detecting mental health issues and providing accessible, destigmatized eHealth services. However, the assessments also indicate that the risks associated with their clinical use might surpass the benefits. These include inconsistencies in generated text; the production of hallucinations; and the absence of a comprehensive, benchmarked ethical framework. Conclusions: This review examines the potential of LLMs in mental health alongside their inherent risks. The study identifies several issues: the lack of multilingual datasets annotated by experts, concerns regarding the accuracy and reliability of generated content, challenges in interpretability due to the "black box" nature of LLMs, and ongoing ethical dilemmas. These include the lack of a clear, benchmarked ethical framework; privacy issues; and the potential for overreliance on LLMs by both physicians and patients, which could compromise traditional medical practices. As a result, LLMs should not be considered substitutes for professional mental health services. However, the rapid development of LLMs underscores their potential as valuable clinical aids, emphasizing the need for continued research and development in this area. Trial Registration: PROSPERO CRD42024508617; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=508617

Language: English

Cited by

18

Regulating AI in Mental Health: Ethics of Care Perspective

Tamar Tavory

JMIR Mental Health, Journal Year: 2024, Number 11, pp. e58493 - e58493

Published: July 20, 2024

This article contends that the responsible artificial intelligence (AI) approach, which is the dominant ethics approach ruling most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of the accountability and responsibility of companies developing AI. This article proposes applying the ethics of care approach to AI regulation, which can offer a more comprehensive framework that addresses AI's dual impact on human relationships. This approach is essential for effective regulation in the domain of mental health care. The article delves into the emergence of a new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. It highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing an ethics of care approach can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.

Language: English

Cited by

11

Empowering pediatric, adolescent, and young adult patients with cancer utilizing generative AI chatbots to reduce psychological burden and enhance treatment engagement: a pilot study
Joe Hasei,

Mana Hanzawa,

Akihito Nagano

et al.

Frontiers in Digital Health, Journal Year: 2025, Number 7

Published: February 25, 2025

Pediatric and adolescent/young adult (AYA) cancer patients face profound psychological challenges, exacerbated by limited access to continuous mental health support. While conventional therapeutic interventions often follow structured protocols, the potential of generative artificial intelligence (AI) chatbots to provide conversational support remains unexplored. This study evaluates the feasibility and impact of AI chatbots in alleviating psychological distress and enhancing treatment engagement in this vulnerable population. Two age-appropriate chatbots, leveraging GPT-4, were developed to provide natural, empathetic conversations without structured protocols. Five pediatric and AYA cancer patients participated in a two-week intervention, engaging with the chatbots via a messaging platform. Pre- and post-intervention anxiety and stress levels were self-reported, and usage patterns were analyzed to assess the chatbots' effectiveness. Four out of five participants reported significant reductions in anxiety and stress post-intervention. Participants engaged with the chatbot every 2-3 days, with sessions lasting approximately 10 min. All participants noted improved treatment motivation, with 80% disclosing personal concerns that they had not shared with healthcare providers. The 24/7 availability particularly benefited patients experiencing nighttime anxiety. This pilot study demonstrates that AI chatbots can complement traditional mental health services by addressing the unmet needs of these patients. The findings suggest these tools can serve as accessible, continuous support systems. Further large-scale studies are warranted to validate these promising results.

Language: English

Cited by

1

Transforming Perceptions: Exploring the Multifaceted Potential of Generative AI for People with Cognitive Disabilities (Preprint)
Dorit Hadar‐Shoval, Yuval Haber, Amir Tal

et al.

JMIR Neurotechnology, Journal Year: 2024, Number unknown

Published: July 10, 2024

Language: English

Cited by

2

Using GenAI to train mental health professionals in suicide risk assessment: Preliminary findings
Zohar Elyoseph,

Inbar Levkovitch,

Yuval Haber

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Number unknown

Published: July 17, 2024

Background: Suicide risk assessment is a critical skill for mental health professionals (MHPs), yet traditional training in this area is often limited. This study examined the potential of a generative artificial intelligence (GenAI)-based simulator to enhance self-efficacy in suicide risk assessment among MHPs. Method: A quasi-experimental study was conducted with 43 MHPs from Israel. Participants attended an online seminar and interacted with a GenAI-powered simulator. They completed pre- and post-intervention questionnaires measuring self-efficacy and willingness to treat suicidal patients. Qualitative data on user experience were also collected. Results: We found a significant increase in self-efficacy scores following the intervention. Willingness to treat patients presenting suicidal risk increased slightly but did not reach statistical significance. Qualitative feedback indicated that participants found the simulator engaging and valuable for professional development. However, participants raised concerns about over-reliance on AI and the need for human supervision during training. Conclusion: This preliminary study suggests that GenAI-based simulators hold promise as a tool for enhancing MHPs' competence in suicide risk assessment. Further research with larger samples and control groups is needed to confirm these findings and address the ethical considerations surrounding the use of AI-powered simulation tools. Such tools have the potential to democratize access to high-quality training in mental health, potentially contributing to global suicide prevention efforts. However, their implementation should be carefully considered to ensure they complement rather than replace human expertise.

Language: English

Cited by

1

Entrepreneurs’ Social Capital in Overcoming Business Challenges: Case Studies of Seven Greentech, Climate Tech and Agritech Startups

Michaela Carni,

Tamar Gur, Yossi Maaravi

et al.

Sustainability, Journal Year: 2024, Number 16(19), pp. 8371 - 8371

Published: September 26, 2024

Environmental entrepreneurship has a vital role in addressing our planet’s critical environmental state by implementing innovative solutions to combat escalating threats. These ventures, however, face numerous challenges, including securing initial funding, navigating technical difficulties, and gaining market acceptance, all of which are magnified by the pioneering nature of green innovations. Social capital is a key facilitator, enabling entrepreneurs to overcome these obstacles through smart network management, trust, and strategic partnerships. This study investigates the role of social capital in mitigating the challenges faced by environmental entrepreneurs. We conducted semi-structured interviews with seven greentech, climate tech, and agritech startups. Our findings reveal how social capital not only assists in overcoming the complexities ingrained in environmental entrepreneurship but is also an inherent part of venture creation. These insights emphasize the importance of social capital in advancing environmental innovation. Theoretical and practical implications are discussed.

Language: English

Cited by

0

Exploring the Potential of Large Language Models in Verbal Intelligence Assessment: A Preliminary Investigation (Preprint)
Dorit Hadar‐Shoval, Maya Lvovsky, Kfir Asraf

и другие.

Published: November 11, 2024

BACKGROUND: Cognitive assessment is an important component of applied psychology, but limited access and high costs make these evaluations challenging. OBJECTIVE: This pilot study examined the feasibility of using large language models (LLMs) to create personalized AI-based verbal comprehension tests (AI-BVCTs) for assessing verbal intelligence, in contrast with traditional methods based on standardized norms. METHODS: We used a within-subject design, comparing scores obtained from AI-BVCTs with those from the Wechsler Adult Intelligence Scale (WAIS-III) Verbal Comprehension Index (VCI). RESULTS: The concordance correlation coefficient (CCC) demonstrated strong agreement between AI-BVCT and VCI scores (Claude: CCC = .752, 90% CI [.266, .933]; GPT-4: CCC = .733, 90% CI [.170, .935]). Pearson correlations further supported these findings, showing strong associations between the measures (Claude: r = .844, p < .001; GPT-4: r = .771, p < .025). No statistically significant differences between the scores were found (p > .05). These findings support the potential of LLMs to assess verbal intelligence. CONCLUSIONS: The study attests to the promise of LLMs in cognitive assessment, increasing the accessibility and affordability of assessment processes and enabling personalized testing. The research also raises ethical concerns regarding privacy and over-reliance on AI in clinical work. Further research with larger and more diverse samples is needed to establish the validity and reliability of this approach and to develop accurate scoring procedures.
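The concordance correlation coefficient reported in the abstract above is, in its standard form, Lin's CCC, which penalizes both poor correlation and systematic shifts between two sets of scores. As a hedged sketch (the abstract does not specify the authors' exact computation, so this follows the standard Lin formula rather than their code), it can be computed as:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two score arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sx, sy = x.var(), y.var()            # population (biased) variances
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# Identical score sets give perfect concordance:
print(lin_ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # → 1.0
# A constant shift lowers CCC even though Pearson's r stays 1:
print(lin_ccc([1, 2, 3], [2, 3, 4]))  # → 0.571...
```

This illustrates why the paper reports CCC alongside Pearson's r: two tests can rank examinees identically yet still disagree on absolute score levels, and only the CCC captures that disagreement.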

Language: English

Cited by

0

Editorial: Responsible Design, Integration, and Use of Generative AI in Mental Health (Preprint)
Oren Asman, John Torous, Amir Tal

и другие.

Published: December 21, 2024

Editorial.

Language: English

Cited by

0