Patient-facing chatbots: Enhancing healthcare accessibility while navigating digital literacy challenges and isolation risks—a mixed-methods study
Annie Moore, Joy Ellis, Natalia Dellavalle

et al.

Digital Health, Journal Year: 2025, Volume and Issue: 11

Published: April 1, 2025

Digital communication between patients and healthcare teams is increasing. Most patients find this effective, yet many remain digitally isolated, a social determinant of health. This study investigates patient attitudes toward healthcare's newest digital assistant, the chatbot, and perceptions regarding access. We conducted a mixed-methods study among users of a large health system's chatbot integrated within an electronic health record. We purposively oversampled by race and ethnicity to survey 617/3089 users (response rate 20%) online using de novo and validated items. In addition, we conducted semi-structured interviews with participants (n = 46) sampled for diversity, age, or select survey responses between November 2022 and May 2024. In the surveys, 213/609 (35.0%) felt they could not understand the chatbot completely, and 376/614 (61.2%) felt it did not completely understand them. Of the 238 who felt completely understood, 178 (74.8%) believed the chatbot was intended to help them access healthcare; in comparison, of the 376 who did not, 155 (41%) believed so (p < 0.001). In interviews, among the themes observed, Black, Hispanic, less educated, younger, and lower-income participants expressed more positivity about the chatbot aiding access, citing convenience and a perceived absence of judgment or bias. Patients' experience appears to affect their perception of the intent of the chatbot's implementation; those adept at digital communication or from groups with historically lower trust in healthcare may prefer a quick, non-judgmental answer to questions via chatbot rather than human interaction. Although our findings are limited to one system's existing users, as patient-facing chatbots expand, attention to these factors can support health systems' efforts to design tools that meet the unique needs of all patients, expressly those at risk of digital isolation.
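The headline comparison in this abstract (178/238 versus 155/376 believing the chatbot was meant to help them access care) can be sanity-checked with a standard two-by-two test. The minimal Python sketch below assumes a chi-square test of independence and the third-party SciPy library; the abstract does not state which test the authors actually used, and the variable names are illustrative only.

from scipy.stats import chi2_contingency

# Counts taken from the abstract: believed / did not believe, per group.
felt_understood = [178, 238 - 178]      # participants who felt completely understood
not_understood = [155, 376 - 155]       # participants who did not feel completely understood

chi2, p, dof, expected = chi2_contingency([felt_understood, not_understood])
print(f"{178/238:.1%} vs {155/376:.1%}, chi2 = {chi2:.1f}, p = {p:.2g}")
# The computed p-value is far below 0.001, consistent with the reported p < 0.001.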

Language: English

Reference Hallucination Score for Medical Artificial Intelligence Chatbots: Development and Usability Study
Fadi Aljamaan, Mohamad‐Hani Temsah, Ibraheem Altamimi

et al.

JMIR Medical Informatics, Journal Year: 2024, Volume and Issue: 12, P. e54345 - e54345

Published: July 3, 2024

Artificial intelligence (AI) chatbots have recently gained use in medical practice by health care practitioners. Interestingly, the output of these AI chatbots was found to contain varying degrees of hallucination in content and references. Such hallucinations generate doubts about their implementation.

Language: English

Citations

23

Chatbots in Health Care: Connecting Patients to Information

Michelle Clark, Sharon Bailey

Canadian Journal of Health Technologies, Journal Year: 2024, Volume and Issue: 4(1)

Published: Jan. 22, 2024

Why Is This an Issue? Artificial intelligence (AI) is increasingly being used in health care settings. Chatbots geared toward patient use are becoming more widely available, but the clinical evidence of their effectiveness remains limited. What Is the Technology? AI-based chatbots are computer programs or software applications designed to engage in simulated conversation with humans using humanlike language. They can help save time and allow staff to focus on high-level creative and strategic thinking by taking over routine, repetitive tasks such as automated customer service chats, appointment booking, and staff scheduling. What Is the Potential Impact? Anyone with access to an internet-enabled device or a smartphone could use these chatbots for health information. Chatbots can provide patients with 24/7 information, symptom assessment, supportive care, medication reminders, and appointment scheduling, allowing access to information when providers are unavailable. There appear to be positive trends in efficacy and user satisfaction, but the evidence to support this is still being established. Existing chatbots are mostly free to access, although some developers charge fees for additional features or content. Some apps may be prescribed by providers; these may be covered by insurance or licensed from the developer. What Else Do We Need to Know? Ethical and data privacy issues remain top of mind when considering widespread implementation in health care settings. ChatGPT and other general-purpose AI tools were not developed specifically for health care and do not necessarily provide the level of accuracy required for medical information. They are also trained on historical datasets, so responses may not be based on the most current recommendations or data. The development of AI-specific ethical frameworks may facilitate safer and more consistent use, preventing misuse of these technologies and minimizing the spread of misinformation. Chatbots require human oversight in terms of moderation and troubleshooting.

Language: English

Citations

21

Integrating ChatGPT in Medical Education: Adapting Curricula to Cultivate Competent Physicians for the AI Era
Amr Jamal, Mansur Solaiman, Khalid Alhasan

et al.

Cureus, Journal Year: 2023, Volume and Issue: unknown

Published: Aug. 6, 2023

The rapid advancements in artificial intelligence (AI) language models, particularly ChatGPT (OpenAI, San Francisco, California, United States), necessitate the adaptation of medical education curricula to cultivate competent physicians for the AI era. In this editorial, we discuss short-term solutions and long-term adaptations for integrating AI into medical education. We recommend promoting digital literacy, developing critical thinking skills, and emphasizing evidence-based relevance as quick fixes. Long-term adaptations include focusing on the human factor, interprofessional collaboration, continuous professional development, and research and evaluation. By implementing these changes, educators can optimize curricula for the AI era, ensuring students are well prepared for a technologically advanced future in healthcare.

Language: English

Citations

38

Artificial Hallucinations by Google Bard: Think Before You Leap

Mukesh Kumar, Utsav Anand Mani, Pranjal Tripathi

et al.

Cureus, Journal Year: 2023, Volume and Issue: unknown

Published: Aug. 10, 2023

One of the critical challenges posed by artificial intelligence (AI) tools like Google Bard (Google LLC, Mountain View, California, United States) is the potential for "artificial hallucinations." These refer to instances where an AI chatbot generates fictional, erroneous, or unsubstantiated information in response to queries. In research, such inaccuracies can lead to the propagation of misinformation and undermine the credibility of the scientific literature. The experience presented here highlights the importance of cross-checking information provided by AI with reliable sources and maintaining a cautious approach when utilizing these tools in research and writing.

Language: English

Citations

32

Evaluating the Potential and Pitfalls of AI-Powered Conversational Agents as Human-like Virtual Health Carers in the Remote Management of Non-Communicable Diseases: A Scoping Review (Preprint)
Sadia Azmin Anisha, Arkendu Sen, Chris Bain

et al.

Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: 26, P. e56114 - e56114

Published: March 25, 2024

The rising prevalence of noncommunicable diseases (NCDs) worldwide and the high recent mortality rates (74.4%) associated with them, especially in low- and middle-income countries, are causing a substantial global burden of disease, necessitating innovative and sustainable long-term care solutions.

Language: English

Citations

11

Can Artificial Intelligence “Hold” a Dermoscope?—The Evaluation of an Artificial Intelligence Chatbot to Translate the Dermoscopic Language
Emmanouil Karampinis, Olga Toli, Konstantina-Eirini Georgopoulou

et al.

Diagnostics, Journal Year: 2024, Volume and Issue: 14(11), P. 1165 - 1165

Published: May 31, 2024

This survey represents the first endeavor to assess the clarity of dermoscopic language by a chatbot, unveiling insights into the interplay between dermatologists and AI systems within the complexity of dermoscopic language. Given the complex, descriptive, and metaphorical aspects of this language, subjective interpretations often emerge. The study evaluated the completeness and diagnostic efficacy of chatbot-generated reports, focusing on their role in facilitating accurate diagnoses and educational opportunities for novice dermatologists. A total of 30 participants were presented with hypothetical descriptions of skin lesions, including skin cancers such as BCC, SCC, and melanoma; cancer mimickers such as actinic and seborrheic keratosis, dermatofibroma, and atypical nevus; and inflammatory dermatoses such as psoriasis and alopecia areata. Each description was accompanied by specific clinical information, and participants were tasked with assessing the differential diagnosis list generated by the chatbot in its initial response. In each scenario, the chatbot produced an extensive list of potential diagnoses, exhibiting lower performance in the SCC and inflammatory dermatosis cases, albeit without statistical significance, suggesting that participants were equally satisfied with the responses provided. Scores decreased notably when practical signs were assessed. Answers in the BCC scenario scored higher in this category (2.9 ± 0.4) than those in the other scenarios (2.6 ± 0.66).

Language: English

Citations

11

Critical review of self‐diagnosis of mental health conditions using artificial intelligence
Supra Wimbarti, Bernabas H. R. Kairupan, Trina Ekawati Tallei

et al.

International Journal of Mental Health Nursing, Journal Year: 2024, Volume and Issue: 33(2), P. 344 - 358

Published: Feb. 12, 2024

Abstract The advent of artificial intelligence (AI) has revolutionised various aspects of our lives, including mental health nursing. AI‐driven tools and applications provide a convenient and accessible means for individuals to assess their mental well‐being within the confines of their homes. Nonetheless, the widespread trend of self‐diagnosing mental health conditions through AI poses considerable risks. This review article examines the perils associated with relying on AI self‐diagnosis in mental health, highlighting the constraints and possible adverse outcomes that can arise from such practices. It delves into the ethical, psychological, and social implications, underscoring the vital role of mental health professionals, including psychologists, psychiatrists, and nursing specialists, in providing professional assistance and guidance. This article aims to highlight the importance of seeking professional guidance when addressing mental health concerns, especially in the era of AI‐driven self‐diagnosis.

Language: English

Citations

8

ChatGPT-3.5 System Usability Scale early assessment among Healthcare Workers: Horizons of adoption in medical practice
Fadi Aljamaan, Khalid H. Malki, Khalid Alhasan

et al.

Heliyon, Journal Year: 2024, Volume and Issue: 10(7), P. e28962 - e28962

Published: April 1, 2024

Artificial intelligence (AI) chatbots, such as ChatGPT, have widely invaded all domains of human life. They have the potential to transform healthcare in the future. However, their effective implementation hinges on healthcare workers' (HCWs) adoption and perceptions. This study aimed to evaluate HCWs' perceived usability of ChatGPT three months post-launch in Saudi Arabia using the System Usability Scale (SUS). A total of 194 HCWs participated in the survey. Forty-seven percent were satisfied with its usage, and 57% expressed moderate to high trust in its ability to generate medical decisions. 58% expected it would improve patients' outcomes, and 84% were optimistic about its future in medical practice, though possible concerns included recommending harmful decisions and medicolegal implications. The overall mean SUS score was 64.52, equivalent to a 50th percentile rank and indicating marginal acceptability of the system. The strongest positive predictors of SUS scores were participants' belief in the AI chatbot's benefits for research and their self-rated familiarity and computer skills proficiency. Participants' perceived learnability and ease of use correlated positively but weakly. On the other hand, students and interns had significantly different scores compared with other groups, while perceptions of the chatbot's impact showed a very strong association. Our findings highlight HCWs' perceived acceptance of ChatGPT at this current stage and their optimism about it supporting them in practice, especially in the research domain, alongside a humbler ambition regarding patient outcomes. Finally, the study underscores the need for ongoing efforts to build trust and address the ethical and legal implications of AI in healthcare. It contributes to the growing body of literature on chatbots in healthcare, addresses improvement strategies, and provides insights for policymakers and providers about the challenges of implementing them.
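For context on the reported System Usability Scale figure, the sketch below shows the standard SUS scoring arithmetic (ten items on a 1-5 Likert scale, odd items scored as response minus 1, even items as 5 minus response, raw sum multiplied by 2.5). Only the scoring rule is standard; the respondent values here are hypothetical and are not taken from the study.

def sus_score(responses):
    """Standard SUS scoring: `responses` is a list of ten 1-5 Likert answers."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly ten item responses")
    raw = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (1st, 3rd, ...) are positively worded, even-numbered negatively.
        raw += (r - 1) if i % 2 == 0 else (5 - r)
    return raw * 2.5  # scales the 0-40 raw sum to the 0-100 SUS range

# Hypothetical respondent whose score lands near the study's reported mean of 64.52.
print(sus_score([4, 2, 4, 3, 4, 2, 3, 2, 3, 3]))  # -> 65.0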

Language: English

Citations

8

Performance of Artificial Intelligence (AI)-Powered Chatbots in the Assessment of Medical Case Reports: Qualitative Insights From Simulated Scenarios

Florian Reis, Christian Lenz

Cureus, Journal Year: 2024, Volume and Issue: unknown

Published: Feb. 9, 2024

Introduction: With the expanding awareness and use of AI-powered chatbots, it seems possible that an increasing number of people could use them to assess and evaluate their medical symptoms. If chatbots that have not previously undergone a thorough evaluation for this specific use are used for this purpose, various risks might arise. The aim of this study is to analyze and compare the performance of popular chatbots in differentiating between severe and less critical symptoms described from the patient's perspective, and to examine variations in substantive assessment accuracy and empathetic communication style among the chatbots' responses. Materials and methods: Our study compared three different AI-supported chatbots: OpenAI’s ChatGPT 3.5, Microsoft’s Bing Chat, and Inflection’s Pi AI. Three exemplary case reports of medical emergencies as well as cases without an urgent reason for emergency admission were constructed and analyzed. Each case report was accompanied by identical questions concerning the most likely suspected diagnosis and the urgency of immediate evaluation. The respective answers were compared qualitatively with each other regarding the differential diagnoses mentioned, the conclusions drawn, and the patient-oriented language. Results: All examined chatbots were capable of providing medically plausible and probable diagnoses and of classifying situations as acute or less critical. However, the responses varied slightly in their level of assessment. Clear differences could be seen in the detail of the differential diagnoses, the overall length of the answers, and how each chatbot dealt with the challenge of being confronted with medical issues. The answers given were comparable in terms of empathy and comprehensibility. Conclusion: Even AI chatbots not designed for medical applications already offer substantial guidance in assessing typical indications, but their responses should always be provided with a disclaimer. In responding to medical queries, characteristic differences emerge in the extent of the answers. Given the lack of medical supervision of many established chatbots, subsequent studies and real-world experiences are essential to clarify whether more extensive use of these chatbots will have a positive impact on healthcare or rather pose major risks.

Language: English

Citations

7

Assessing the response quality and readability of chatbots in cardiovascular health, oncology, and psoriasis: A comparative study
Robert Olszewski, Klaudia Watros, Małgorzata Mańczak

et al.

International Journal of Medical Informatics, Journal Year: 2024, Volume and Issue: 190, P. 105562 - 105562

Published: Oct. 1, 2024

Chatbots built on large language models (LLMs) generate human-like responses to questions from all categories. Due to staff shortages in healthcare systems, patients waiting for an appointment increasingly use chatbots to get information about their condition. Given the number of chatbots currently available, assessing the quality and readability of the responses they provide is essential.
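Readability of chatbot answers is typically quantified with formula-based indices. As a hedged illustration only (the paper's exact index and tooling are not given in this excerpt), the snippet below computes two common Flesch metrics with the third-party textstat package for a made-up patient-facing answer.

import textstat  # pip install textstat

answer = (
    "High blood pressure often has no symptoms. "
    "Your doctor can measure it at a routine visit and suggest treatment if needed."
)

print("Flesch Reading Ease:", textstat.flesch_reading_ease(answer))
print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(answer))
# Patient-facing materials are often targeted at roughly a 6th-8th grade reading level.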

Language: English

Citations

6