Lecture Notes in Networks and Systems, Journal year: 2024, Issue: unknown, pp. 429-438
Published: Dec. 16, 2024
Language: English
Clinical Chemistry and Laboratory Medicine (CCLM), Journal year: 2024, Issue 62(12), pp. 2425-2434
Published: May 28, 2024
Abstract. Objectives: Laboratory medical reports are often not intuitively comprehensible to non-medical professionals. Given their recent advancements, easier accessibility, and remarkable performance on licensing exams, patients are therefore likely to turn to artificial intelligence-based chatbots to understand their laboratory results. However, empirical studies assessing the efficacy of these chatbots in responding to real-life patient queries regarding laboratory medicine are scarce. Methods: This investigation included 100 inquiries from an online health forum, specifically addressing Complete Blood Count interpretation. The aim was to evaluate the proficiency of three chatbots (ChatGPT, Gemini, Le Chat) against the responses of certified physicians. Results: The findings revealed that the chatbots' interpretations of laboratory results were inferior to those of physicians. While the chatbots exhibited a higher degree of empathetic communication, they frequently produced erroneous or overly generalized responses to complex questions. The appropriateness of chatbot responses ranged from 51 to 64 %, with 22 to 33 % overestimating medical conditions. A notable positive aspect was the consistent inclusion of disclaimers about the chatbots' nature and recommendations to seek professional advice. Conclusions: These real-life queries highlight a dangerous dichotomy: perceived trustworthiness potentially obscuring factual inaccuracies. Given the growing inclination towards self-diagnosis using AI platforms, further research and improvement are imperative to increase patients' awareness and avoid future burdens on the healthcare system.
Language: English
Cited by: 7
BMC Research Notes, Journal year: 2024, Issue 17(1)
Published: Sep. 3, 2024
Language: English
Cited by: 4
Finance Research Letters, Journal year: 2025, Issue 74, pp. 106713-106713
Published: Jan. 5, 2025
Language: English
Cited by: 0
Deleted Journal, Journal year: 2025, Issue 2(1), pp. 100104-100104
Published: Jan. 31, 2025
Language: English
Cited by: 0
Cureus, Journal year: 2025, Issue: unknown
Published: Feb. 26, 2025
Background: Vernal keratoconjunctivitis (VKC) is a recurrent allergic eye disease that requires accurate patient education to ensure proper management. AI-driven chatbots, such as Google Gemini Advanced (Mountain View, California, US), are increasingly being explored as potential tools for providing medical information. This study evaluates the accuracy, reliability, and clinical applicability of Gemini Advanced in addressing VKC-related queries. Objective: To assess the chatbot's performance in delivering medically relevant information about VKC and to evaluate its reliability based on expert ratings. Methods: A total of 125 responses generated from 25 questions were assessed by two independent cornea specialists. Responses were rated for completeness and harm using a 5-point Likert scale (1-5). Inter-rater reliability was measured using Cronbach's alpha. Responses were categorized as highly reliable (score 5), having minor inconsistencies (score 4), or inaccurate (scores 1-3). Results: The ratings demonstrated high inter-rater reliability (Cronbach's alpha = 0.92, 95% CI: 0.87-0.94). Of the responses, 108 (86.4%) were highly reliable (score 5), while 17 (13.6%) had minor inconsistencies (score 4) but posed no harm. No responses were classified as inaccurate or potentially harmful. The combined mean score was 4.88 ± 0.31, reflecting strong agreement between raters. The chatbot consistently provided reliable information across diagnostic, treatment, and prognosis-related queries, with gaps in complex grading and treatment-related discussions. Discussion: These findings support the use of chatbots like Gemini Advanced in ophthalmology. The chatbot exhibited accuracy and consistency, particularly for general queries. However, areas for improvement remain, especially in detailed guidance on treatment protocols and in ensuring completeness for complex questions. Conclusion: Gemini Advanced demonstrates reliability in addressing queries about VKC, making it a valuable tool for patient education. While its responses are consistent and generally accurate, expert oversight remains necessary to refine AI-generated content for clinical applications. Further research is needed to enhance chatbots' ability to provide nuanced advice and to integrate them safely into ophthalmic decision-making.
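The VKC study above reports inter-rater reliability as Cronbach's alpha (0.92) computed over two raters' Likert scores. As a minimal illustration of how that statistic is calculated, here is a sketch using hypothetical ratings (not the study's actual data); the rating values are invented for demonstration:

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha for inter-rater reliability.

    ratings: list of rows, one row per rated item; each row holds
    one score per rater (here, raters play the role of "items"
    in the classical alpha formula).
    """
    k = len(ratings[0])                            # number of raters
    cols = list(zip(*ratings))                     # scores grouped by rater
    item_vars = sum(variance(c) for c in cols)     # sum of per-rater variances
    total_var = variance([sum(row) for row in ratings])  # variance of row sums
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical example: two raters in perfect agreement yield alpha = 1.0
print(cronbach_alpha([[5, 5], [4, 4], [3, 3], [5, 5]]))  # → 1.0
```

Scores of 1.0 indicate perfect consistency between raters; disagreement pulls the value down, so a reported alpha of 0.92 reflects strong but not perfect agreement.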
Language: English
Cited by: 0
The American Surgeon, Journal year: 2025, Issue: unknown
Published: Mar. 12, 2025
Background: Large language models (LLMs) are advanced tools capable of understanding and generating human-like text. This study evaluated the accuracy of several commercial LLMs in addressing clinical questions related to the diagnosis and management of acute cholecystitis, as outlined in the Tokyo Guidelines 2018 (TG18). We assessed their congruence with the expert panel discussions presented in the guidelines. Methods: ChatGPT4.0, Gemini Advanced, and GPTo1-preview were tested on ten clinical questions. Eight were derived from TG18, and two were formulated by the authors. Two authors independently rated each LLM's responses on a four-point scale: (1) accurate and comprehensive, (2) accurate but not comprehensive, (3) partially accurate, partially inaccurate, and (4) entirely inaccurate. A third author resolved any scoring discrepancies. We then comparatively analyzed the performance of ChatGPT4.0 against the newer large language models, specifically Gemini Advanced and GPTo1-preview, on the same set of questions to delineate their respective strengths and limitations. Results: ChatGPT4.0 provided consistent responses for 90% of the questions. It delivered "accurate and comprehensive" answers for 4/10 (40%) and "accurate but not comprehensive" answers for 5/10 (50%). One response (10%) was "partially inaccurate." Gemini Advanced demonstrated higher accuracy on some questions but yielded a similar percentage of "partially inaccurate" responses. Notably, neither model produced "entirely inaccurate" answers. Discussion: LLMs such as ChatGPT demonstrate potential for accurately addressing clinical questions regarding acute cholecystitis. With awareness of their limitations, careful implementation, and ongoing refinement, LLMs could serve as valuable resources for physician education and patient information, potentially improving clinical decision-making in the future.
Language: English
Cited by: 0
Cureus, Journal year: 2025, Issue: unknown
Published: Mar. 18, 2025
Language: English
Cited by: 0
Journal of Clinical Neuroscience, Journal year: 2025, Issue: unknown, pp. 111193-111193
Published: Mar. 1, 2025
Language: English
Cited by: 0
Cureus, Journal year: 2025, Issue: unknown
Published: Mar. 25, 2025
Language: English
Cited by: 0
Research Square (Research Square), Journal year: 2024, Issue: unknown
Published: Apr. 12, 2024
Language: English
Cited by: 1