Lecture notes in networks and systems, Journal Year: 2024, Volume and Issue: unknown, P. 429 - 438
Published: Dec. 16, 2024
Language: English
Clinical Chemistry and Laboratory Medicine (CCLM), Journal Year: 2024, Volume and Issue: 62(12), P. 2425 - 2434
Published: May 28, 2024
Abstract Objectives Laboratory medical reports are often not intuitively comprehensible to non-medical professionals. Given their recent advancements, easier accessibility and remarkable performance on licensing exams, patients are therefore likely to turn to artificial intelligence-based chatbots to understand their laboratory results. However, empirical studies assessing the efficacy of these chatbots in responding to real-life patient queries regarding laboratory medicine are scarce. Methods Thus, this investigation included 100 inquiries from an online health forum, specifically addressing Complete Blood Count interpretation. The aim was to evaluate the proficiency of three chatbots (ChatGPT, Gemini and Le Chat) against the responses of certified physicians. Results The findings revealed that the chatbots' interpretations of laboratory results were inferior to those of certified physicians. While the chatbots exhibited a higher degree of empathetic communication, they frequently produced erroneous or overly generalized responses to complex questions. The appropriateness of chatbot responses ranged from 51 to 64 %, with 22 to 33 % of responses overestimating patient conditions. A notable positive aspect was the consistent inclusion of disclaimers regarding the chatbot's artificial nature and recommendations to seek professional advice. Conclusions The chatbots' responses to real patient queries highlight a dangerous dichotomy: perceived trustworthiness potentially obscuring factual inaccuracies. Given the growing inclination towards self-diagnosis using AI platforms, further research and improvement are imperative to increase patients' awareness and avoid future burdens on the healthcare system.
Language: English
Citations: 7
BMC Research Notes, Journal Year: 2024, Volume and Issue: 17(1)
Published: Sept. 3, 2024
Language: English
Citations: 4
Finance Research Letters, Journal Year: 2025, Volume and Issue: 74, P. 106713 - 106713
Published: Jan. 5, 2025
Language: English
Citations: 0
Deleted Journal, Journal Year: 2025, Volume and Issue: 2(1), P. 100104 - 100104
Published: Jan. 31, 2025
Language: English
Citations: 0
Cureus, Journal Year: 2025, Volume and Issue: unknown
Published: Feb. 26, 2025
Background Vernal keratoconjunctivitis (VKC) is a recurrent allergic eye disease that requires accurate patient education to ensure proper management. AI-driven chatbots, such as Google Gemini Advanced (Mountain View, California, US), are increasingly being explored as potential tools for providing medical information. This study evaluates the accuracy, reliability, and clinical applicability of Gemini Advanced in addressing VKC-related queries. Objective To assess the performance of Gemini Advanced in delivering medically relevant information about VKC and to evaluate its reliability based on expert ratings. Methods A total of 125 responses generated from 25 questions were assessed by two independent cornea specialists. Responses were rated for accuracy, completeness, and potential harm using a 5-point Likert scale (1-5). Inter-rater reliability was measured using Cronbach's alpha. Responses were categorized as highly reliable (score 5), minor inconsistencies (score 4), or inaccurate (scores 1-3). Results The ratings demonstrated high inter-rater reliability (Cronbach's alpha = 0.92, 95% CI: 0.87-0.94). Of the responses, 108 (86.4%) were highly reliable (score 5), while 17 (13.6%) had minor inconsistencies (score 4) but posed no harm. No responses were classified as inaccurate or potentially harmful. The combined mean score was 4.88 ± 0.31, reflecting strong agreement between raters. The chatbot consistently provided reliable information across diagnostic, treatment, and prognosis-related queries, with minor gaps in complex grading and treatment-related discussions. Discussion The findings support the use of chatbots like Gemini Advanced in ophthalmology. The chatbot exhibited high accuracy and consistency, particularly for general queries. However, areas for improvement remain, especially in detailed guidance on treatment protocols and in ensuring completeness for complex questions. Conclusion Gemini Advanced demonstrates reliable performance on VKC-related queries, making it a valuable tool for patient education. While its responses were consistent and generally accurate, expert oversight remains necessary to refine AI-generated content for clinical applications. Further research is needed to enhance chatbots' ability to provide nuanced advice and to integrate them safely into ophthalmic decision-making.
Language: English
Citations: 0
The American Surgeon, Journal Year: 2025, Volume and Issue: unknown
Published: March 12, 2025
Background Large language models (LLMs) are advanced tools capable of understanding and generating human-like text. This study evaluated the accuracy of several commercial LLMs in addressing clinical questions related to the diagnosis and management of acute cholecystitis, as outlined in the Tokyo Guidelines 2018 (TG18). We assessed their congruence with the expert panel discussions presented in the guidelines. Methods We evaluated ChatGPT4.0, Gemini Advanced, and GPTo1-preview on ten clinical questions. Eight were derived from TG18, and two were formulated by the authors. Two authors independently rated each LLM's responses on a four-point scale: (1) accurate and comprehensive, (2) accurate but not comprehensive, (3) partially accurate, partially inaccurate, and (4) entirely inaccurate. A third author resolved any scoring discrepancies. Then, we comparatively analyzed the performance of ChatGPT4.0 against the newer large language models (LLMs), specifically Gemini Advanced and GPTo1-preview, on the same set of questions to delineate their respective strengths and limitations. Results ChatGPT4.0 provided consistent responses for 90% of the questions. It delivered "accurate and comprehensive" answers for 4/10 (40%) and "accurate but not comprehensive" answers for 5/10 (50%). One response (10%) was "partially accurate, partially inaccurate." Gemini Advanced demonstrated higher accuracy on some questions but yielded a similar percentage of "partially accurate, partially inaccurate" responses. Notably, neither model produced "entirely inaccurate" answers. Discussion LLMs such as ChatGPT demonstrate potential for accurately addressing clinical questions regarding acute cholecystitis. With awareness of their limitations, careful implementation, and ongoing refinement, LLMs could serve as valuable resources for physician education and patient information, potentially improving clinical decision-making in the future.
Language: English
Citations: 0
Cureus, Journal Year: 2025, Volume and Issue: unknown
Published: March 18, 2025
Language: English
Citations: 0
Journal of Clinical Neuroscience, Journal Year: 2025, Volume and Issue: unknown, P. 111193 - 111193
Published: March 1, 2025
Language: English
Citations: 0
Cureus, Journal Year: 2025, Volume and Issue: unknown
Published: March 25, 2025
Language: English
Citations: 0
Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown
Published: April 12, 2024
Language: English
Citations: 1