European Archives of Oto-Rhino-Laryngology, Journal year: 2024, Issue: unknown
Published: Dec. 26, 2024
Language: English

Dental Traumatology, Journal year: 2024, Issue: unknown
Published: Nov. 22, 2024
ABSTRACT Background/Aim: Artificial intelligence (AI) chatbots have become increasingly prevalent in recent years as potential sources of online healthcare information for patients making medical/dental decisions. This study assessed the readability, quality, and accuracy of responses provided by three AI chatbots to questions related to traumatic dental injuries (TDIs), either retrieved from popular question‐answer sites or manually created based on hypothetical case scenarios. Materials and Methods: A total of 59 traumatic dental injury queries were directed at ChatGPT 3.5, ChatGPT 4.0, and Google Gemini. Readability was evaluated using the Flesch Reading Ease (FRE) and Flesch–Kincaid Grade Level (FKGL) scores. To assess response quality and accuracy, the DISCERN tool, the Global Quality Score (GQS), and misinformation scores were used. Understandability and actionability were analyzed with the Patient Education Materials Assessment Tool for Printable Materials (PEMAT‐P). Statistical analysis included the Kruskal–Wallis test with Dunn's post hoc test for non‐normal variables and one‐way ANOVA with Tukey's post hoc test for normal variables (p < 0.05). Results: The mean FKGL and FRE scores for ChatGPT 3.5, ChatGPT 4.0, and Google Gemini were 11.2 and 49.25, 11.8 and 46.42, and 10.1 and 51.91, respectively, indicating that the responses were difficult to read and required college‐level reading ability. ChatGPT 3.5 had the lowest PEMAT‐P scores among the three chatbots (p < 0.001). ChatGPT 4.0 was rated higher in quality (GQS score of 5) compared with the other chatbots. Conclusions: In this study, although widely used, ChatGPT 3.5 provided some misleading and inaccurate information about TDIs. In contrast, ChatGPT 4.0 and Google Gemini generated more accurate and comprehensive answers, making them more reliable auxiliary information sources. However, for complex issues like TDIs, no chatbot can replace a dentist for diagnosis, treatment, and follow‐up care.
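
For context, the FRE and FKGL metrics used in the study above are simple formulas over average sentence length and average syllables per word. The sketch below is a minimal, hypothetical Python illustration; the vowel-group syllable counter is a rough assumption of mine, and studies like this one would typically use dedicated readability tools rather than this heuristic.

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic (assumption): one syllable per run of
        # consecutive vowels, with a minimum of one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fre_fkgl(text: str) -> tuple[float, float]:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        n_syllables = sum(count_syllables(w) for w in words)
        asl = n_words / sentences    # average sentence length
        asw = n_syllables / n_words  # average syllables per word
        # Standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas.
        fre = 206.835 - 1.015 * asl - 84.6 * asw
        fkgl = 0.39 * asl + 11.8 * asw - 15.59
        return fre, fkgl

    # A lower FRE and higher FKGL indicate harder, more advanced text;
    # an FKGL around 11 corresponds to the college-level reading ability
    # reported in the abstract.
    print(fre_fkgl("Rinse the avulsed tooth gently. Replant it at once if you can."))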
Language: English
Cited: 5

Clinical Otolaryngology, Journal year: 2025, Issue: unknown
Published: Jan. 7, 2025
ABSTRACT Introduction: Artificial intelligence (AI) based chat robots are increasingly used for patient education about common diseases in the health field, as in every other field. This study aims to evaluate and compare patient education materials on rhinosinusitis created by two frequently used chat robots, ChatGPT‐4 and Google Gemini. Method: One hundred nine questions taken from patient information websites were divided into 4 categories: general knowledge, diagnosis, treatment, and surgery complications, and then asked to the chat robots. The answers were evaluated by expert otolaryngologists; where the scores differed, a third, more experienced otolaryngologist finalised the evaluation. Questions were scored from 1 to 4: (1) comprehensive/correct, (2) incomplete/partially correct, (3) a mix of accurate and inaccurate data, potentially misleading, and (4) completely inaccurate/irrelevant. Results: In the evaluation of ChatGPT‐4, all answers in the Diagnosis category were comprehensive/correct. In the evaluation of Google Gemini, the rate of completely inaccurate/irrelevant answers in the treatment category was statistically significantly higher, and the rate of correct answers in the complications category was higher. In the comparison between the chat robots by category, ChatGPT‐4 had a higher rate of comprehensive/correct answers than Google Gemini, and the difference was statistically significant. Conclusion: The answers of both chat robots regarding rhinosinusitis were found to be sufficient and informative.
Language: English
Cited: 0

Research Square (Research Square), Journal year: 2025, Issue: unknown
Published: Jan. 20, 2025
Language: English
Cited: 0

Published: Feb. 4, 2025
Language: English
Cited: 0

PEC Innovation, Journal year: 2025, Issue: unknown, P. 100390 - 100390
Published: April 1, 2025
Language: English
Cited: 0

European Archives of Oto-Rhino-Laryngology, Journal year: 2024, Issue: unknown
Published: Dec. 26, 2024
Language: English
Cited: 0