Evaluation of Chatbots in the Emergency Management of Avulsion Injuries
Şeyma Mustuloğlu, Büşra Pınar Deniz

Dental Traumatology, journal year: 2025, issue: unknown

Published: Jan. 24, 2025

ABSTRACT Background: This study assessed the accuracy and consistency of responses provided by six Artificial Intelligence (AI) applications, ChatGPT versions 3.5, 4, and 4.0 (OpenAI), Perplexity (Perplexity.AI), Gemini (Google), and Copilot (Bing), to questions related to the emergency management of avulsed teeth. Materials and Methods: Two pediatric dentists developed 18 true-or-false questions regarding dental avulsion and posed them to the publicly available chatbots over 3 days. The responses were recorded and compared with the correct answers. The SPSS program was used to calculate the obtained accuracy rates and their consistency. Results: The highest accuracy rate over the entire time frame was 95.6%, while Perplexity (Perplexity.AI) had the lowest at 67.2%. ChatGPT 4.0 (OpenAI) was the only AI that showed perfect agreement with the real answers, except at noon on day 1; the weakest agreement was observed six times. Conclusions: With the exception of ChatGPT's paid version, 4.0, AI chatbots do not seem ready for use as a main resource in managing avulsed teeth during emergencies. It might prove beneficial to incorporate the International Association of Dental Traumatology (IADT) guidelines into chatbot databases, enhancing…
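The scoring protocol in this abstract (repeated true/false sessions compared against a correct-answer key, plus a check of answer stability across sessions) reduces to two simple rates. A minimal sketch, using invented toy data rather than the study's responses:

```python
# Sketch of the scoring described in the abstract: each chatbot answers the
# same true/false question set in repeated sessions; accuracy is the share
# of answers matching the key, consistency the share of questions answered
# identically every time. All data below are toy values, not the study's.

def accuracy(responses, answer_key):
    """Fraction of responses that match the correct true/false answers."""
    correct = sum(r == a for r, a in zip(responses, answer_key))
    return correct / len(answer_key)

def consistency(sessions):
    """Fraction of questions answered identically in every session."""
    stable = sum(len(set(answers)) == 1 for answers in zip(*sessions))
    return stable / len(sessions[0])

# Toy example: 6 questions, 3 repeated sessions for one chatbot.
key = [True, False, True, True, False, True]
sessions = [
    [True, False, True, True, False, True],
    [True, False, True, False, False, True],
    [True, False, True, True, False, True],
]
print(accuracy(sessions[0], key))  # → 1.0
print(consistency(sessions))       # 5 of 6 questions answered identically
```

Repeating the queries at several times of day, as the study did, is what makes the consistency figure meaningful: a chatbot can be accurate on average yet unstable between sessions.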

Language: English

Evaluation of validity and reliability of AI Chatbots as public sources of information on dental trauma
A Johnson, Tarun Kumar Singh, Aakash Gupta

et al.

Dental Traumatology, journal year: 2024, issue: unknown

Published: Oct. 17, 2024

This study aimed to assess the validity and reliability of AI chatbots, including Bing, ChatGPT 3.5, Google Gemini, and Claude AI, in addressing frequently asked questions (FAQs) related to dental trauma.

Language: English

Cited by

5

Performance of Artificial Intelligence Chatbots in Responding to Patient Queries Related to Traumatic Dental Injuries: A Comparative Study
Yeliz Güven, Omer Tarik Ozdemir, Melis Yazır Kavan

et al.

Dental Traumatology, journal year: 2024, issue: unknown

Published: Nov. 22, 2024

ABSTRACT Background/Aim: Artificial intelligence (AI) chatbots have become increasingly prevalent in recent years as potential sources of online healthcare information for patients when making medical/dental decisions. This study assessed the readability, quality, and accuracy of responses provided by three AI chatbots to questions related to traumatic dental injuries (TDIs), either retrieved from popular question-answer sites or manually created based on hypothetical case scenarios. Materials and Methods: A total of 59 traumatic dental injury queries were directed at ChatGPT 3.5, ChatGPT 4.0, and Google Gemini. Readability was evaluated using Flesch Reading Ease (FRE) and Flesch–Kincaid Grade Level (FKGL) scores. To assess response quality and accuracy, the DISCERN tool, the Global Quality Score (GQS), and misinformation scores were used. Understandability and actionability were analyzed with the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P). Statistical analysis included the Kruskal–Wallis test with Dunn's post hoc test for non-normal variables and one-way ANOVA with Tukey's post hoc test for normal variables (p < 0.05). Results: The mean FKGL and FRE scores for ChatGPT 3.5, ChatGPT 4.0, and Google Gemini were 11.2 and 49.25, 11.8 and 46.42, and 10.1 and 51.91, respectively, indicating that the responses were difficult to read and required college-level reading ability. ChatGPT 3.5 had the lowest PEMAT-P scores among the chatbots (p < 0.001), while ChatGPT 4.0 responses were rated higher in quality (GQS score of 5) compared with the others. Conclusions: In this study, although widely used, some chatbots generated misleading and inaccurate information about TDIs. In contrast, ChatGPT 4.0 generated more accurate and comprehensive answers, making it a more reliable auxiliary source. However, for complex issues like TDIs, no chatbot can replace a dentist for diagnosis, treatment, and follow-up care.
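The FRE and FKGL metrics reported in this abstract are standard formulas over word, sentence, and syllable counts. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic of my own for illustration, whereas real readability tools use dictionaries and better tokenization:

```python
import re

# Flesch Reading Ease (higher = easier) and Flesch-Kincaid Grade Level
# (approximate US school grade) computed from raw text. The syllable
# counter below is a rough vowel-group heuristic, not a dictionary lookup.

def count_syllables(word):
    """Rough syllable estimate: number of contiguous vowel groups, min 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Return (FRE, FKGL) for a piece of English text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

FKGL scores around 10–12, as reported for all three chatbots, correspond to high-school-senior to college-level text, well above the roughly sixth-grade level commonly recommended for patient education materials.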

Language: English

Cited by

5

Can ChatGPT-4 perform as a competent physician based on the Chinese critical care examination?
Xueqi Wang, Jin Tang, Y Feng

et al.

Journal of Critical Care, journal year: 2025, issue 86, pp. 155010–155010

Published: Jan. 5, 2025

Language: English

Cited by

0

Performance of Four AI Chatbots in Answering Endodontic Questions
Saleem Abdulrab, Hisham Abada, Mohammed Mashyakhy

et al.

Journal of Endodontics, journal year: 2025, issue: unknown

Published: Jan. 1, 2025

Language: English

Cited by

0
