Evaluation of Vertigo-Related Information from Artificial Intelligence Chatbot
Xu Liu, Suming Shi, Xin Zhang, et al.

Research Square, Journal year: 2024, Issue: unknown

Published: Sep. 2, 2024

Abstract Objective: To compare the diagnostic accuracy of an artificial intelligence chatbot and clinical experts in managing vertigo-related diseases, and to evaluate the ability of AI to address vertigo-related issues. Methods: Twenty questions about vertigo were input into ChatGPT-4o, and three otologists evaluated the responses using a 5-point Likert scale for accuracy, comprehensiveness, clarity, practicality, and credibility. Readability was assessed with the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas. The model and two clinicians diagnosed 15 outpatient cases, and their diagnostic accuracy was calculated. Statistical analysis used repeated-measures ANOVA and paired t-tests. Results: ChatGPT-4o scored highest on credibility (4.78). Repeated-measures ANOVA showed significant differences across dimensions (F=2.682, p=0.038). Readability analysis revealed that the generated texts were of relatively high reading difficulty. The model's diagnostic accuracy was comparable to that of a clinician with one year of experience but inferior to that of a clinician with five years of experience (p=0.04). Conclusion: ChatGPT-4o shows promise as a supplementary tool but requires improvements in readability and diagnostic capabilities.
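
For context, the two readability formulas named in the Methods are the standard published ones; the constants below are general knowledge rather than values reported in this abstract. Higher Flesch Reading Ease means easier text, while the Flesch-Kincaid Grade Level approximates the US school grade needed to follow it:

```latex
\[
\mathrm{FRE} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}}
             - 84.6\,\frac{\text{total syllables}}{\text{total words}}
\]
\[
\mathrm{FKGL} = 0.39\,\frac{\text{total words}}{\text{total sentences}}
              + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59
\]
```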

Language: English

Validation of the Quality Analysis of Medical Artificial Intelligence (QAMAI) tool: a new tool to assess the quality of health information provided by AI platforms
Luigi Angelo Vaira, Jérôme R. Lechien, Vincenzo Abbate, et al.

European Archives of Oto-Rhino-Laryngology, Journal year: 2024, Issue: 281(11), pp. 6123-6131

Published: May 4, 2024

The widespread diffusion of Artificial Intelligence (AI) platforms is revolutionizing how health-related information is disseminated, thereby highlighting the need for tools to evaluate the quality of such information. This study aimed to propose and validate the Quality Analysis of Medical Artificial Intelligence (QAMAI), a tool specifically designed to assess the quality of health information provided by AI platforms.

Language: English

Cited by

22

Assessing ChatGPT’s theoretical knowledge and prescriptive accuracy in bacterial infections: a comparative study with infectious diseases residents and specialists
Andrea De Vito, Nicholas Geremia, Andrea Marino, et al.

Infection, Journal year: 2024, Issue: unknown

Published: July 12, 2024

Abstract Objectives: Advancements in Artificial Intelligence (AI) have made platforms like ChatGPT increasingly relevant in medicine. This study assesses ChatGPT’s utility in addressing bacterial infection-related questions and antibiogram-based clinical cases. Methods: The study involved a collaborative effort between infectious disease (ID) specialists and residents. A group of experts formulated six true/false questions, open-ended questions, and clinical cases with antibiograms for four types of infections (endocarditis, pneumonia, intra-abdominal infections, bloodstream infection), for a total of 96 questions. The questions were submitted to ID senior residents and specialists and inputted into ChatGPT-4 and a trained version of ChatGPT-4. The 720 responses obtained were reviewed by a blinded panel with expertise in antibiotic treatments. They evaluated the accuracy and completeness of the answers, the ability to identify correct resistance mechanisms from the antibiograms, and the appropriateness of antibiotic prescriptions. Results: No significant difference was noted among the groups on true/false questions, with approximately 70% correct answers. On open-ended questions, ChatGPT offered more accurate and complete answers than both residents and specialists. Regarding the clinical cases, we observed a lower ability of ChatGPT to recognize the correct resistance mechanism. It also tended not to prescribe newer antibiotics such as cefiderocol or imipenem/cilastatin/relebactam, favoring less recommended options such as colistin. Both trained and untrained versions suggested longer than necessary treatment periods (p-value = 0.022). Conclusions: This study highlights ChatGPT’s capabilities and limitations in medical decision-making, specifically regarding antibiogram analysis. While ChatGPT demonstrated proficiency in answering theoretical questions, it did not consistently align with expert decisions in clinical case management. Despite these limitations, its potential as a supportive tool for education and preliminary antibiogram analysis is evident. However, it should not replace expert consultation, especially in complex clinical decision-making.

Language: English

Cited by

13

Reliability of large language models for advanced head and neck malignancies management: a comparison between ChatGPT 4 and Gemini Advanced
Andrea De Lorenzi, Giorgia Pugliese, Antonino Maniaci, et al.

European Archives of Oto-Rhino-Laryngology, Journal year: 2024, Issue: 281(9), pp. 5001-5006

Published: May 25, 2024

Abstract Purpose: This study evaluates the efficacy of two advanced Large Language Models (LLMs), OpenAI’s ChatGPT 4 and Google’s Gemini Advanced, in providing treatment recommendations for head and neck oncology cases. The aim is to assess their utility in supporting multidisciplinary oncological evaluations and decision-making processes. Methods: A comparative analysis examined the responses of ChatGPT 4 and Gemini Advanced to five hypothetical cases of head and neck cancer, each representing a different anatomical subsite. The responses were evaluated against the latest National Comprehensive Cancer Network (NCCN) guidelines by blinded panels using the total disagreement score (TDS) and the artificial intelligence performance instrument (AIPI). Statistical assessments were performed with the Wilcoxon signed-rank test and the Friedman test. Results: Both LLMs produced relevant treatment recommendations, with ChatGPT 4 generally outperforming Gemini Advanced regarding guideline adherence and comprehensive treatment planning. ChatGPT 4 showed higher AIPI scores (median 3 [2–4]) compared with Gemini Advanced (median 2 [2–3]), indicating better overall performance. Notably, inconsistencies were observed in the management of induction chemotherapy and in surgical decisions, such as neck dissection. Conclusions: While both LLMs demonstrated potential to aid head and neck oncology, discrepancies in certain critical areas highlight the need for further refinement. The study supports the growing role of AI in enhancing clinical decision-making, but also emphasizes the necessity of continuous updates and validation against current standards to fully integrate these tools into healthcare practices.
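
As a minimal sketch of the paired, nonparametric comparison named in the abstract, the snippet below runs a Wilcoxon signed-rank test on per-case AIPI scores; the score vectors are invented placeholders, not data from the study:

```python
# Hypothetical per-case AIPI scores for the two models (placeholder data,
# not the study's results).
from scipy.stats import wilcoxon

chatgpt4_aipi = [3, 4, 2, 4, 3, 5, 4, 3]
gemini_aipi = [2, 3, 3, 2, 2, 4, 3, 2]

# Paired test: each hypothetical case is rated once per model.
stat, p = wilcoxon(chatgpt4_aipi, gemini_aipi)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```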

Language: English

Cited by

10

ChatGPT‐4 Consistency in Interpreting Laryngeal Clinical Images of Common Lesions and Disorders
Antonino Maniaci, Carlos M. Chiesa‐Estomba, Jérôme R. Lechien, et al.

Otolaryngology, Journal year: 2024, Issue: 171(4), pp. 1106-1113

Published: July 24, 2024

Abstract Objective: To investigate the consistency of Chatbot Generative Pretrained Transformer (ChatGPT)‐4 in the analysis of clinical pictures of common laryngological conditions. Study Design: Prospective uncontrolled study. Setting: Multicenter. Methods: Patient history and videolaryngostroboscopic images were presented to ChatGPT‐4 for differential diagnoses, management, and treatment(s). The responses were assessed by 3 blinded laryngologists with the artificial intelligence performance instrument (AIPI). The complexity of the cases and the difficulty for practitioners of interpreting the images were evaluated with a 5‐point Likert scale. The intraclass correlation coefficient (ICC) was used to measure the strength of interrater agreement. Results: Forty patients with a mean case complexity score of 2.60 ± 1.15 were included. The mean image interpretation score was 2.46 ± 1.42. ChatGPT‐4 perfectly analyzed the images in 6 cases (15%; 5/5), while the judges rated GPT‐4’s analysis as high in 5 cases (12.5%; 4/5). Judges reported an ICC of 0.965 (P = .001). ChatGPT‐4 erroneously documented vocal fold irregularity (mass or lesion), glottic insufficiency, and vocal cord paralysis in 21 (52.5%), 2 (5%), and 5 (12.5%) cases, respectively. Practitioners and ChatGPT‐4 indicated 153 and 63 additional examinations, respectively, and the primary diagnosis was correct in 20.0% and 25.0% of cases. The Likert ratings were significantly associated with the AIPI score (rs = 0.830). Conclusion: ChatGPT‐4 is more efficient in primary diagnosis, rather than image analysis, and in selecting the most adequate additional examinations and treatments.
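
The interrater agreement statistic reported here can be illustrated with a short sketch. The version below computes a two-way random-effects, single-rater ICC(2,1) after Shrout and Fleiss; which ICC form the study used is not stated in the abstract, so that choice, like the judge scores, is an assumption for illustration:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, single rater (Shrout & Fleiss).
    ratings has shape (n_subjects, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-rater means

    ss_rows = k * ((row_means - grand) ** 2).sum()  # between-subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between-raters
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols           # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Made-up 5-point Likert scores from 3 judges on 6 cases.
scores = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5],
                   [3, 3, 4], [1, 2, 1], [4, 4, 5]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```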

Language: English

Cited by

6

Enhancing AI Chatbot Responses in Health Care: The SMART Prompt Structure in Head and Neck Surgery
Luigi Angelo Vaira, Jérôme R. Lechien, Vincenzo Abbate, et al.

OTO Open, Journal year: 2025, Issue: 9(1)

Published: Jan. 1, 2025

Abstract Objective: This study aims to evaluate the impact of prompt construction on the quality of artificial intelligence (AI) chatbot responses in the context of head and neck surgery. Study Design: Observational and evaluative study. Setting: An international collaboration involving 16 researchers from 11 European centers specializing in head and neck surgery. Methods: A total of 24 questions, divided into clinical scenarios, theoretical questions, and patient inquiries, were developed. These questions were entered into ChatGPT‐4o both with and without the use of a structured prompt format, known as SMART (Seeker, Mission, AI Role, Register, Targeted Question). The AI‐generated responses were evaluated by experienced surgeons using the Quality Analysis of Medical Artificial Intelligence instrument (QAMAI), which assesses accuracy, clarity, relevance, completeness, source quality, and usefulness. Results: Responses generated with SMART prompts scored significantly higher across all QAMAI dimensions than those generated without contextualized prompts. Median scores were 27.5 (interquartile range [IQR] 25‐29) for SMART prompts versus a lower median (IQR 21.8‐25) for unstructured prompts (P < .001). Clinical scenarios and patient inquiries showed the most significant improvements, while theoretical questions also benefited, but to a lesser extent. The quality of the AI’s responses improved notably with the structured prompt, particularly for patient questions. Conclusion: The study suggests that the SMART prompt format enhances the quality of chatbot responses and improves the completeness of the information, underscoring the importance of well‐constructed prompts in healthcare AI applications. Further research is warranted to explore its applicability across different medical specialties and platforms.
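
Since the abstract only lists the five SMART fields, a concrete prompt helps make the idea tangible. The sketch below assembles one; the field names come from the abstract, but the example wording and the helper function are hypothetical, not taken from the study:

```python
def build_smart_prompt(seeker: str, mission: str, ai_role: str,
                       register: str, question: str) -> str:
    """Assemble a SMART-structured prompt: Seeker, Mission, AI Role,
    Register, Targeted Question."""
    return (
        f"Seeker: {seeker}\n"
        f"Mission: {mission}\n"
        f"AI Role: {ai_role}\n"
        f"Register: {register}\n"
        f"Targeted Question: {question}"
    )

# Illustrative head-and-neck example (invented content).
print(build_smart_prompt(
    seeker="Head and neck surgeon at a tertiary referral center",
    mission="Plan the work-up of a newly diagnosed oropharyngeal tumor",
    ai_role="Evidence-based clinical decision-support assistant",
    register="Formal and guideline-oriented, for a specialist reader",
    question="Which staging investigations should be ordered first?",
))
```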

Language: English

Cited by

0

The Challenges of Using ChatGPT for Clinical Decision Support in Orthopaedic Surgery: A Pilot Study

M McNamara, Brandon G. Hill, Peter L. Schilling, et al.

Journal of the American Academy of Orthopaedic Surgeons, Journal year: 2025, Issue: unknown

Published: March 26, 2025

Background: Artificial intelligence (AI) technologies have recently exploded in both accessibility and applicability, including in health care. Although studies have demonstrated AI’s ability to adequately answer simple patient questions or multiple-choice examination questions, its capacity for deeper, complex decision-making within clinical care is relatively untested. In this study, we aimed to delve into AI’s ability to integrate multiple clinical data sources and produce a reasonable assessment and plan, specifically in the setting of an orthopaedic surgery consult. Methods: Ten common fractures seen by orthopaedic surgeons in the emergency department were chosen. Consult notes from patients sustaining each of these fractures, seen at a level 1 academic trauma center between 2022 and 2023, were stripped of identifying data. The history, physical examination, and imaging interpretations were then given to ChatGPT4 in raw and semistructured formats. The AI was asked to determine an assessment and plan as if it were the consulting surgeon. The generated plans were compared with the actual clinical course of each patient, as determined by our multispecialty conference. Results: When given the clinical data in either format, ChatGPT4 produced safe plans that included the final outcome as a scenario. Evaluating large language models is an ongoing field of research without an established quantitative rubric; therefore, conclusions rely on subjective comparison. Conclusion: Given history, physical examination, and imaging interpretations, ChatGPT was able to synthesize multiple clinical data sources and, most importantly, produce plans consistent with those of surgeons. Evaluation remains a challenge; however, using actual patient courses as a "benchmark" for comparison presents a possible avenue for further research.
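
The abstract contrasts "raw" and "semistructured" input formats without showing either. The sketch below illustrates one plausible semistructured layout; the field names and the sample fracture case are hypothetical, not the study's actual schema or data:

```python
import json

# Invented consult data organized into the kinds of fields the abstract
# mentions (history, physical examination, imaging interpretation).
consult = {
    "chief_complaint": "Right wrist pain after a fall on an outstretched hand",
    "history": "72-year-old woman with osteoporosis, no prior wrist injury",
    "physical_exam": {
        "inspection": "Dorsal deformity with moderate swelling",
        "neurovascular": "Sensation intact, radial pulse 2+",
    },
    "imaging_interpretation": "Dorsally displaced distal radius fracture",
}

prompt = (
    "You are the on-call orthopaedic surgery consultant. Using the "
    "structured consult data below, provide an assessment and plan.\n\n"
    + json.dumps(consult, indent=2)
)
print(prompt)
```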

Language: English

Cited by

0

AI in clinical decision-making: ChatGPT-4 vs. Llama2 for otolaryngology cases
Antonino Maniaci, Cosima C. Hoch, Lise Sogalow, et al.

European Archives of Oto-Rhino-Laryngology, Journal year: 2025, Issue: unknown

Published: April 12, 2025

Language: English

Cited by

0

Assessing the clinical support capabilities of ChatGPT 4o and ChatGPT 4o mini in managing lumbar disc herniation
Suning Wang, Ying Wang, Linlin Jiang, et al.

European Journal of Medical Research, Journal year: 2025, Issue: 30(1)

Published: Jan. 22, 2025

Language: English

Cited by

0

In Reference to Utilization of Artificial Intelligence in the Creation of Patient Information on Laryngology Topics
Luigi Angelo Vaira, Giacomo De Riu, Antonino Maniaci, et al.

The Laryngoscope, Journal year: 2025, Issue: unknown

Published: Feb. 11, 2025

Letter to the Editor. The Laryngoscope, Early View. First published: 11 February 2025, https://doi.org/10.1002/lary.32032; accepted for publication December 02, 2024. The authors have no other funding, financial relationships, or conflicts of interest to disclose; the work was developed within the framework of the project e.INS, Ecosystem of Innovation for Next Generation Sardinia (cod. ECS 00000038), funded by the Italian Ministry of Education (MUR) under the National Recovery and Resilience Plan (NRRP). No abstract is available for this article.

Language: English

Cited by

0

Artificial intelligence in obstructive sleep apnea: A bibliometric analysis
Xing An, Jie Zhou, Qiang Xu, et al.

Digital Health, Journal year: 2025, Issue: 11

Published: March 1, 2025

Objective: To conduct a bibliometric analysis using VOSviewer and CiteSpace to explore the current applications, trends, and future directions of artificial intelligence (AI) in obstructive sleep apnea (OSA). Methods: On 13 September 2024, a computer search was conducted on the Web of Science Core Collection for literature published between 1 January 2011 and 30 August 2024 to identify studies related to the application of AI in OSA. Visualization analyses of countries, institutions, journal sources, authors, co-cited citations, and keywords were performed with VOSviewer and CiteSpace, and descriptive tables were created with Microsoft Excel 2021. Results: A total of 867 articles were included in this study. The number of publications was low and stable from 2011 to 2016, with a significant increase after 2017. China had the highest number of publications. Alvarez, Daniel and Hornero, Roberto were the two most prolific authors. Universidad de Valladolid and IEEE Journal of Biomedical and Health Informatics were the most productive institution and journal, respectively. The top three authors in terms of co-citation frequency were Hassan, Ar, Young, T, and Vicini, C. "Estimation of the global prevalence and burden of obstructive sleep apnoea: a literature-based analysis" was cited most frequently. Keywords such as "OSA," "machine learning," "electrocardiography," and "deep learning" were dominant. Conclusion: The application of AI in OSA research is expanding. This study indicates that AI, particularly deep learning, will continue to be a key research area, focusing on diagnosis, identification, personalized treatment, prognosis assessment, telemedicine, and management. Future efforts should enhance international cooperation and interdisciplinary communication to maximize AI's potential in advancing OSA research, comprehensively empowering sleep health, bringing more precise and convenient medical services to patients, and ushering in a new era of sleep health.
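
At the core of the VOSviewer-style keyword maps described here is a simple co-occurrence count over each article's keyword list. The sketch below shows that counting step on made-up records, not on the study's 867-article dataset:

```python
from collections import Counter
from itertools import combinations

# Each inner list stands for one article's author keywords (invented).
records = [
    ["OSA", "machine learning", "electrocardiography"],
    ["OSA", "deep learning"],
    ["OSA", "machine learning", "deep learning"],
]

pair_counts = Counter()
for keywords in records:
    # Every unordered keyword pair within one article is one co-occurrence.
    for pair in combinations(sorted(set(keywords)), 2):
        pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} -- {b}: {n}")
```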

Language: English

Cited by

0