Enhancing Patient Comprehension of Glomerular Disease Treatments Using ChatGPT

Yasir Abdelgadir, Charat Thongprayoon, Iasmina Craici et al.

Healthcare, Journal Year: 2024, Volume and Issue: 13(1), P. 57 - 57

Published: Dec. 31, 2024

Background/Objectives: It is often challenging for patients to understand treatment options, their mechanisms of action, and the potential side effects of each option for glomerular disorders. This study explored the ability of ChatGPT to simplify these options to enhance patient understanding. Methods: GPT-4 was queried on sixty-seven glomerular disorders using two distinct queries: a general explanation and an explanation adjusted to an 8th grade level or lower. Accuracy was rated on a scale of 1 (incorrect) to 5 (correct and comprehensive). Readability was measured by the average of the Flesch–Kincaid Grade (FKG) and SMOG indices, along with the Flesch Reading Ease (FRE) score. The understandability score (%) was determined with the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P). Results: GPT-4’s general explanations had a readability of 12.85 ± 0.93, corresponding to the upper end of high school. When tailored to at or below an 8th-grade level, readability improved to a middle school level of 8.44 ± 0.72. FRE and PEMAT-P scores also reflected improved understandability, increasing from 25.73 ± 6.98 to 60.75 ± 4.56 and from 60.7% to 76.8% (p < 0.0001 for both), respectively. However, accuracy of the tailored explanations was significantly lower compared with the general explanations (3.99 ± 0.39 versus 4.56 ± 0.66, p < 0.0001). Conclusions: ChatGPT shows significant potential in enhancing patient understanding of glomerular disorder therapies, but at the cost of reduced comprehensiveness. Further research is needed to refine performance, evaluate real-world impact, and ensure ethical use in healthcare settings.
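The FKG, FRE, and SMOG metrics this study relies on are standard closed-form formulas over sentence, word, and syllable counts. As a rough illustration only (this is not the authors' tooling, and the syllable counter below is a naive heuristic), a minimal Python sketch:

    import re
    from math import sqrt

    def count_syllables(word):
        # Naive heuristic: count runs of consecutive vowels. Real tools use
        # dictionaries or richer rules, so treat results as approximate.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        poly = sum(1 for w in words if count_syllables(w) >= 3)
        wps, spw = n / sentences, syllables / n
        return {
            # Flesch Reading Ease: higher = easier (25.73 -> 60.75 in the study)
            "FRE": 206.835 - 1.015 * wps - 84.6 * spw,
            # Flesch-Kincaid Grade: US school grade (12.85 -> 8.44 in the study)
            "FKG": 0.39 * wps + 11.8 * spw - 15.59,
            # SMOG grade, driven by density of polysyllabic (3+ syllable) words
            "SMOG": 1.043 * sqrt(poly * 30 / sentences) + 3.1291,
        }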

Language: English

Accuracy and Readability of ChatGPT Responses to Patient-Centric Strabismus Questions

Ashlyn A. Gary, James Lai, Elyana V. T. Locatelli et al.

Journal of Pediatric Ophthalmology & Strabismus, Journal Year: 2025, Volume and Issue: unknown, P. 1 - 8

Published: Feb. 19, 2025

Purpose: To assess the medical accuracy and readability of responses provided by ChatGPT (OpenAI), the most widely used artificial intelligence–powered chatbot, regarding questions about strabismus. Methods: Thirty-four questions were input into ChatGPT 3.5 (free version) and ChatGPT 4.0 (paid version) at three time intervals (day 0, 1 week, and 1 month) in two distinct geographic locations (California and Florida) in March 2024. Two pediatric ophthalmologists rated responses as “acceptable,” “accurate but missing key information or minor inaccuracies,” or “inaccurate or potentially harmful.” The online tool Readable measured the Flesch-Kincaid Grade Level and Flesch Reading Ease Score to assess readability. Results: Overall, 64% of responses were rated “acceptable”; the proportion of “acceptable” responses differed by version (47% for 3.5 vs 53% for 4.0, P < .05) and by state (77% for California vs 51% for Florida, P < .001). Responses in Florida were more likely to be rated “inaccurate or potentially harmful” compared with those in California (6.9% vs. 1.5%). Over the month, the overall percentage of “acceptable” responses increased (60% at day 0 to 67% at 1 month, P > .05), whereas the percentage of “inaccurate or potentially harmful” responses decreased (5%, 5%, and 3%, P > .05). On average, responses scored a Flesch-Kincaid Grade Level score of 15, equating to higher than a high school grade reading level. Conclusions: Although most of ChatGPT's responses about strabismus were clinically acceptable, there were variations across regions. The average reading level exceeded a high school grade level, demonstrating low readability. ChatGPT demonstrates potential as a supplementary resource for parents and patients with strabismus, and improving free versions may increase its utility. [J Pediatr Ophthalmol Strabismus. 20XX;X(X):XXX–XXX.]

Language: English

Citations: 0

Applications of Natural Language Processing in Otolaryngology: A Scoping Review
Norbert Banyi, Biao Ma, Ameen Amanian et al.

The Laryngoscope, Journal Year: 2025, Volume and Issue: unknown

Published: May 1, 2025

To review the current literature on applications of natural language processing (NLP) within the field of otolaryngology. MEDLINE, EMBASE, SCOPUS, Cochrane Library, Web of Science, and CINAHL were searched. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews checklist was followed. Databases were searched from date of inception up to Dec 26, 2023. Original articles on the application of language-based models to otolaryngology patient care and research, regardless of publication date, were included. Studies were classified under the 2011 Oxford CEBM levels of evidence. One hundred sixty-six papers with a median publication year of 2024 (range 1982 to 2024) were included. Sixty-one percent (102/166) used ChatGPT and were published in 2023 or 2024. Sixty studies used NLP for clinical education and decision support, 42 for patient education, 14 for electronic medical record improvement, 5 for triaging, 4 for trainee monitoring, 3 for telemedicine, and 1 for translation. For research, 37 used NLP for extraction, classification, or analysis of data, 17 for thematic analysis, and the remainder for evaluating scientific reporting and manuscript preparation. The role of NLP is evolving, with ChatGPT passing OHNS board simulations, though it requires improvement. NLP shows potential in post-treatment monitoring and is effective at extracting data from large unstructured data sets. There is limited research on administrative tasks. Guidelines for NLP use are critical.

Language: English

Citations: 0

Readability of Hospital Online Patient Education Materials Across Otolaryngology Specialties
Akshay Warrier, Rohan Bir Singh, Afash Haleem et al.

Laryngoscope Investigative Otolaryngology, Journal Year: 2025, Volume and Issue: 10(1)

Published: Feb. 1, 2025

Introduction: This study evaluates the readability of online patient education materials (OPEMs) across otolaryngology subspecialties, hospital characteristics, and national organizations, while assessing AI alternatives. Methods: Hospitals from the US News Best ENT list were queried for OPEMs describing a chosen surgery per subspecialty; the American Academy of Otolaryngology—Head and Neck Surgery (AAO), American Laryngological Association (ALA), Ear, Nose and Throat United Kingdom (ENTUK), and Canadian Society of Otolaryngology—Head and Neck Surgery (CSOHNS) were similarly queried. Google was queried for the top 10 links for each hospital and procedure. Ownership (private/public), presence of respective fellowships, region, and median household income (by zip code) were collected. Readability was assessed using seven indices, averaged: Automated Readability Index (ARI), Flesch Reading Ease Score (FRES), Flesch–Kincaid Grade Level (FKGL), Gunning Fog (GFR), Simple Measure of Gobbledygook (SMOG), Coleman–Liau (CLRI), and Linsear Write Formula (LWRF). AI-generated OPEMs from ChatGPT were compared on readability, accuracy, content, and tone. Analyses were conducted between groups, against the NIH standard, and across demographic variables. Results: Across 144 hospitals, OPEMs exceeded readability standards, averaging an 8th–12th grade level across subspecialties. In rhinology, facial plastics, and sleep medicine, hospital OPEMs had higher readability scores than ENTUK's (11.4 vs. 9.1, 10.4 vs. 7.2, and 11.5 vs. 9.2, respectively; all p < 0.05), but lower than AAO's (p = 0.005). ChatGPT-generated OPEMs averaged a 6.8-grade level, demonstrating improved readability, especially with specialized prompting, compared to organization OPEMs. Conclusion: Hospital sources exceed the NIH readability standard. ENTUK serves as a benchmark for accessible language, and ChatGPT demonstrates the feasibility of producing more readable content. Otolaryngologists might consider using AI, with caution, to generate patient-friendly materials and advocate for national-level improvements in readability.
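The seven-index composite used here can be approximated with the open-source textstat package; the sketch below is an assumption (the study does not name its exact tooling) and averages only the six grade-level indices, since FRES is a 0–100 ease score on a different scale:

    import textstat

    GRADE_INDICES = [
        textstat.automated_readability_index,  # ARI
        textstat.flesch_kincaid_grade,         # FKGL
        textstat.gunning_fog,                  # GFR
        textstat.smog_index,                   # SMOG
        textstat.coleman_liau_index,           # CLRI
        textstat.linsear_write_formula,        # LWRF
    ]

    def composite_grade(text):
        # Mean of the six grade-level indices; FRES is reported separately
        # because it is an ease score, not a grade.
        return sum(f(text) for f in GRADE_INDICES) / len(GRADE_INDICES)

    def fres(text):
        return textstat.flesch_reading_ease(text)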

Language: English

Citations: 0

Potential role of large language models and personalized medicine to innovate cardiac rehabilitation
R. Mishra, Hersh Patel, Aleena Jamal et al.

World Journal of Clinical Cases, Journal Year: 2025, Volume and Issue: 13(19)

Published: March 18, 2025

Cardiac rehabilitation is a crucial multidisciplinary approach to improve patient outcomes. There is a growing body of evidence that suggests these programs contribute towards reducing cardiovascular mortality and recurrence. Despite this, cardiac rehabilitation is underutilized, and adherence has been demonstrated to be a barrier in achieving these outcomes. As a result, there is a focus on innovating these programs, especially from the standpoint of digital health and personalized medicine. This editorial discusses the possible roles of large language models, such as ChatGPT, in further personalizing cardiac rehabilitation through simplifying medical jargon and employing motivational interviewing techniques, thus boosting engagement and adherence. However, these possibilities must be investigated in the clinical literature. Likewise, the integration of large language models, still in its nascent stages, will be challenging, particularly to ensure accurate and ethical information delivery.

Language: English

Citations: 0

Employing large language models safely and effectively as a practicing neurosurgeon
Advait Patil, Paul Serrato, Gracie Cleaver et al.

Acta Neurochirurgica, Journal Year: 2025, Volume and Issue: 167(1)

Published: April 9, 2025

Large Language Models (LLMs) have demonstrated significant capabilities to date in working with a neurosurgical knowledge-base and the potential to enhance practice and education. However, their role in the clinical workspace is still being actively explored. As many neurosurgeons seek to incorporate this technology into their local environments, we explore pertinent questions about how to deploy these systems in a safe and efficacious manner. The authors performed a literature search of LLM studies in neurosurgery on the PubMed database ("LLM" and "neurosurgery"). Papers were reviewed for use cases, considerations taken in the selection of specific LLMs, and challenges encountered, including the processing of private health information. We provide a review of core principles underpinning model selection; technical considerations such as access, context windows, multimodality, retrieval-augmented generation, and benchmark performance; as well as the relative advantages of current LLMs. Additionally, we discuss safety, paths for institutional support, and inference on private data. The resulting discussion forms a framework for the key dimensions neurosurgeons employing LLMs should consider. LLMs present promising opportunities to advance practice, but adoption necessitates careful consideration of technical, ethical, and regulatory hurdles. By thoughtfully evaluating deployment approaches and compliance requirements, neurosurgeons can leverage the benefits of LLMs while minimizing risks.

Language: English

Citations: 0

Improving Accessibility to Facial Plastic and Reconstructive Surgery Patient Resources Using Artificial Intelligence: A Pilot Study in Patient Education Materials
Ariana L. Shaari, S Bhalla, Parsa P. Salehi et al.

Facial Plastic Surgery & Aesthetic Medicine, Journal Year: 2025, Volume and Issue: unknown

Published: April 16, 2025

Background: The applications of artificial intelligence (AI) are evolving, offering new opportunities to enhance patient care. Objective: To determine whether the use of AI platforms for translating patient education materials (PEMs) improves their readability for patients seeking information on facial plastic and reconstructive surgery (FPRS) procedures. Methods: Text from 25 PEMs on topics such as rhytidectomy, rhinoplasty, and blepharoplasty was extracted. ChatGPT 4.o, ChatGPT 3.5, Microsoft Copilot, and Google Gemini were prompted to translate AAFPRS PEMs to a 6th-grade reading level, the accepted standard for PEMs. Readability was determined using the Flesch Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), and Flesch Kincaid Reading Ease (FKRE). Statistical analysis was performed. Results: A total of 125 PEMs were reviewed. Original PEMs had a mean FKGL, GFI, and FKRE of 10.7, 13.48, and 50.8, respectively, which exceed the recommended reading level. AI-translated PEMs had a mean FKGL, GFI, and FKRE of 8.41, 10.62, and 64.43, respectively, representing an improvement in readability (p < 0.001). Conclusion: With physician supervision, AI can improve the readability of common FPRS PEMs. This strategy may increase the accessibility of educational resources for diverse populations.
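The translate-then-verify workflow described here can be sketched as an LLM rewrite followed by a readability check. Everything below is illustrative: the model name, prompt, and grade threshold are assumptions, and any output would still need physician review:

    import textstat
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def simplify_pem(pem_text, target_grade=6.0):
        # Ask a chat model to rewrite the PEM at a 6th-grade reading level.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any chat-capable model could be used
            messages=[
                {"role": "system",
                 "content": "Rewrite this patient education material at a "
                            "6th-grade reading level without losing medical "
                            "accuracy."},
                {"role": "user", "content": pem_text},
            ],
        )
        simplified = response.choices[0].message.content
        # Verify with FKGL before use; physician supervision is still required.
        if textstat.flesch_kincaid_grade(simplified) > target_grade + 1:
            print("Warning: output still above the target grade level")
        return simplified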

Language: English

Citations: 0

Readability rescue: large language models may improve readability of patient education materials
Alyssa Breneman, Megan H. Trager, Emily R. Gordon et al.

Archives of Dermatological Research, Journal Year: 2024, Volume and Issue: 316(9)

Published: Oct. 10, 2024

Language: English

Citations: 1
