Journal of Clinical Medicine, 2025, Volume and Issue: 14(5), P. 1453
Published: Feb. 21, 2025
Background: Although spinal cord stimulation (SCS) is an effective treatment for managing chronic pain, many patients have understandable questions and concerns regarding this therapy. Artificial intelligence (AI) has shown promise in delivering patient education in healthcare. This study evaluates the reliability, accuracy, and comprehensibility of ChatGPT's responses to common patient inquiries about SCS.

Methods: Thirteen commonly asked questions about SCS were selected based on the authors' clinical experience in pain management and a targeted review of patient education materials and relevant medical literature. The questions were prioritized by their frequency in patient consultations, their relevance to decision-making about SCS, and the complexity of the information typically required to address them comprehensively. They spanned three domains: pre-procedural, intra-procedural, and post-procedural concerns. Responses were generated using GPT-4.0 with the prompt "If you were a physician, how would you answer a patient asking…". The responses were independently assessed by 10 physicians and two non-healthcare professionals using Likert scales for reliability (1–6 points), accuracy (1–3 points), and comprehensibility (1–3 points).

Results: ChatGPT's responses demonstrated strong reliability (5.1 ± 0.7) and comprehensibility (2.8 ± 0.2), with 92% and 98% of responses, respectively, meeting or exceeding our predefined thresholds. Accuracy was 2.7 ± 0.3, with 95% of responses rated as sufficiently accurate. General queries, such as "What is spinal cord stimulation?" and "What are the risks and benefits?", received higher scores compared with technical questions like "What are the different types of waveforms used in SCS?".

Conclusions: ChatGPT can be implemented as a supplementary tool for patient education, particularly for addressing general procedural queries. However, the AI's performance was less robust for highly nuanced and technical questions.
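To make the workflow in the Methods concrete, the sketch below shows in Python how the two steps could be reproduced: generating one response per question with the stated prompt stem, then summarizing rater scores as mean ± SD and the percentage of scores meeting a threshold. This is a minimal illustration, not code from the study: the openai client call, the model string "gpt-4", the example ratings, and the threshold value of 4 are assumptions; only the prompt stem and the 1–6 reliability scale come from the abstract.

```python
# Illustrative sketch (not from the paper): generate a response for a
# patient question, then summarize Likert ratings the way the Results
# report them (mean +/- SD, percent meeting a predefined threshold).
from statistics import mean, stdev
from openai import OpenAI  # assumes the official openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt stem quoted in the Methods.
PROMPT_STEM = "If you were a physician, how would you answer a patient asking: "

def generate_response(question: str) -> str:
    """Query the chat model once for a single patient question."""
    completion = client.chat.completions.create(
        model="gpt-4",  # placeholder for the GPT-4.0 version used in the study
        messages=[{"role": "user", "content": PROMPT_STEM + question}],
    )
    return completion.choices[0].message.content

def summarize(scores: list[float], threshold: float) -> tuple[float, float, float]:
    """Return (mean, SD, % of scores at or above the threshold)."""
    pct = 100 * sum(s >= threshold for s in scores) / len(scores)
    return mean(scores), stdev(scores), pct

# Hypothetical reliability ratings (1-6 Likert) for one response, pooled
# across the 12 raters; the threshold of 4 is an assumed cutoff.
reliability = [5, 6, 5, 4, 5, 6, 5, 5, 4, 6, 5, 5]
m, sd, pct = summarize(reliability, threshold=4)
print(f"reliability: {m:.1f} ± {sd:.1f}, {pct:.0f}% at/above threshold")
```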
Language: English