The computer will see you now: ChatGPT and artificial intelligence large language models for health information in urology—an invited perspective
Joseph Gabriel, Lidia Shafik, Elizabeth Vincent

et al.

Translational Andrology and Urology, Journal Year: 2023, Volume and Issue: 12(12), P. 1772 - 1774

Published: Dec. 1, 2023

Language: English

Accuracy, readability, and understandability of large language models for prostate cancer information to the public
Jacob Hershenhouse, Daniel Mokhtar, Michael Eppler

et al.

Prostate Cancer and Prostatic Diseases, Journal Year: 2024, Volume and Issue: unknown

Published: May 14, 2024

Background: Generative Pretrained Model (GPT) chatbots have gained popularity since the public release of ChatGPT. Studies have evaluated the ability of different GPT models to provide information about medical conditions. To date, no study has assessed the quality of ChatGPT outputs to prostate cancer-related questions from both the physician and public perspective while optimizing outputs for patient consumption.

Methods: Nine prostate cancer-related questions, identified through Google Trends (Global), were categorized into diagnosis, treatment, and postoperative follow-up. These questions were processed using ChatGPT 3.5, and the responses were recorded. Subsequently, these responses were re-inputted into ChatGPT to create simplified summaries understandable at a sixth-grade level. Readability of the original outputs and the layperson summaries was validated using readability tools. A survey was conducted among urology providers (urologists and urologists in training) to rate the responses for accuracy, completeness, and clarity on a 5-point Likert scale. Furthermore, two independent reviewers evaluated the responses on a correctness trifecta that included decision-making sufficiency. Public assessment of the summaries' understandability was carried out via Amazon Mechanical Turk (MTurk): participants rated the summaries and demonstrated their understanding through a multiple-choice question.

Results: The GPT-generated output was deemed correct by 71.7% to 94.3% of raters (36 urologists, 17 residents) across the nine scenarios. The independent reviewers rated the output as accurate and sufficient for decision-making in eight (88.9%) scenarios. The layperson summaries scored significantly better on readability than the original responses ([original vs. simplified ChatGPT response, mean (SD)] Flesch Reading Ease: 36.5 (9.1) vs. 70.2 (11.2), p < 0.0001; Gunning Fog: 15.8 (1.7) vs. 9.5 (2.0), p < 0.0001; Flesch-Kincaid Grade Level: 12.8 (1.2) vs. 7.4 (1.7); Coleman-Liau: 13.7 (2.1) vs. 8.6 (2.4), p = 0.0002; SMOG index: 11.8 (1.2) vs. 6.7 (1.8); Automated Readability Index: 13.1 (1.4) vs. 7.5 (2.1), p = 0.0001). MTurk workers (n = 514) rated the summaries as understandable (89.5-95.7%) and correctly understood the content (63.0-87.4%).

Conclusion: ChatGPT shows promise for generating patient education content, but the technology is not designed for delivering medical information to patients. Prompting the model to respond in simplified language may enhance its utility when GPT-powered chatbots are used for patient information.
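
To make the readability comparison above concrete, here is a minimal, self-contained Python sketch of the two most familiar metrics reported, Flesch Reading Ease and Flesch-Kincaid Grade Level. The syllable counter is a crude vowel-group approximation of our own, and the sample sentences are invented; the study itself used established readability tools, not this code.

    import re

    def count_syllables(word):
        # Crude approximation: count runs of vowels, minimum one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        n_syllables = sum(count_syllables(w) for w in words)
        wps = n_words / sentences      # words per sentence
        spw = n_syllables / n_words    # syllables per word
        return {
            # Flesch Reading Ease: 0-100 scale, higher means easier to read.
            "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
            # Flesch-Kincaid Grade Level: U.S. school grade needed to follow.
            "fk_grade_level": 0.39 * wps + 11.8 * spw - 15.59,
        }

    original = ("Prostate cancer diagnosis frequently necessitates "
                "histopathological confirmation via biopsy.")
    simplified = "Doctors test a small piece of tissue to confirm prostate cancer."
    print(readability(original))    # dense clinical text: low ease, high grade
    print(readability(simplified))  # plain-language text: higher ease, lower grade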

Language: English

Citations: 20

Performance of ChatGPT on the Taiwan urology board examination: insights into current strengths and shortcomings
Chung-You Tsai, Shang-Ju Hsieh, Hung-Hsiang Huang

et al.

World Journal of Urology, Journal Year: 2024, Volume and Issue: 42(1)

Published: April 23, 2024

Language: English

Citations: 10

Performance of ChatGPT on American Board of Surgery In-Training Examination Preparation Questions
Catherine G. Tran, Jeremy Chang, Scott K. Sherman

et al.

Journal of Surgical Research, Journal Year: 2024, Volume and Issue: 299, P. 329 - 335

Published: May 23, 2024

Language: English

Citations: 7

Investigating the clinical reasoning abilities of large language model GPT-4: an analysis of postoperative complications from renal surgeries
Jessica Hsueh, Daniel Nethala, Shiva M. Singh

et al.

Urologic Oncology Seminars and Original Investigations, Journal Year: 2024, Volume and Issue: 42(9), P. 292.e1 - 292.e7

Published: May 6, 2024

Language: English

Citations: 5

ChatGPT v4 outperforming v3.5 on cancer treatment recommendations in quality, clinical guideline, and expert opinion concordance
Chung-You Tsai, Pai‐Yu Cheng, Juinn‐Horng Deng

et al.

Digital Health, Journal Year: 2024, Volume and Issue: 10

Published: Jan. 1, 2024

Objectives: To assess the quality of ChatGPT's cancer treatment recommendations (RECs) and their alignment with National Comprehensive Cancer Network (NCCN) guidelines and expert opinions.

Methods: Three urologists performed quantitative and qualitative assessments in October 2023, analyzing responses from ChatGPT-4 and ChatGPT-3.5 to 108 prostate, kidney, and bladder cancer prompts using two zero-shot prompt templates. Performance evaluation involved calculating five ratios: expert-approved, expert-disagreed, and NCCN-aligned RECs against total ChatGPT RECs, plus coverage and adherence rates to the NCCN guidelines. Experts rated each response's quality on a 1-5 scale considering correctness, comprehensiveness, specificity, and appropriateness.

Results: ChatGPT-4 outperformed ChatGPT-3.5 on prostate cancer inquiries, with an average word count of 317.3 versus 124.4 (p < 0.001) and 6.1 versus 3.9 RECs per case (p < 0.001). Its rater-approved REC ratio (96.1% vs. 89.4%) and NCCN-aligned ratio (76.8% vs. 49.1%) were superior, and it scored significantly better on all quality dimensions. Across all prompts covering the three cancers, ChatGPT-4 produced 6.0 RECs per case, with an 88.5% approval rate from raters, 86.7% NCCN concordance, and only a 9.5% disagreement rate. It achieved high marks in correctness (4.5), comprehensiveness (4.4), specificity (4.0), and appropriateness (4.4). Subgroup analyses across cancer types, disease statuses, and the two prompt templates are reported.

Conclusions: ChatGPT-4 demonstrated significant improvement over ChatGPT-3.5 in providing accurate and detailed treatment recommendations for urological cancers in line with clinical guidelines and expert opinion. However, it is vital to recognize that AI tools are not without flaws and should be utilized with caution. ChatGPT could supplement, but not replace, the personalized advice of healthcare professionals.
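
As a concrete illustration of the ratio-based evaluation described in this abstract, the Python sketch below computes approval, disagreement, and NCCN-concordance ratios over a set of recommendations, plus a guideline-coverage rate. The annotation schema and sample numbers are hypothetical stand-ins; the study's actual data structures are not given in the abstract.

    from dataclasses import dataclass

    @dataclass
    class REC:
        expert_approved: bool   # raters endorsed this recommendation
        expert_disagreed: bool  # raters explicitly disagreed with it
        nccn_aligned: bool      # it matches an NCCN guideline option

    def evaluate(recs, nccn_options_total, nccn_options_covered):
        n = len(recs)
        return {
            "approval_rate": sum(r.expert_approved for r in recs) / n,
            "disagreement_rate": sum(r.expert_disagreed for r in recs) / n,
            "nccn_concordance": sum(r.nccn_aligned for r in recs) / n,
            # Coverage: share of guideline options the chatbot mentioned at all.
            "nccn_coverage": nccn_options_covered / nccn_options_total,
        }

    # Hypothetical case: 6 RECs, 5 approved, 1 disputed, 5 NCCN-aligned,
    # covering 4 of 6 applicable guideline options.
    recs = [REC(True, False, True)] * 5 + [REC(False, True, False)]
    print(evaluate(recs, nccn_options_total=6, nccn_options_covered=4))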

Language: English

Citations: 4

Revolutionizing Medicine: Chatbots as Catalysts for Improved Diagnosis, Treatment, and Patient Support
Syed Atif Abbas, Izmir Yusifzada, Sufia Athar

et al.

Cureus, Journal Year: 2025, Volume and Issue: unknown

Published: March 21, 2025

Chatbots have emerged as one of the revolutionary tools in healthcare, combining artificial intelligence to improve patient engagement in diagnosis, treatment, monitoring, and support. This review discusses the application of chatbots in medicine over the past two decades, focusing on several aspects of medicine, including oncology, psychiatry, and chronic disease management. A total of 38 clinically relevant studies were identified from a comprehensive literature search. These studies demonstrate efficacy in early symptom detection, personalized treatment planning, and emotional support, with particular emphasis on addressing mental health needs. The review also examines the role of chatbots in bridging the communication gap between patients and providers to achieve improved access to medical information and better adherence. The findings underscore the substantial promise of chatbots in enhancing healthcare delivery and reveal areas in need of further exploration, particularly radiology and advanced diagnostics. With the continuous rise of technological development, the use of chatbots is expected to become more central to streamlining healthcare processes toward better outcomes and enhanced health literacy. This review therefore calls for continued investment in the incorporation of these chatbot innovations into healthcare systems so that their full potential for patient care can be realized.

Language: English

Citations: 0

Assessing the Quality and Reliability of ChatGPT’s Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4
Ana Monteiro Grilo, Catarina Marques, Maria Corte-Real

et al.

JMIR Cancer, Journal Year: 2025, Volume and Issue: 11, P. e63677 - e63677

Published: April 16, 2025

Background: Patients frequently resort to the internet to access information about cancer. However, these websites often lack content accuracy and readability. Recently, ChatGPT, an artificial intelligence-powered chatbot, has signified a potential paradigm shift in how patients with cancer can access vast amounts of medical information, including insights into radiotherapy. However, the quality of the information provided by ChatGPT remains unclear. This is particularly significant given the general public's limited knowledge of this treatment and concerns about its possible side effects. Furthermore, evaluating the quality of responses is crucial, as misinformation can foster a false sense of security, lead to noncompliance, and result in delays in receiving appropriate treatment.

Objective: This study aims to evaluate the quality and reliability of ChatGPT's responses to common patient queries about radiotherapy, comparing the performance of two versions: GPT-3.5 and GPT-4.

Methods: We selected 40 commonly asked radiotherapy questions and entered them into both versions of ChatGPT. Response quality was evaluated by 16 experts using the General Quality Score (GQS), a 5-point Likert scale, with the median GQS determined based on the experts' ratings. Consistency and similarity of responses were assessed using the cosine similarity score, which ranges from 0 (complete dissimilarity) to 1 (complete similarity). Readability was analyzed using the Flesch Reading Ease Score, ranging from 0 to 100, and the Flesch-Kincaid Grade Level, reflecting the average number of years of education required for comprehension. Statistical analyses were performed with the Mann-Whitney test and effect size, with results deemed significant at the 5% level (P = .05). To assess agreement between experts, Krippendorff α and Fleiss κ were used.

Results: GPT-4 demonstrated superior performance, with higher GQS values and fewer scores of 2, compared with GPT-3.5. The analysis revealed statistically significant differences for some questions, generally favoring GPT-4. The median (IQR) cosine similarity score indicated substantial similarity between the two versions (0.81, IQR 0.05) and high within-version consistency (GPT-3.5: 0.85, IQR 0.04; GPT-4: 0.83, IQR 0.04). Responses from both versions were considered college level in readability, with GPT-4 scoring slightly better on the Flesch Reading Ease Score (34.61) and Flesch-Kincaid Grade Level (12.32) than GPT-3.5 (32.98 and 13.32, respectively). Responses from both versions remain challenging for the general public.

Conclusions: Both versions of ChatGPT showed the capability to address radiotherapy concepts, with GPT-4 showing better performance. However, both models present readability challenges for the general population. Although ChatGPT demonstrates potential as a valuable resource for addressing radiotherapy-related queries, it is imperative to acknowledge its limitations and the risk of misinformation issues. In addition, its implementation should be supported by strategies to enhance accessibility.
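
The cosine similarity score used in this study ranges from 0 to 1 over some vector representation of the two responses. A minimal bag-of-words variant is sketched below; the paper's exact vectorization is not specified here, so the term-count representation and sample sentences are assumptions for illustration.

    import math
    import re
    from collections import Counter

    def cosine_similarity(a, b):
        # Represent each response as a bag of lowercase word counts.
        va = Counter(re.findall(r"[a-z']+", a.lower()))
        vb = Counter(re.findall(r"[a-z']+", b.lower()))
        dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
        norm = (math.sqrt(sum(c * c for c in va.values()))
                * math.sqrt(sum(c * c for c in vb.values())))
        return dot / norm if norm else 0.0  # 0 = dissimilar, 1 = identical

    gpt35 = "Radiotherapy uses high-energy radiation to destroy cancer cells."
    gpt4 = "Radiation therapy destroys cancer cells using high-energy beams."
    print(f"cosine similarity: {cosine_similarity(gpt35, gpt4):.2f}")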

Language: English

Citations: 0

ChatGPT as a Clinical Decision Maker for Urolithiasis: Compliance with the Current European Association of Urology Guidelines
Ali Talyshinskii, Patrick Juliebø‐Jones, B. M. Zeeshan Hameed

et al.

European Urology Open Science, Journal Year: 2024, Volume and Issue: 69, P. 51 - 62

Published: Sept. 17, 2024

Language: English

Citations: 3

The impact of artificial intelligence (AI) in revolutionizing all aspects of urological care: a glimpse in the future
Carlotta Nedbal, Ewa Bres–Niewada, Bartosz Dybowski

et al.

Central European Journal of Urology, Journal Year: 2024, Volume and Issue: unknown

Published: Jan. 1, 2024

Language: English

Citations: 2

ChatGPT in Urology: Bridging Knowledge and Practice for Tomorrow's Healthcare, A Comprehensive Review
Catalina Solano, Nick Tarazona, Gabriela Prieto Angarita

et al.

Journal of Endourology, Journal Year: 2024, Volume and Issue: 38(8), P. 763 - 777

Published: June 14, 2024

Among emerging AI technologies, the Chat-Generative Pre-Trained Transformer (ChatGPT) stands out as a notable language model, uniquely developed through artificial intelligence research. Its proven versatility across various domains, from translation to healthcare data processing, underscores its promise within medical documentation, diagnostics, research, and education. The current comprehensive review aimed to investigate the utility of ChatGPT in urology education and practice and to highlight its potential limitations.

Language: English

Citations: 2