Assessing ChatGPT's educational potential in lung cancer radiotherapy: A readability, clinician, and patient evaluation (Preprint)

Cedric Richlitzki, Sina Mansoorian, Lukas Käsmann et al.

JMIR Cancer, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 8, 2024

Language: English

Integrating Large Language Models into Medication Management in Remote Healthcare: Current Applications, Challenges, and Future Prospects

Ho Yan Kwan, Jethro Shell, Conor Fahy et al.

Systems, Journal Year: 2025, Volume and Issue: 13(4), P. 281 - 281

Published: April 10, 2025

The integration of large language models (LLMs) into remote healthcare has the potential to revolutionize medication management by enhancing communication, improving adherence, and supporting clinical decision-making. This study aims to explore the role of LLMs in medication management, focusing on their impact. The paper comprehensively reviews the existing literature, medical LLM use cases, and commercial applications in healthcare. It also addresses the technical, ethical, and regulatory challenges related to the use of artificial intelligence (AI) in this context. The review methodology includes analyzing studies and applications, comparing their impact, and identifying gaps for future research and development. The review reveals that LLMs have shown significant benefits for communication between patients and providers, adherence monitoring, and decision-making in medication management. Compared with traditional reminder systems, AI-based systems achieved a 14% higher adherence rate in pilot studies. However, there are notable challenges, including data privacy concerns, system issues, and ethical dilemmas in AI-driven decisions such as bias and transparency. Overall, the paper offers a comprehensive analysis of both the transformative potential and the key challenges that must be addressed, and it provides insights for policymakers and researchers on optimizing the use of LLMs in remote healthcare.

Language: English

Citations: 0

Assessing the Quality and Reliability of ChatGPT’s Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4
Ana Monteiro Grilo, Catarina Marques, Maria Corte-Real et al.

JMIR Cancer, Journal Year: 2025, Volume and Issue: 11, P. e63677 - e63677

Published: April 16, 2025

Background: Patients frequently resort to the internet to access information about cancer. However, these websites often lack content accuracy and readability. Recently, ChatGPT, an artificial intelligence–powered chatbot, has signified a potential paradigm shift in how patients with cancer can access vast amounts of medical information, including insights into radiotherapy. However, the quality of the information provided by ChatGPT remains unclear. This is particularly significant given the general public’s limited knowledge of this treatment and concerns about its possible side effects. Furthermore, evaluating these responses is crucial, as misinformation can foster a false sense of security, lead to noncompliance, and result in delays in receiving appropriate treatment. Objective: This study aims to evaluate the quality and reliability of ChatGPT’s responses to common patient queries about radiotherapy, comparing the performance of two versions: GPT-3.5 and GPT-4. Methods: We selected 40 commonly asked radiotherapy questions and entered them into both versions of ChatGPT. Response quality was evaluated by 16 experts using the General Quality Score (GQS), a 5-point Likert scale, with the median GQS determined from the experts’ ratings. Consistency and similarity of responses were assessed using the cosine similarity score, which ranges from 0 (complete dissimilarity) to 1 (complete similarity). Readability was analyzed using the Flesch Reading Ease Score, ranging from 0 to 100, and the Flesch-Kincaid Grade Level, reflecting the average number of years of education required for comprehension. Statistical analyses were performed with the Mann-Whitney test and effect size, with results deemed significant at the 5% level (P=.05). To assess agreement between experts, Krippendorff α and Fleiss κ were used. Results: GPT-4 demonstrated superior performance, with a higher GQS and fewer responses scored 2, compared with GPT-3.5. The analysis revealed statistically significant differences for some questions, with GPT-4 generally achieving a higher median (IQR) GQS. The cosine similarity score indicated substantial similarity between the two versions (0.81, IQR 0.05) and consistency within each version (GPT-3.5: 0.85, IQR 0.04; GPT-4: 0.83, IQR 0.04). Readability of both versions was considered college level; the better-scoring version reached 34.61 on the Flesch Reading Ease Score and 12.32 on the Flesch-Kincaid Grade Level, compared with 32.98 and 13.32, respectively, for the other. Responses therefore remain challenging for the general public. Conclusions: Both versions showed the capability to address radiotherapy concepts, with GPT-4 showing superior performance. However, both models present readability challenges for the general population. Although ChatGPT demonstrates potential as a valuable resource for addressing radiotherapy-related queries, it is imperative to acknowledge its limitations, including risks of misinformation and readability issues. In addition, its implementation should be supported by strategies that enhance accessibility.
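The metrics named in this abstract (Flesch Reading Ease Score, Flesch-Kincaid Grade Level, and a 0-to-1 cosine similarity score) can be approximated with a few lines of code. The sketch below is illustrative only and is not the study’s implementation: it assumes the standard published Flesch formulas, a simple vowel-group syllable heuristic, and plain term-frequency cosine similarity, and the function names and example responses are hypothetical.

```python
# Illustrative sketch (not the study's code) of the readability and
# similarity metrics described in the abstract above.
import math
import re
from collections import Counter


def count_syllables(word: str) -> int:
    # Rough vowel-group heuristic; dedicated readability tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_scores(text: str) -> tuple[float, float]:
    # Standard formulas: Flesch Reading Ease and Flesch-Kincaid Grade Level.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences           # words per sentence
    spw = syllables / max(1, len(words))   # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level


def cosine_similarity(a: str, b: str) -> float:
    # Term-frequency cosine similarity: 0 = complete dissimilarity, 1 = identical wording.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0


if __name__ == "__main__":
    # Hypothetical example responses; the study's 40 questions are not reproduced here.
    a = "Radiotherapy uses high-energy radiation to destroy cancer cells."
    b = "Radiotherapy treats cancer by directing high-energy radiation at tumour cells."
    ease, grade = flesch_scores(b)
    print(f"Flesch Reading Ease: {ease:.2f}, Flesch-Kincaid Grade Level: {grade:.2f}")
    print(f"Cosine similarity between responses: {cosine_similarity(a, b):.2f}")
```

Dictionary-based syllable counting would give slightly different absolute scores than this heuristic, but the relative comparison between two responses behaves the same way.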

Language: English

Citations: 0

Enhancing Pulmonary Disease Prediction Using Large Language Models with Feature Summarization and Hybrid Retrieval-Augmented Generation: Multicenter Methodological Study based on Radiology Report (Preprint)

Ruiteng Li, Shuai Mao, Congmin Zhu et al.

Journal of Medical Internet Research, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 13, 2025

Language: English

Citations: 0

Medical accuracy of artificial intelligence chatbots in oncology: a scoping review
David Chen, Kate Elizabeth Avison, Saif Addeen Alnassar et al.

The Oncologist, Journal Year: 2025, Volume and Issue: 30(4)

Published: March 29, 2025

Background: Recent advances in large language models (LLMs) have enabled human-like qualities of natural language competency. Applied to oncology, LLMs have been proposed to serve as an information resource and to interpret vast amounts of data as a clinical decision-support tool to improve outcomes. Objective: This review aims to describe the current status of the medical accuracy of oncology-related LLM applications and research trends for further areas of investigation. Methods: A scoping literature search was conducted on Ovid Medline for peer-reviewed studies published since 2000. We included primary studies that evaluated the medical accuracy of a large language model applied in oncology settings. Study characteristics and outcomes were extracted to describe the landscape of oncology-related LLMs. Results: Sixty studies were included based on the inclusion and exclusion criteria. The majority evaluated LLMs on health question-answer style examinations (48%), followed by diagnosis (20%) and management (17%). The number of studies examining the utility of fine-tuning and prompt-engineering increased over time from 2022 to 2024. Studies reported advantages of LLMs as an accurate information resource, a reduction in clinician workload, and improved accessibility and readability of information, while noting disadvantages such as poor reliability, hallucinations, and the need for clinician oversight. Discussion: There exists significant interest in the application of LLMs in oncology, with a particular focus on their use as a clinical decision support tool. However, further research is needed to validate these tools on external hold-out datasets for generalizability across diverse clinical scenarios, underscoring the need for human supervision of these tools.

Language: English

Citations: 0

Assessing ChatGPT for clinical decision-making in radiation oncology, with open-ended questions and images
Wei-Kai Chuang, Yung‐Shuo Kao, Yen‐Ting Liu et al.

Practical Radiation Oncology, Journal Year: 2025, Volume and Issue: unknown

Published: April 1, 2025

Language: English

Citations: 0

The potential of large language models to advance precision oncology

Shufan Liang, Jiangjiang Zhang, Xingting Liu et al.

EBioMedicine, Journal Year: 2025, Volume and Issue: 115, P. 105695 - 105695

Published: April 29, 2025

Language: English

Citations: 0

Evaluation of AI ChatBots for the Creation of Patient-Informed Consent Sheets
Florian J. Raimann, Vanessa Neef, Marie Hennighausen et al.

Machine Learning and Knowledge Extraction, Journal Year: 2024, Volume and Issue: 6(2), P. 1145 - 1153

Published: May 24, 2024

Introduction: Large language models (LLMs), such as ChatGPT, are a topic of major public interest, and their potential benefits and threats are a subject of discussion. The contribution of these models to health care is widely discussed. However, few studies to date have examined the practical use of LLMs; for example, their use in (individualized) informed consent remains unclear. Methods: We analyzed the performance of ChatGPT 3.5, ChatGPT 4.0, and Gemini with regard to their ability to create an information sheet for six basic anesthesiologic procedures in response to corresponding questions. We performed multiple attempts for these forms of anesthesia and evaluated the results with checklists based on existing standard information sheets. Results: None of the models tested were able to create a legally compliant information sheet for any procedure. Overall, fewer than one-third of the risks, procedural descriptions, and preparations listed in the standard sheets were covered by the generated texts. Conclusions: There are clear limitations of current LLMs in terms of practical application. Advantages such as the generation of patient-adapted risk stratification within individual information sheets are not available at the moment, although further development is difficult to predict.

Language: English

Citations: 3

Testing and Validation of a Custom Retrained Large Language Model for the Supportive Care of HN Patients with External Knowledge Base
Libing Zhu, Yi Rong, L.A. McGee et al.

Cancers, Journal Year: 2024, Volume and Issue: 16(13), P. 2311 - 2311

Published: June 24, 2024

This study aimed to develop a retrained large language model (LLM) tailored to the needs of head and neck (HN) cancer patients treated with radiotherapy, with an emphasis on symptom management and survivorship care.

Language: English

Citations: 1

Comparing ChatGPT-3.5 and ChatGPT-4’s Alignments with the German evidence-based S3 Guideline for Adult Soft Tissue Sarcoma
Chengpeng Li, Jens Jakob, Franka Menge et al.

iScience, Journal Year: 2024, Volume and Issue: 27(12), P. 111493 - 111493

Published: Nov. 29, 2024

Language: English

Citations: 1

Large language models as an academic resource for radiologists stepping into artificial intelligence research
Satvik Tripathi, Jay Patel, Liam Mutter et al.

Current Problems in Diagnostic Radiology, Journal Year: 2024, Volume and Issue: unknown

Published: Dec. 1, 2024

Language: English

Citations: 1