Comparing Patient’s Confidence in Clinical Capabilities in Urology: Large Language Models Versus Urologists

Nicolas Carl,

Lisa Nguyen, Sarah Haggenmüller

et al.

European Urology Open Science, Journal Year: 2024, Volume and Issue: 70, P. 91 - 98

Published: Oct. 23, 2024

Language: English

Large language model use in clinical oncology

Nicolas Carl,

Franziska Schramm,

Sarah Haggenmüller

et al.

npj Precision Oncology, Journal Year: 2024, Volume and Issue: 8(1)

Published: Oct. 23, 2024

Large language models (LLMs) are undergoing intensive research for various healthcare domains. This systematic review and meta-analysis assesses the current applications, methodologies, and performance of LLMs in clinical oncology. A mixed-methods approach was used to extract, summarize, and compare methodological approaches and outcomes. The review includes 34 studies. LLMs are primarily evaluated on their ability to answer oncologic questions across various domains. The meta-analysis highlights significant performance variance, influenced by diverse methodologies and evaluation criteria. Furthermore, differences in inherent model capabilities, prompting strategies, and oncological subdomains contribute to this heterogeneity. The lack of standardized, LLM-specific reporting protocols leads to disparities that must be addressed to ensure the comparability of LLM studies and, ultimately, to enable the reliable integration of LLM technologies into clinical practice.

Language: English

Citations

6

AI-generated cancer prevention influencers can target risk groups on social media at low cost
Jana T. Winterstein,

J. Abels,

Anna Kuehn

et al.

European Journal of Cancer, Journal Year: 2025, Volume and Issue: 217, P. 115251 - 115251

Published: Jan. 18, 2025

This study explores the potential of Artificial Intelligence (AI)-generated social media influencers to disseminate cancer prevention messages. Utilizing a Generative AI (GenAI) application, we created the virtual persona "Wanda" to promote cancer awareness on Instagram. We designed five posts, each addressing one of the most relevant modifiable risk factors for cancer: tobacco consumption, unhealthy diet, sun exposure, alcohol consumption, and Human Papillomavirus (HPV) infection. To amplify the campaign's reach, the posts were boosted using custom-targeted advertisements as well as an automated advertisement algorithm. An overall budget of €100 was equally distributed between the two algorithms. Campaign performance was assessed based on the number of users reached and the age distribution of the audience. The campaign achieved a total of 9902 recognitions, with a cost-efficiency analysis revealing an average expenditure of €0.013 per user reached. The most economical intervention cost only €0.006 per user reached. In comparing the two strategies, we observed similar reach but noted differences in audience demographics. Our findings underscore the potential of combining generative AI with strategically targeted advertisements to disseminate prevention messages effectively, with minimal time and financial investment. We discuss the chances presented by GenAI applications for health communication, their implications, and their impact on parasocial relationships and content perception. This study highlights AI-driven influencers as scalable tools for digital health communication.
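The campaign economics above lend themselves to a quick back-of-envelope check. The sketch below uses only figures quoted in the abstract (€100 total budget, 9902 recognitions, €0.013 and €0.006 per-reach costs); the 50/50 budget split per strategy is stated, but the implied per-strategy reach derived from it is an illustrative assumption, not a reported result.

```python
# Back-of-envelope check of the campaign economics reported above.
# Assumption: the quoted per-reach costs (EUR 0.013 average, EUR 0.006 for
# the cheapest intervention) are taken at face value.

TOTAL_BUDGET_EUR = 100.0
TOTAL_REACH = 9902  # total "recognitions" reported by the campaign

# Overall cost per user reached across the whole campaign
avg_cost = TOTAL_BUDGET_EUR / TOTAL_REACH
print(f"Overall cost per reach: EUR {avg_cost:.4f}")

# Implied reach per strategy at the quoted per-reach prices, assuming the
# stated even split of the budget (a hypothetical breakdown for illustration)
for label, cost_per_reach in [("custom-targeted", 0.013), ("automated", 0.006)]:
    implied_reach = (TOTAL_BUDGET_EUR / 2) / cost_per_reach
    print(f"{label}: ~{implied_reach:.0f} users per EUR 50")
```

At these prices the automated strategy delivers roughly twice the reach per euro, which is consistent with the abstract's observation that cost-efficiency differed between the two algorithms.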

Language: English

Citations

0

Generative AI Chatbots for Reliable Cancer Information: Evaluating web-search, multilingual, and reference capabilities of emerging large language models
Bradley D. Menz, Natansh D. Modi, Ahmad Y. Abuhelwa

et al.

European Journal of Cancer, Journal Year: 2025, Volume and Issue: 218, P. 115274 - 115274

Published: Feb. 4, 2025

Recent advancements in large language models (LLMs) enable real-time web search, improved referencing, and multilingual support, yet ensuring they provide safe health information remains crucial. This perspective evaluates seven publicly accessible LLMs (ChatGPT, Co-Pilot, Gemini, MetaAI, Claude, Grok, Perplexity) on three simple cancer-related queries across eight languages (336 responses: English, French, Chinese, Thai, Hindi, Nepali, Vietnamese, Arabic). None of the 42 English responses contained clinically meaningful hallucinations, whereas 7 of 294 non-English responses did. Overall, 48% (162/336) of responses included valid references, but 39% of the references were .com links, reflecting quality concerns. Responses frequently exceeded an eighth-grade reading level, and many outputs were overly complex. These findings reflect substantial progress over the past two years but reveal persistent gaps in accuracy, reliable reference inclusion, referral practices, and readability. Ongoing benchmarking is essential to ensure LLMs can safely support users seeking cancer information worldwide and meet online health information standards.
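The headline rates in the abstract can be recomputed directly from the reported counts. A minimal sketch, using only the counts quoted above (42 English responses, 294 non-English responses, 162 of 336 with valid references):

```python
from fractions import Fraction

# Recompute the headline rates quoted in the abstract from the raw counts.
english_hallucinations = Fraction(0, 42)       # 0 of 42 English responses
non_english_hallucinations = Fraction(7, 294)  # 7 of 294 non-English responses
valid_reference_rate = Fraction(162, 336)      # responses with valid references

print(f"English hallucination rate: {float(english_hallucinations):.1%}")
print(f"Non-English hallucination rate: {float(non_english_hallucinations):.1%}")
print(f"Valid reference rate: {float(valid_reference_rate):.1%}")  # ~48.2%
```

Using `Fraction` keeps the ratios exact until the final formatting step, which avoids accumulating floating-point rounding when counts are combined.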

Language: English

Citations

0

Benchmarking Vision Capabilities of Large Language Models in Surgical Examination Questions
Jean‐Paul Bereuter,

Mark Enrik Geissler,

Anna Klimová

et al.

Journal of surgical education, Journal Year: 2025, Volume and Issue: 82(4), P. 103442 - 103442

Published: Feb. 9, 2025

Recent studies have investigated the potential of large language models (LLMs) for clinical decision making and for answering exam questions based on text input. Recent developments have extended these LLMs with vision capabilities; such image-processing models are called vision-language models (VLMs). However, there is limited investigation into the applicability of VLMs and their capabilities on image-based content. Therefore, the aim of this study was to examine the performance of publicly accessible models on two different surgical question sets consisting of text- and image-based questions. Original questions from subsets of the German Medical Licensing Examination (GMLE) and the United States Medical Licensing Examination (USMLE) were collected and answered by publicly available models (GPT-4, Claude-3 Sonnet, Gemini-1.5). Model outputs were benchmarked for accuracy. Additionally, the models' performance was compared to students' average historical performance (AHP) on the respective exams. Moreover, performance variations were analyzed in relation to question difficulty and question type. Overall, all models achieved scores equivalent to passing grades (≥60%) across both text-based datasets. On image-based questions, only GPT-4 exceeded the score required to pass, significantly outperforming Claude-3 and Gemini-1.5 (GPT-4: 78% vs. Claude-3: 58% and Gemini-1.5: 57.3%; p < 0.001). GPT-4 also outperformed students on the GMLE (GPT-4: 83.7% vs. students' AHP: 67.8%; p < 0.001) and on the USMLE (vs. students' AHP: 67.4%; p < 0.001). GPT-4 demonstrated substantial vision capabilities on image-based surgical questions, suggesting it holds considerable potential for use in the education of trainee surgeons.
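The abstract reports p < 0.001 for the GPT-4 vs. Claude-3/Gemini-1.5 accuracy gap on image-based questions. As an illustration of how such a comparison of proportions can be tested, here is a minimal pooled two-proportion z-test; the accuracies (78% vs. 58%) are from the abstract, but the sample size of 150 image-based questions per model is a hypothetical placeholder, not a figure from the study.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 150 image-based questions per model; 117/150 = 78%, 87/150 = 58%
z, p = two_proportion_z_test(x1=117, n1=150, x2=87, n2=150)
print(f"z = {z:.2f}, p = {p:.4g}")
```

Under these assumed counts, a 20-percentage-point gap is comfortably significant at p < 0.001, consistent with the abstract's reported threshold.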

Language: English

Citations

0

Evaluating interactions of patients with large language models for medical information
Nicolas Carl, Sarah Haggenmüller, Christoph Wies

et al.

BJU International, Journal Year: 2025, Volume and Issue: unknown

Published: Feb. 18, 2025

To explore the interaction of real-world patients with a chatbot in a clinical setting, investigating key aspects of medical information provided by large language models (LLMs). The study enrolled 300 patients seeking urological counselling between February and July 2024. First, participants voluntarily conversed with a Generative Pre-trained Transformer 4 (GPT-4) powered chatbot to ask questions related to their medical situation. In a following survey, participants rated the perceived utility, completeness, and understandability of the medical information provided during the simulated conversation, as well as its user-friendliness. Finally, participants were asked which, in their experience, best answered their medical questions: LLMs, urologists, or search engines. A total of 292 participants completed the study. The majority rated the chatbot as providing useful, complete, and understandable information and as being user-friendly. However, the perceived ability of human urologists to answer questions in an understandable way was rated higher than that of LLMs. Interestingly, 53% rated the question-answering abilities of LLMs higher than those of search engines. Age was not associated with preferences. Limitations include social desirability and sampling biases. This study highlights the potential of LLMs to enhance patient education and communication in clinical settings, with patients valuing their user-friendliness and comprehensiveness for medical information. By addressing preliminary questions, LLMs could potentially relieve time constraints on healthcare providers, enabling medical personnel to focus on complex inquiries and patient care.

Language: English

Citations

0

Evaluating AI Proficiency in Nuclear Cardiology: Large Language Models take on the Board Preparation Exam

Valerie Builoff,

Aakash Shanbhag,

Robert JH Miller

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown

Published: July 16, 2024

Previous studies have evaluated the ability of large language models (LLMs) in various medical disciplines; however, few have focused on image analysis, and none specifically on cardiovascular imaging or nuclear cardiology.

Language: English

Citations

3
