Can Large Language Models facilitate evidence-based decision support for conservation? DOI Creative Commons
Alec P. Christie, Raja K. Iyer, Anil Madhavapeddy

et al.

Research Square (Research Square), Journal year: 2024, Issue: unknown

Published: Nov. 13, 2024

Abstract: Wise use of evidence to support efficient conservation action is key to tackling biodiversity loss with limited time and resources. Evidence syntheses provide recommendations for decision-makers by assessing and summarising evidence, but are not always easy to access, digest, and use. Recent advances in Large Language Models (LLMs) present both opportunities and risks for enabling faster and more intuitive access to evidence databases. We evaluated the performance of ten LLMs (and three retrieval strategies) versus six human experts in answering synthetic multiple-choice question exams on the effects of conservation interventions, using the Conservation Evidence database. We found that open-book LLM performance was competitive with human experts over 45 filtered questions, with LLMs answering correctly when retrieving the document used to generate them. Across 1,867 unfiltered questions, closed-book LLMs demonstrated a level of conservation-specific knowledge, but this did vary across topic areas. Hybrid retrieval performed substantially better than dense and sparse methods, whilst recent models outperformed older ones. Our findings suggest that, with careful design, LLMs could potentially be powerful tools for enabling expert-level use of evidence databases. However, general ‘out-of-the-box’ LLMs are likely to perform poorly and misinform decision-makers.

Language: English

Assessing the accuracy of the GPT-4 model in multidisciplinary tumor board decision prediction DOI Creative Commons
Efe Cem Erdat, Merih Yalçıner, Mehmet Berk Örüncü

et al.

Clinical & Translational Oncology, Journal year: 2025, Issue: unknown

Published: March 25, 2025

Language: English

Cited by

1

Generative artificial intelligence in oncology DOI
Conner Ganjavi, Sam Melamed, Brett Biedermann

et al.

Current Opinion in Urology, Journal year: 2025, Issue: unknown

Published: March 3, 2025

Purpose of review: By leveraging models such as large language models (LLMs) and generative computer vision tools, generative artificial intelligence (GAI) is reshaping cancer research and oncologic practice from diagnosis to treatment and follow-up. This timely review provides a comprehensive overview of the current applications and future potential of GAI in oncology, including urologic malignancies. Recent findings: GAI has demonstrated significant potential in improving diagnosis by integrating multimodal data, streamlining diagnostic workflows, and assisting with imaging interpretation. In treatment, GAI shows promise in aligning clinical decisions with guidelines, optimizing systemic therapy choices, and aiding patient education. Posttreatment, applications include streamlining administrative tasks, supporting follow-up care, and monitoring adverse events. GAI can also support image analysis, data extraction, and outcomes research. Future developments could stimulate discovery, improve efficiency, and enhance the patient-physician relationship. Summary: Integration of GAI into oncology has shown some ability to improve accuracy, optimize decisions, and ultimately strengthen the patient-physician relationship. Despite these advancements, the inherent stochasticity of GAI's performance necessitates human oversight, more specialized models, proper physician training, and robust guidelines to ensure its well-tolerated and effective integration into practice.

Language: English

Cited by

1

NLP for Computational Insights into Nutritional Impacts on Colorectal Cancer Care DOI Creative Commons
Shaogang Gong, Xiaohong Jin, Yujie Guo

et al.

SLAS TECHNOLOGY, Journal year: 2025, Issue: 32, pp. 100295 - 100295

Published: April 17, 2025

Colorectal cancer (CRC) is one of the most prominent cancers globally, with its incidence rising among younger adults due to improved screening practices. However, existing algorithms for CRC prediction are frequently trained on datasets that primarily reflect older persons, thus limiting their usefulness in more diverse populations. Additionally, the part nutrition plays in CRC deterrence and management is gaining significant attention, although computational approaches for analyzing the impact of diet remain underdeveloped. This research introduces the Nutritional Impact Prediction Framework (NICRP-Framework), which combines Natural Language Processing (NLP) techniques with Adaptive Tunicate Swarm Optimized Large Language Models (ATSO-LLMs) to present important insights into CRC care across populations. The colorectal dietary and lifestyle dataset, encompassing >1000 participants, was collected from multiple regions and sources. The dataset includes structured and unstructured data, including textual descriptions of food ingredients. These were processed using standardization techniques such as stop-word removal, lowercasing, and punctuation elimination. Relevant terms were then extracted and visualized as a word cloud. The dataset also contained an imbalanced binary outcome, which was rebalanced utilizing random oversampling. ATSO-LLMs were employed to analyze the data, identifying key nutritional factors and forecasting CRC versus non-CRC phenotypes based on dietary patterns. The results show that combining NLP-derived features significantly enhances accuracy (98.4%), sensitivity (97.6%), specificity (96.9%), and F1-score (96.2%), with minimal misclassification rates. The framework represents a transformative advancement in life science by offering a new, data-driven approach to understanding the nutritional determinants of CRC, empowering healthcare professionals to make precise predictions and adapted interventions.
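The preprocessing and rebalancing steps named in the abstract (lowercasing, punctuation and stop-word removal, random oversampling of an imbalanced binary outcome) are standard text-mining operations. A minimal sketch of such a pipeline is given below; the stop-word list, record fields, and toy data are invented for illustration and are not the NICRP-Framework's implementation.

```python
# A minimal sketch (not the NICRP-Framework's actual code) of the preprocessing
# and rebalancing steps named in the abstract: lowercasing, punctuation and
# stop-word removal, then random oversampling of the minority class.
import random
import string
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "of", "with", "in"}  # assumed, illustrative list

def preprocess(text: str) -> list[str]:
    """Lowercase, strip punctuation, and drop stop words."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [tok for tok in text.split() if tok not in STOP_WORDS]

def random_oversample(records: list, labels: list):
    """Duplicate minority-class records at random until classes are balanced."""
    counts = Counter(labels)
    target = max(counts.values())
    out_records, out_labels = list(records), list(labels)
    for cls, n in counts.items():
        pool = [r for r, y in zip(records, labels) if y == cls]
        extra = random.choices(pool, k=target - n)
        out_records += extra
        out_labels += [cls] * (target - n)
    return out_records, out_labels

# Invented toy records; the real dataset holds >1000 participants.
descriptions = [
    "Grilled salmon with a side of spinach.",
    "Whole-grain bread, beans and fresh fruit.",
    "Processed red meat, fries and a soda.",
]
labels = [0, 0, 1]  # imbalanced binary outcome (e.g. non-CRC vs CRC)
tokens = [preprocess(d) for d in descriptions]
balanced_tokens, balanced_labels = random_oversample(tokens, labels)
print(Counter(balanced_labels))  # classes are now balanced
```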

Language: English

Cited by

0

Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from syntheses and databases DOI Creative Commons

Raja K. Iyer, Alec P. Christie, Anil Madhavapeddy

et al.

PLoS ONE, Journal year: 2025, Issue: 20(5), pp. e0323563 - e0323563

Published: May 15, 2025

Wise use of evidence to support efficient conservation action is key to tackling biodiversity loss with limited time and resources. Evidence syntheses provide recommendations for decision-makers by assessing and summarising evidence, but are not always easy to access, digest, and use. Recent advances in Large Language Models (LLMs) present both opportunities and risks for enabling faster and more intuitive systems to access evidence databases. Such natural language search and open-ended evidence-based responses rely on pipelines comprising many components. Most critical of these components are the LLM used and how evidence is retrieved from the database. We evaluate the performance of ten LLMs across six different database retrieval strategies against human experts in answering synthetic multiple-choice question exams on the effects of conservation interventions, using the Conservation Evidence database. We found that LLM performance was comparable to that of human experts over 45 filtered questions, with LLMs answering correctly when retrieving the document used to generate them. Across 1,867 unfiltered questions, LLMs demonstrated a level of conservation-specific knowledge, but this varied across topic areas. A hybrid retrieval strategy that combines keywords and vector embeddings performed best by a substantial margin. We also tested state-of-the-art against previous-generation LLMs, which were outperformed by all current models – including smaller, cheaper models. Our findings suggest that, with careful domain-specific design, LLMs could potentially be powerful tools for enabling expert-level use of evidence databases across disciplines. However, general ‘out-of-the-box’ LLMs are likely to perform poorly and misinform decision-makers. By establishing the performance LLMs exhibit in evidence synthesis when answering restricted queries over databases, future work can build on our approach to quantify the quality of open-ended responses.
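The hybrid retrieval strategy the authors found best combines keyword matching with vector embeddings. One common way to realise this is weighted score fusion of a sparse keyword score and a dense cosine-similarity score, sketched below; the scoring functions, the 0.5 weight, and the toy documents are illustrative assumptions, not the pipeline evaluated in the paper.

```python
# A minimal sketch of hybrid retrieval: blend a sparse keyword-overlap score
# with a dense cosine-similarity score. The stand-in embedding, the 0.5 weight
# and the toy documents are assumptions, not the paper's pipeline.
import math

def keyword_score(query: str, doc: str) -> float:
    """Sparse signal: fraction of query terms that appear in the document."""
    q_terms, d_terms = set(query.lower().split()), set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def embed(text: str) -> dict[str, float]:
    """Stand-in for a dense embedding; a real system would call an encoder model."""
    counts: dict[str, float] = {}
    for tok in text.lower().split():
        counts[tok] = counts.get(tok, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {k: v / norm for k, v in counts.items()}

def dense_score(query: str, doc: str) -> float:
    """Dense signal: cosine similarity between the (stand-in) embeddings."""
    q, d = embed(query), embed(doc)
    return sum(weight * d.get(term, 0.0) for term, weight in q.items())

def hybrid_rank(query: str, docs: list[str], alpha: float = 0.5) -> list[str]:
    """Rank documents by a weighted blend of the sparse and dense scores."""
    scored = [(alpha * keyword_score(query, doc) + (1 - alpha) * dense_score(query, doc), doc)
              for doc in docs]
    return [doc for _, doc in sorted(scored, key=lambda pair: pair[0], reverse=True)]

docs = [
    "Installing nest boxes increased occupancy by cavity-nesting birds.",
    "Fencing reduced grazing pressure on riparian vegetation.",
]
print(hybrid_rank("do nest boxes help birds", docs))
```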

Language: English

Cited by

0

Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from conservation syntheses DOI Creative Commons

Raja K. Iyer, Alec P. Christie, Anil Madhavapeddy

et al.

Research Square (Research Square), Journal year: 2025, Issue: unknown

Published: Jan. 23, 2025

Abstract: Wise use of evidence to support efficient conservation action is key to tackling biodiversity loss with limited time and resources. Evidence syntheses provide recommendations for decision-makers by assessing and summarising evidence, but are not always easy to access, digest, and use. Recent advances in Large Language Models (LLMs) present both opportunities and risks for enabling faster and more intuitive systems to access evidence databases. Such natural language search and open-ended evidence-based responses rely on pipelines comprising many components. Most critical of these components are the LLM used and how evidence is retrieved from the database. We evaluate the performance of ten LLMs across six different database retrieval strategies against human experts in answering synthetic multiple-choice question exams on the effects of conservation interventions, using the Conservation Evidence database. We found that LLM performance was comparable to that of human experts over 45 filtered questions, with LLMs answering correctly when retrieving the document used to generate them. Across 1,867 unfiltered questions, LLMs demonstrated a level of conservation-specific knowledge, but this varied across topic areas. A hybrid retrieval strategy that combines keywords and vector embeddings performed best by a substantial margin. We also tested state-of-the-art against previous-generation LLMs, which were outperformed by all current models - including smaller, cheaper models. Our findings suggest that, with careful domain-specific design, LLMs could potentially be powerful tools for enabling expert-level use of evidence databases across disciplines. However, general ‘out-of-the-box’ LLMs are likely to perform poorly and misinform decision-makers. By establishing the performance LLMs exhibit in evidence synthesis when answering restricted queries over databases, future work can build on our approach to quantify the quality of open-ended responses.

Language: English

Cited by

0

Artificial intelligence in healthcare education: evaluating the accuracy of ChatGPT, Copilot, and Google Gemini in cardiovascular pharmacology DOI Creative Commons
Ibrahim M. Salman, Omar Z. Ameer, Mohammad A. Khanfar

et al.

Frontiers in Medicine, Journal year: 2025, Issue: 12

Published: Feb. 19, 2025

Artificial intelligence (AI) is revolutionizing medical education; however, its limitations remain underexplored. This study evaluated the accuracy of three generative AI tools (ChatGPT-4, Copilot, and Google Gemini) in answering multiple-choice questions (MCQs) and short-answer questions (SAQs) related to cardiovascular pharmacology, a key subject in healthcare education. Using the free versions of each tool, we administered 45 MCQs and 30 SAQs across three difficulty levels: easy, intermediate, and advanced. AI-generated answers were reviewed by pharmacology experts. Each MCQ response was recorded as correct or incorrect, while SAQ responses were rated on a 1-5 scale based on relevance, completeness, and correctness. ChatGPT, Copilot, and Gemini demonstrated high accuracy scores on the easy and intermediate levels (87-100%). While all models showed a decline in performance on the advanced section, only Copilot (53% accuracy) and Gemini (20% accuracy) had significantly lower accuracy compared with their easy-intermediate levels. SAQ evaluations revealed high scores for ChatGPT (overall 4.7 ± 0.3) and Copilot (4.5 ± 0.4) across difficulty levels, with no significant differences between the two tools. In contrast, Gemini's SAQ scores were markedly lower across levels (overall 3.3 ± 1.0). ChatGPT-4 demonstrates the highest accuracy in addressing both question types, regardless of difficulty level. Copilot ranks second after ChatGPT, while Gemini shows limitations in handling complex questions and providing accurate SAQ-type answers in this field. These findings can guide the ongoing refinement of AI tools for specialized healthcare education.
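The study's two headline metrics are per-difficulty MCQ accuracy and the mean ± SD of 1-5 Likert ratings for SAQs. A minimal sketch of computing both is below; the sample responses are invented for illustration and are not the study's data.

```python
# A minimal sketch of the two metrics reported in the abstract: MCQ accuracy per
# difficulty level, and mean +/- SD of 1-5 Likert ratings for SAQs.
# The sample responses are invented, not the study's data.
from statistics import mean, stdev

mcq_correct = {  # difficulty level -> was each MCQ answered correctly?
    "easy": [True, True, True, True],
    "intermediate": [True, True, True, False],
    "advanced": [True, False, False, False],
}
saq_ratings = {  # difficulty level -> 1-5 expert ratings of SAQ answers
    "easy": [5, 5, 4],
    "intermediate": [5, 4, 4],
    "advanced": [4, 3, 4],
}

for level, answers in mcq_correct.items():
    accuracy = 100 * sum(answers) / len(answers)
    print(f"MCQ accuracy ({level}): {accuracy:.0f}%")

for level, ratings in saq_ratings.items():
    print(f"SAQ rating ({level}): {mean(ratings):.1f} ± {stdev(ratings):.1f}")
```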

Language: English

Cited by

0

The Use of Artificial Intelligence for Cancer Therapeutic Decision-Making DOI
Olivier Elemento, Sean Khozin, Cora N. Sternberg

et al.

NEJM AI, Journal year: 2025, Issue: unknown

Published: April 17, 2025

Language: English

Cited by

0

The Role of Artificial Intelligence (ChatGPT-4o) in Supporting Tumor Board Decisions DOI Open Access
Berkan Karabuğa, Cengiz Karaçin, Mustafa Büyükkör

et al.

Journal of Clinical Medicine, Journal year: 2025, Issue: 14(10), pp. 3535 - 3535

Published: May 18, 2025

Background/Objectives: Artificial intelligence (AI) has emerged as a promising field in the era of personalized oncology due to its potential to save time and workforce while serving as a supportive tool for patient management decisions. Although several studies in the literature have explored the integration of AI into practice across different tumor types, available data remain limited. In our study, we aimed to evaluate the role of AI in complex cancer cases by comparing the decisions of an in-house tumor board and ChatGPT-4o for patients with various tumor types. Methods: A total of 102 patients with diverse tumor types were included. Treatment and follow-up decisions proposed by both were independently evaluated by two medical oncologists using a 5-point Likert scale. Results: Analysis of agreement levels showed high inter-rater reliability (κ = 0.722, p < 0.001 for tumor board decisions; κ = 0.794, p < 0.001 for ChatGPT decisions). However, concordance between the tumor board and ChatGPT was low, as reflected in the assessments of both raters (Rater 1: κ = 0.211, p = 0.003; Rater 2: κ = 0.376, p < 0.001). Both raters more frequently agreed with the tumor board decisions, with a statistically significant difference observed (Z = +4.548, p < 0.001; Z = +3.990, p < 0.001). Conclusions: These findings suggest that AI, in its current form, is not yet capable of functioning as a standalone decision-maker in challenging cancer cases. Clinical experience and expert judgment remain the most critical factors guiding patient care.
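The agreement figures quoted above are Cohen's kappa statistics. The sketch below shows how kappa is computed for two raters (sklearn.metrics.cohen_kappa_score performs the same calculation); the ratings are invented toy data, not the study's assessments.

```python
# A minimal sketch of Cohen's kappa, the agreement statistic quoted in the
# abstract. The two raters' scores below are invented toy data, not the study's
# assessments; sklearn.metrics.cohen_kappa_score computes the same quantity.
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - chance) / (1 - chance)

rater_1 = [5, 4, 4, 2, 5, 3, 1, 4]  # e.g. 5-point Likert assessments
rater_2 = [5, 4, 3, 2, 5, 3, 2, 4]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.3f}")
```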

Language: English

Cited by

0
