Assessing the Guidelines on the Use of Generative Artificial Intelligence Tools in Universities: A Survey of the World’s Top 50 Universities
Midrar Ullah, Salman Bin Naeem, Maged N. Kamel Boulos

et al.

Big Data and Cognitive Computing, Journal Year: 2024, Volume 8(12), pp. 194-194

Published: Dec. 18, 2024

The widespread adoption of Generative Artificial Intelligence (GenAI) tools in higher education has necessitated the development of appropriate and ethical usage guidelines. This study aims to explore and assess the publicly available guidelines covering the use of GenAI in universities, following a predefined checklist. We searched for and downloaded the guidelines accessible on the websites of the top 50 universities globally, according to the 2025 QS university rankings. From the literature and the retrieved guidelines, we created a 24-item checklist, which was then reviewed by a panel of experts. The checklist was used to assess the characteristics of the retrieved guidelines. Out of the 50 university websites explored, guidelines were found on the sites of 41 institutions. All of these allowed the use of GenAI tools in academic settings, provided that the specific instructions detailed in the guidelines were followed. These instructions encompassed securing instructor consent before utilization, identifying inappropriate instances of deployment, employing suitable strategies for classroom assessment, appropriately integrating results, acknowledging and crediting the tools, and adhering to data privacy and security measures. However, our study found that only a small number of guidelines offered guidance on the AI algorithm itself (understanding how it works), documentation of prompts and outputs, detection mechanisms, and reporting of misconduct. Higher education institutions should develop comprehensive policies for the responsible use of GenAI tools. These policies must be frequently updated to stay in line with the fast-paced evolution of these technologies and their applications within the academic sphere.

Language: English

Crucial Role of Understanding in Human-Artificial Intelligence Interaction for Successful Clinical Adoption
Seong Ho Park, Curtis P. Langlotz

Korean Journal of Radiology, Journal Year: 2025, Volume 26

Published: Jan. 1, 2025

Language: English

Cited: 3

The generative revolution: AI foundation models in geospatial health—applications, challenges and future research
Bernd Resch, Polychronis Kolokoussis, David Hanny

et al.

International Journal of Health Geographics, Journal Year: 2025, Volume 24(1)

Published: April 2, 2025

Language: English

Cited: 1

Reflections on 2024 and Perspectives for 2025 for KJR
Seong Ho Park

Korean Journal of Radiology, Journal Year: 2025, Volume 26(1), pp. 1-1

Published: Jan. 1, 2025

Language: English

Cited: 0

Editor’s Note 2024: The Year in Review for Radiology
Linda Moy

Radiology, Journal Year: 2025, Volume 314(3)

Published: March 1, 2025

Language: English

Cited: 0

Conversion of Mixed-Language Free-Text CT Reports of Pancreatic Cancer to National Comprehensive Cancer Network Structured Reporting Templates by Using GPT-4
Hokun Kim, Bohyun Kim, Moon Hyung Choi

et al.

Korean Journal of Radiology, Journal Year: 2025, Volume 26

Published: Jan. 1, 2025

To evaluate the feasibility of generative pre-trained transformer-4 (GPT-4) in generating structured reports (SRs) from mixed-language (English and Korean) narrative-style CT reports for pancreatic ductal adenocarcinoma (PDAC) and to assess its accuracy in categorizing PDAC resectability. This retrospective study included consecutive free-text reports of pancreas-protocol CT for staging PDAC, from two institutions, written in English or Korean from January 2021 to December 2023. Both the GPT-4 Turbo and GPT-4o models were provided prompts along with the reports via an application programming interface and tasked with generating SRs and categorizing tumor resectability according to the National Comprehensive Cancer Network (NCCN) guidelines version 2.2024. Prompts were optimized using 50 reports from Institution B. The performances of the models on both tasks were evaluated using 115 reports from Institution A. Results were compared with a reference standard that was manually derived by an abdominal radiologist. Each report was consecutively processed three times, with the most frequent response selected as the final output. Error analysis was guided by the decision rationale provided by the models. Of the narrative reports tested, 96 (83.5%) contained both English and Korean. For SR generation, GPT-4 Turbo and GPT-4o demonstrated comparable accuracies (92.3% [1592/1725] and 92.2% [1590/1725], respectively; P = 0.923). In resectability categorization, GPT-4 Turbo showed higher accuracy than GPT-4o (81.7% [94/115] vs. 67.0% [77/115], P = 0.002). In the error analysis of GPT-4 Turbo, the SR generation error rate was 7.7% (133/1725 items), which was primarily attributed to inaccurate data extraction (54.1% [72/133]). The resectability categorization error rate was 18.3% (21/115), with the main cause being violation of the NCCN criteria (61.9% [13/21]). GPT-4 demonstrated acceptable performance in generating NCCN-based SRs from free-text CT reports of PDACs. However, oversight by human radiologists is essential for determining resectability based on the CT findings.
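The workflow described above (each report sent to the model repeatedly through an API at a fixed temperature, with the most frequent response kept as the final output) can be sketched as follows. This is a minimal illustration only: the model name, prompt wording, and function names are assumptions for the example, not the authors' actual code.

```python
# Sketch of the repeated-query / majority-vote pattern described in the abstract.
# Model name, prompt text, and report handling are illustrative assumptions.
from collections import Counter
from openai import OpenAI  # assumes the OpenAI Python SDK (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Convert the following narrative pancreas-protocol CT report into a structured "
    "report and categorize resectability per NCCN guidelines."  # placeholder prompt
)

def structure_report(report_text: str, model: str = "gpt-4o", n_runs: int = 3) -> str:
    """Query the model n_runs times and return the most frequent answer."""
    answers = []
    for _ in range(n_runs):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": report_text},
            ],
        )
        answers.append(resp.choices[0].message.content.strip())
    # Majority vote: keep the response that appears most often across the runs
    return Counter(answers).most_common(1)[0][0]
```

Majority voting over repeated runs is a simple way to damp the residual nondeterminism of API responses, even when the temperature is set low.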

Language: English

Cited: 0

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations
Arum Choi, Hyun Gi Kim, Moon Hyung Choi

et al.

Korean Journal of Radiology, Journal Year: 2025, Volume 26

Published: Jan. 1, 2025

Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident in-training examinations, analyze differences across question types, and compare the results with those of residents at different training levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two sets: one originally written in Korean and the other translated into English. We evaluated GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote of five independent trials. We analyzed the results by question type (text-only vs. image-based) and benchmarked them against nationwide residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed accuracy comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, both models performed better than residents across all years (69.0% and 77.9% vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English- and Korean-version questions showed no significant difference for either model (all P ≥ 0.275). The models' performance differed across question types: on image-based questions it matched that of 1st-year residents but not that of higher-year residents, whereas on text-only questions both models demonstrated performance superior to that of residents. Both models showed consistent performance between Korean and English inputs.
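As a rough illustration of the scoring procedure described in the abstract (majority vote over five independent trials per question, with accuracy broken down by question type), a small sketch is given below. The data layout and field names are assumptions for the example, not the study's materials.

```python
# Sketch: score multiple-choice answers by majority vote over repeated trials
# and break accuracy down by question type. The dict fields are illustrative.
from collections import Counter

def majority_answer(trial_answers):
    """Return the most frequent answer across independent trials (e.g., five runs)."""
    return Counter(trial_answers).most_common(1)[0][0]

def accuracy_by_type(questions):
    """questions: list of dicts with 'type', 'correct', and 'trials' (list of answers)."""
    hits, totals = Counter(), Counter()
    for q in questions:
        totals[q["type"]] += 1
        if majority_answer(q["trials"]) == q["correct"]:
            hits[q["type"]] += 1
    return {t: hits[t] / totals[t] for t in totals}

# Toy data, not study data
sample = [
    {"type": "text-only", "correct": "B", "trials": ["B", "B", "C", "B", "B"]},
    {"type": "image-based", "correct": "A", "trials": ["D", "A", "D", "D", "A"]},
]
print(accuracy_by_type(sample))  # {'text-only': 1.0, 'image-based': 0.0}
```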

Language: English

Cited: 0

A Standard Framework for Converting Coronary Angiography Reports into Machine-Readable Format Using Large Language Models

Song Jae-young, Ji-Yong Jang, Hyeongsoo Kim

et al.

medRxiv (Cold Spring Harbor Laboratory), Journal Year: 2025, Volume unknown

Published: May 6, 2025

ABSTRACT Background and Objectives: Coronary angiography (CAG) reports contain many details about coronary anatomy, lesion characteristics, and interventional procedures. However, their free-text format limits their research utility. Therefore, we sought to develop and validate a framework leveraging large language models (LLMs) to convert CAG reports automatically into a standardized, structured format. Methods: Using 50 reports from a tertiary hospital, we developed a multi-step framework to standardize and extract key information from CAG reports. First, a standard annotation schema was developed by cardiologists. Thereafter, an LLM (GPT-4o) converted the reports into the hierarchical structure defined in the schema. Finally, clinically relevant data points were extracted from the structured schema. One hundred reports from each of two hospitals were used for the internal and external tests, respectively. The 12 data points included four CAG-related items (previous stent information, anatomical diagnosis) and eight percutaneous coronary intervention (PCI)-related items (complex PCI criteria, current stent information). For the reference standard, cardiologists independently annotated the reports, with discrepancies resolved through consensus. Results: Based on the reference standard, the proposed framework demonstrated superior accuracy for CAG-related data points (99.5% vs. 91.8%; p < 0.001) and comparable accuracy for PCI-related data points (98.3% vs. 97.4%; p = 0.512) in the internal test. The external test confirmed high accuracy for both CAG-related (96.2%) and PCI-related (99.4%) data points. Conclusions: This framework showed excellent performance in standardizing CAG reports, potentially enabling more efficient utilization of detailed clinical data for cardiovascular research. Author’s Summary: This novel framework that standardizes CAG reports is a practical solution to a significant challenge: procedural reports remain a largely untapped data source. Our framework could enable systematic analysis of large-scale outcomes, reduce the burden on cardiologists in trial recruitment, and support evidence-based decision-making.
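The two-step idea in the Methods (an LLM maps each free-text report onto a hierarchical schema, after which the clinically relevant data points are extracted from that structure) can be illustrated roughly as follows. The JSON field names and example values are assumptions for this sketch, not the authors' published schema.

```python
# Sketch of the two-step idea: an LLM first maps a free-text CAG report onto a
# hierarchical schema (JSON), then named data points are pulled out of that JSON.
# Field names and values below are illustrative assumptions.
import json

# Step 1 would be an LLM call returning JSON shaped like this hierarchical schema:
EXAMPLE_STRUCTURED_REPORT = json.loads("""
{
  "cag": {
    "previous_stent": {"present": true, "location": "pLAD"},
    "anatomical_diagnosis": "two-vessel disease"
  },
  "pci": {
    "performed": true,
    "complex_pci_criteria": ["bifurcation lesion"],
    "current_stent": {"count": 1, "location": "mLAD"}
  }
}
""")

# Step 2: deterministic extraction of clinically relevant data points from the hierarchy.
def extract_data_points(report: dict) -> dict:
    """Flatten a structured CAG/PCI report into analysis-ready fields."""
    return {
        "previous_stent_present": report["cag"]["previous_stent"]["present"],
        "anatomical_diagnosis": report["cag"]["anatomical_diagnosis"],
        "pci_performed": report["pci"]["performed"],
        "complex_pci": bool(report["pci"]["complex_pci_criteria"]),
        "current_stent_count": report["pci"]["current_stent"]["count"],
    }

print(extract_data_points(EXAMPLE_STRUCTURED_REPORT))
```

Keeping the final extraction step deterministic (plain dictionary lookups rather than another LLM call) makes the downstream data points easy to audit against the structured report.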

Language: English

Cited: 0

Large Language Models in Medicine: Clinical Applications, Technical Challenges, and Ethical Considerations
Kyu-Hwan Jung

Healthcare Informatics Research, Journal Year: 2025, Volume 31(2), pp. 114-124

Published: April 30, 2025

This study presents a comprehensive review of the clinical applications, technical challenges, and ethical considerations associated with using large language models (LLMs) in medicine. A literature survey of peer-reviewed articles, reports, and expert commentary from relevant medical and artificial intelligence journals was conducted. Key application areas, technical limitations (e.g., accuracy, validation, transparency), and ethical issues (e.g., bias, safety, accountability, privacy) were identified and analyzed. LLMs have potential in documentation assistance, decision support, patient communication, and workflow optimization. The level of supporting evidence varies; applications that support clinicians are relatively mature, whereas autonomous diagnostics continue to face notable challenges regarding accuracy and validation. Technical challenges include model hallucination, lack of robust validation, integration issues, and limited transparency. Ethical concerns involve algorithmic bias risking health inequities, threats to patient safety from inaccuracies, unclear data privacy, and impacts on clinician-patient interactions. LLMs possess transformative potential for medicine, particularly by augmenting clinician capabilities. However, substantial hurdles necessitate rigorous research, clearly defined guidelines, and human oversight. Existing evidence supports an assistive rather than autonomous role, mandating careful, evidence-based adoption that prioritizes safety and equity.

Language: English

Cited: 0

Consensus on the Potential of Large Language Models in Healthcare: Insights from a Delphi Survey in Korea
Ah-Ram Sul, Seihee Kim

Healthcare Informatics Research, Journal Year: 2025, Volume 31(2), pp. 146-155

Published: April 30, 2025

Given the rapidly growing expectations for large language models (LLMs) in healthcare, this study systematically collected perspectives from Korean experts on the potential benefits and risks of LLMs, aiming to promote their safe and effective utilization. A web-based mini-Delphi survey was conducted from August 27 to October 14, 2024, with 20 selected panelists. The expert questionnaire comprised 84 judgment items across five domains: applications, benefits, risks, reliability requirements, and usage. These items were developed through a literature review and expert consultation. Participants rated their agreement or perceived importance on a 5-point scale. Items meeting predefined thresholds (content validity ratio ≥0.49, degree of convergence ≤0.50, degree of consensus ≥0.75) were prioritized. Seventeen participants (85%) responded in the first round, and 16 (80%) completed the second round. Consensus was achieved on several requirements for the use of LLMs in healthcare. However, significant heterogeneity was found regarding perceptions of the risks associated with LLMs and the criteria for their usage. Of the total items, 52 met the statistical validity criteria, confirming the diversity of opinions. Experts reached consensus on certain aspects of LLM utilization in healthcare. Nonetheless, notable differences remained concerning implementation, highlighting the need for further investigation. This study provides foundational insights to guide future research and inform policy development for the responsible introduction of LLMs into the healthcare field.
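For readers unfamiliar with the item-retention thresholds quoted in the abstract, the sketch below computes them using formulas commonly applied in Delphi studies: Lawshe's content validity ratio, convergence as half the interquartile range, and consensus as 1 minus the IQR divided by the median. The exact operationalization used by the authors may differ, and the ratings shown are toy values, not study data.

```python
# Sketch of the three item-retention statistics named in the abstract, using
# formulas commonly seen in Delphi analyses; the study's exact definitions may differ.
import statistics

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2)."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

def quartiles(ratings):
    q = statistics.quantiles(ratings, n=4, method="inclusive")
    return q[0], q[2]  # Q1, Q3

def degree_of_convergence(ratings) -> float:
    """Commonly defined as half the interquartile range: (Q3 - Q1) / 2."""
    q1, q3 = quartiles(ratings)
    return (q3 - q1) / 2

def degree_of_consensus(ratings) -> float:
    """Commonly defined as 1 - (Q3 - Q1) / median."""
    q1, q3 = quartiles(ratings)
    return 1 - (q3 - q1) / statistics.median(ratings)

ratings = [4, 5, 4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 5, 4, 4]  # 16 toy second-round ratings
n_essential = sum(r >= 4 for r in ratings)  # assumption: ratings of 4-5 count as "essential"
print(round(content_validity_ratio(n_essential, len(ratings)), 2))  # 0.88 -> meets >=0.49
print(degree_of_convergence(ratings))                               # 0.5  -> meets <=0.50
print(round(degree_of_consensus(ratings), 2))                       # 0.75 -> meets >=0.75
```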

Language: English

Cited: 0

Knowledge and Awareness of Generative Artificial Intelligence Use in Medicine Among International Stakeholders: A Cross‐Sectional Study
Xufei Luo, Bingyi Wang, Yule Li

et al.

Journal of Evidence-Based Medicine, Journal Year: 2025, Volume 18(2)

Published: June 1, 2025

ABSTRACT Objective: To assess the knowledge, attitudes, and practices (KAP) of medical stakeholders regarding the use of generative artificial intelligence (GAI) tools. Methods: A cross-sectional survey was conducted among stakeholders in medicine. Participants included researchers, clinicians, and journal editors with varying degrees of familiarity with GAI tools. The questionnaire comprised 40 questions covering four main dimensions: basic information, and knowledge, attitudes, and practices related to GAI. Descriptive analysis, Pearson's correlation, and multivariable regression were used to analyze the data. Results: The overall awareness rate of GAI tools was 93.3%. Participants demonstrated moderate knowledge (mean score 17.71 ± 5.56), positive attitudes (73.32 ± 15.83), and reasonable practices (40.70 ± 12.86). Factors influencing knowledge included education level and geographic region (p < 0.05). Attitudes were influenced by work experience (p < 0.05), while practices were driven by both knowledge and attitudes (p < 0.001). Participants from outside China scored higher in all dimensions compared with those from China. Additionally, 74.0% of participants emphasized the importance of reporting GAI usage in research, and 73.9% advocated for naming the specific tool used. Conclusion: The findings highlight growing awareness and a generally positive attitude toward GAI among medical stakeholders, alongside recognition of its ethical implications and the necessity of standardized practices. Targeted training and the development of clear guidelines are recommended to enhance the effective use of GAI in research and practice.

Language: English

Cited: 0