Emergency Medicine Assistants in the Field of Toxicology, Comparison of ChatGPT-3.5 and GEMINI Artificial Intelligence Systems
Hatice Aslı Bedel, Cihan Bedel, Fatih Selvi

et al.

Acta medica Lituanica, Journal Year: 2024, Volume and Issue: 31(2), P. 294 - 301

Published: Dec. 26, 2024

Artificial intelligence models human thinking and problem-solving abilities, allowing computers to make autonomous decisions. There is a lack of studies demonstrating the clinical utility of GPT and Gemini in the field of toxicology, which means their level of competence is not well understood. This study compares the responses given by GPT-3.5 and Gemini with those provided by emergency medicine residents. This prospective study focused on toxicology and utilized the widely recognized educational resource 'Tintinalli's Emergency Medicine: A Comprehensive Study Guide'. A set of twenty questions, each with five options, was devised to test knowledge of toxicological data as defined in the book. These questions were then put to ChatGPT (Generative Pre-trained Transformer 3.5, OpenAI) and Gemini (Google AI), and the resulting answers were meticulously analyzed. 28 physicians, 35.7% of whom were women, were included in our study. A comparison was made between the AI and physician scores. While a significant difference was found (F=2.368, p<0.001), no difference emerged between two of the groups in the post-hoc Tukey test. ChatGPT's mean score was 9.9±0.71, Gemini's was 11.30±1.17, and the physicians' was 9.82±3.70 (Figure 1). It is clear that these systems respond to toxicology topics much as resident physicians do.
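The comparison described in the abstract (one-way ANOVA across the three groups, followed by a post-hoc Tukey HSD test) can be sketched as below. The scores are simulated from the reported means and standard deviations purely for illustration; the study's raw per-participant data are not available in this listing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores (out of 20) drawn from the reported group means/SDs;
# group sizes for the AI systems are illustrative assumptions.
chatgpt = rng.normal(9.90, 0.71, 20)
gemini = rng.normal(11.30, 1.17, 20)
residents = rng.normal(9.82, 3.70, 28)  # 28 physicians, as reported

# One-way ANOVA across the three groups
f_stat, p_val = stats.f_oneway(chatgpt, gemini, residents)
print(f"F = {f_stat:.3f}, p = {p_val:.4f}")

# Post-hoc pairwise comparison, as in the abstract
tukey = stats.tukey_hsd(chatgpt, gemini, residents)
print(tukey)
```

With simulated data the exact F and p values will differ from the published ones (F=2.368, p<0.001); the point is only the shape of the analysis.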

Language: English

Programming Chatbots Using Natural Language: Generating Cervical Spine MRI Impressions
Ramin Javan, Theodore Kim,

Ahmed Abdelmonem

et al.

Cureus, Journal Year: 2024, Volume and Issue: unknown

Published: Sept. 14, 2024

The utility of machine learning, specifically large language models (LLMs), in the medical field has gained considerable attention. However, there is a scarcity of studies that focus on the application of LLMs to generating custom subspecialty radiology impressions. The primary objective of this study is to evaluate and compare the performance of multiple LLMs in generating specialized, accurate, and clinically useful impressions for degenerative cervical spine MRI reports.

Language: English

Citations

1

The performance of ChatGPT versus neurosurgery residents in neurosurgical board examination-like questions: a systematic review and meta-analysis

Edgar Dominic A. Bongco,

Sean Kendrich N.,

Mary Angeline Luz U. Hernandez

et al.

Neurosurgical Review, Journal Year: 2024, Volume and Issue: 47(1)

Published: Dec. 6, 2024

Language: English

Citations

0

Deep-Learning-Based Radiomics to Predict Surgical Risk Factors for Lumbar Disc Herniation in Young Patients: A Multicenter Study
Zheng Fan, Tong Wu, Yang Wang

et al.

Journal of Multidisciplinary Healthcare, Journal Year: 2024, Volume and Issue: Volume 17, P. 5831 - 5851

Published: Dec. 1, 2024

The aim of this study is to develop and validate a deep-learning radiomics model for predicting surgical risk factors of lumbar disc herniation (LDH) in young patients, to assist clinicians in identifying surgical candidates, alleviating symptoms, and improving prognosis.

Language: English

Citations

0

The Role of Generative AI in Empowering Generation Z in Higher Education
Rastislav Zábojník,

Viktor Hromada

Deleted Journal, Journal Year: 2024, Volume and Issue: unknown, P. 758 - 776

Published: Jan. 1, 2024

Generative artificial intelligence (AI) is increasingly integrated into higher education, offering advanced opportunities for personalized learning and tailored approaches that address students' specific needs. This study examines the influence of generative AI on the education of Generation Z, emphasizing its role in fostering critical thinking, its psychological implications, and its potential to transform traditional pedagogical methods. Employing a methodological framework of systematic literature review and analysis of national and international studies, the findings reveal that generative AI can significantly enhance student motivation and engagement. Personalized content delivery facilitates and supports the successful completion of complex academic tasks, promoting the development of the analytical and metacognitive skills necessary for navigating intricate information landscapes. However, over-reliance risks diminishing independent problem-solving abilities, underscoring the need for a balanced integration of this technology into educational practices. The study further highlights challenges such as digital overload, which may adversely affect mental health, and reduced social competence due to decreased human interaction. In response, strategic implementation is recommended, designed to optimize benefits while mitigating risks to emotional development. Generative AI should be leveraged as a supportive tool in the learning experience, with a strong focus on ethical standards and holistic growth. Its effective use spans the technical, cognitive, and social dimensions of learning, contributing to the sustainable development of students in this era.

Language: English

Citations

0
