Knowledge Discovery on Artificial Intelligence and Physical Therapy: Document Mining Analysis DOI

Leelarungrayub Jirakrit, Yankai Araya, Thipcharoen Supattanawaree et al.

IgMin Research, Journal Year: 2024, Volume and Issue: 2(11), P. 929 - 937

Published: Nov. 21, 2024

Artificial intelligence (AI) is the simulation of human intelligence and a benchmark in Physical Therapy (PT). Therefore, updated knowledge derived from large databases is highly engaging. This study aimed at a Data Mining (DM) analysis of a large database related to "AI" and "PT" for co-occurrence words, network clusters, and trends under Knowledge Discovery in Databases (KDD). The terms were cited from SCOPUS. Co-occurrence, clustering, and trends were computer-analyzed with a bibliometric tool. Between 1993 and 2024, 174 documents were published, revealing that the most frequently used terms were AI, human, PT, physical modalities, machine learning, treatment, deep learning, patient rehabilitation, robotics, virtual reality, algorithms, telerehabilitation, ergonomics, exercise, quality of life, and other topics. Five clusters were discovered: (1) decision support systems, health care, human-computer interaction, intelligent robots, learning, neuromuscular, stroke, etc.; (2) aged, biomechanics, exercise therapy, female, humans, middle-aged, PT, treatment outcome; (3) diagnosis; (4) review and systematic review; (5) clinical practice. From 2008, emerging fields included computer-assisted planning, classification, equipment design, signal processing, practice, etc. The analysis discovered different uses of AI.
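The abstract describes a keyword co-occurrence and clustering workflow over SCOPUS records. As a rough illustration of that kind of analysis (not the authors' actual bibliometric tool or data; the documents and keyword names below are hypothetical), a minimal Python sketch of co-occurrence counting might look like this:

# Illustrative sketch only: a minimal keyword co-occurrence count of the kind
# used in bibliometric/KDD analyses. The toy data below are invented, not the
# study's SCOPUS records.
from collections import Counter
from itertools import combinations

# Each document is represented by its indexed keywords (hypothetical examples).
documents = [
    ["artificial intelligence", "physical therapy", "machine learning"],
    ["artificial intelligence", "rehabilitation", "robotics"],
    ["physical therapy", "rehabilitation", "virtual reality"],
]

# Count how often each pair of keywords appears in the same document.
cooccurrence = Counter()
for keywords in documents:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The most frequent pairs form the edges of the co-occurrence network
# that clustering then groups into themes.
for (a, b), count in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {count}")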

Language: English

Encouragement vs. liability: How prompt engineering influences ChatGPT-4's radiology exam performance DOI
Daniel Nguyen, Allison M. MacKenzie, Young H. Kim et al.

Clinical Imaging, Journal Year: 2024, Volume and Issue: 115, P. 110276 - 110276

Published: Sept. 6, 2024

Language: English

Citations: 5

Evaluating performance of ChatGPT on MKSAP cardiology board review questions DOI
Stefan Milutinovic, Marija Petrović, Dustin Begosh-Mayne et al.

International Journal of Cardiology, Journal Year: 2024, Volume and Issue: unknown, P. 132576 - 132576

Published: Sept. 1, 2024

Language: English

Citations: 4

Comparative Accuracy of ChatGPT 4.0 and Google Gemini in Answering Pediatric Radiology Text-Based Questions DOI Open Access

Mohammed Abdul Sami, Abdul Samad, Keyur Parekh et al.

Cureus, Journal Year: 2024, Volume and Issue: unknown

Published: Oct. 5, 2024

This study evaluates the accuracy of two AI language models, ChatGPT 4.0 and Google Gemini (as of August 2024), in answering a set of 79 text-based pediatric radiology questions from "Pediatric Imaging: A Core Review." Accurate interpretation of text and images is critical in radiology, making such tools valuable in medical education.

Language: English

Citations: 4

Comparative accuracy of artificial intelligence chatbots in pulpal and periradicular diagnosis: A cross-sectional study DOI
João Daniel Mendonça de Moura, Carlos Eduardo Fontana, Victor Hugo de Souza Lima et al.

Computers in Biology and Medicine, Journal Year: 2024, Volume and Issue: 183, P. 109332 - 109332

Published: Oct. 31, 2024

Language: English

Citations: 4

Artificial intelligence chatbots in musculoskeletal rehabilitation: change is knocking at the door DOI

Giacomo Rossettini, Alvisa Palese, Federica Corradi et al.

Minerva Orthopedics, Journal Year: 2025, Volume and Issue: 75(6)

Published: Jan. 1, 2025

Language: English

Citations: 0

Evaluating AI-Generated Responses from Different Chatbots to Soil Science-Related Questions DOI Creative Commons
Javad Khanifar

Soil Advances, Journal Year: 2025, Volume and Issue: 3, P. 100034 - 100034

Published: Jan. 31, 2025

Language: English

Citations: 0

Education and Training Assessment and Artificial Intelligence. A Pragmatic Guide for Educators DOI Creative Commons
Philip M. Newton, Sue Jones

British Journal of Biomedical Science, Journal Year: 2025, Volume and Issue: 81

Published: Feb. 5, 2025

The emergence of ChatGPT and similar new Generative AI tools has created concern about the validity of many current assessment methods in higher education, since learners might use these tools to complete those assessments. Here we review the evidence on this issue and show that, for assessments like essays and multiple-choice exams, the concerns are legitimate: Generative AI can complete them to a very high standard, quickly and cheaply. We consider how to assess learning in alternative ways, and the importance of retaining foundational core knowledge. This is considered from the perspective of the professional regulations covering the registration of Biomedical Scientists and their Health Care Professions Council (HCPC) approved education providers, although it should be broadly relevant across education.

Language: English

Citations: 0

Comparison of ChatGPT-4o, Google Gemini 1.5 Pro, Microsoft Copilot Pro, and Ophthalmologists in the management of uveitis and ocular inflammation: A comparative study of large language models DOI

Senol Demir

Journal Français d Ophtalmologie, Journal Year: 2025, Volume and Issue: 48(4), P. 104468 - 104468

Published: March 13, 2025

Language: English

Citations: 0

The role of generative AI tools in shaping mechanical engineering education from an undergraduate perspective DOI Creative Commons
Harshal D. Akolekar, Piyush Jhamnani, Vikas Kumar et al.

Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1)

Published: March 17, 2025

Abstract: This study evaluates the effectiveness of three leading generative AI tools (ChatGPT, Gemini, and Copilot) in undergraduate mechanical engineering education using a mixed-methods approach. The performance of these tools was assessed on 800 questions spanning seven core subjects and covering multiple-choice, numerical, and theory-based formats. While all tools demonstrated strong performance on many question types, they struggled with numerical problem-solving, particularly in areas requiring deep conceptual understanding and complex calculations. Among them, Copilot achieved the highest accuracy (60.38%), followed by Gemini (57.13%) and ChatGPT (46.63%). To complement these findings, a survey of 172 students and interviews with 20 participants provided insights into user experiences, challenges, and perceptions in academic settings. Thematic analysis revealed concerns regarding AI's reliability for such tasks and its potential impact on students' problem-solving abilities. Based on the results, this study offers strategic recommendations for integrating these tools into curricula, ensuring responsible use to enhance learning without fostering dependency. Additionally, we propose instructional strategies to help educators adapt assessment methods in the era of AI-assisted learning. These findings contribute to the broader discussion of the role of generative AI and its implications for future teaching methodologies.
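As a rough illustration of the accuracy bookkeeping such an evaluation implies (the tools, formats, and graded records below are invented placeholders, not the study's data or results), a minimal Python sketch could tally per-tool accuracy like this:

# Hypothetical sketch: grading each tool's answers and reporting overall
# accuracy per tool. The records below are invented placeholders.
from collections import defaultdict

# (tool, question_format, is_correct) records, e.g. from a graded answer sheet.
graded = [
    ("Copilot", "multiple-choice", True),
    ("Copilot", "numerical", False),
    ("Gemini", "theory", True),
    ("ChatGPT", "numerical", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for tool, fmt, ok in graded:
    totals[tool] += 1
    correct[tool] += ok  # True counts as 1, False as 0

for tool in totals:
    accuracy = 100.0 * correct[tool] / totals[tool]
    print(f"{tool}: {accuracy:.2f}% over {totals[tool]} questions")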

Language: English

Citations: 0

Fine-Tuning AI Models for Enhanced Consistency and Precision in Chemistry Educational Assessments DOI Creative Commons
Sri Yamtinah, Antuni Wiyarsi, Hayuni Retno Widarti et al.

Computers and Education Artificial Intelligence, Journal Year: 2025, Volume and Issue: unknown, P. 100399 - 100399

Published: March 1, 2025

Language: English

Citations: 0