IgMin Research, Journal Year: 2024, Volume and Issue: 2(11), P. 929 - 937, Published: Nov. 21, 2024
Artificial intelligence (AI) is the simulation of human intelligence, and it is setting benchmarks in Physical Therapy (PT). Therefore, updated knowledge derived from large databases is highly engaging. The aim was a Data Mining (DM) analysis of a big database related to “AI” and “PT”, examining co-occurrence of words, network clusters, and trends under Knowledge Discovery in Databases (KDD). The terms were cited in SCOPUS. Co-occurrence, clustering, and trends were computer-analyzed with a bibliometric tool. Between 1993 and 2024, 174 documents were published, revealing that the most frequently used terms were AI, human, PT, physical modalities, machine learning, treatment, deep learning, patient rehabilitation, robotics, virtual reality, algorithms, telerehabilitation, ergonomics, exercise, quality of life, and other topics. Five clusters were discovered: (1) decision support systems, health care, human-computer interaction, intelligent robots, learning, neuromuscular, stroke, etc.; (2) aged, biomechanics, exercise therapy, female, humans, middle-aged, PT, treatment outcome; (3) diagnosis; (4) review and systematic review; (5) clinical practice. From 2008, emerging fields included computer-assisted planning, classification, equipment design, signal processing, practice, etc. Different uses of AI were discovered…
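The co-occurrence and clustering step described in this abstract can be illustrated with a short sketch. The Python example below is a minimal, hypothetical illustration (the sample keyword lists and the use of scikit-learn's AgglomerativeClustering are assumptions, not the bibliometric tool actually used in the study): it counts how often keywords appear together across exported records and then groups keywords by their co-occurrence profiles.

```python
# Minimal sketch of keyword co-occurrence analysis on exported records.
# The sample data and the clustering choice are illustrative assumptions,
# not the actual bibliometric pipeline used in the study.
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical author-keyword lists, one per document (e.g. from a SCOPUS export).
records = [
    ["artificial intelligence", "physical therapy", "machine learning"],
    ["artificial intelligence", "rehabilitation", "robotics"],
    ["physical therapy", "rehabilitation", "virtual reality"],
    ["machine learning", "deep learning", "rehabilitation"],
]

# Count how often each pair of keywords appears in the same document.
pair_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# Build a symmetric co-occurrence matrix over the keyword vocabulary.
vocab = sorted({kw for keywords in records for kw in keywords})
index = {kw: i for i, kw in enumerate(vocab)}
matrix = np.zeros((len(vocab), len(vocab)))
for (a, b), count in pair_counts.items():
    matrix[index[a], index[b]] = matrix[index[b], index[a]] = count

# Group keywords into clusters based on their co-occurrence profiles.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(matrix)
for cluster in range(2):
    members = [kw for kw, lab in zip(vocab, labels) if lab == cluster]
    print(f"cluster {cluster}: {members}")
```

In a full bibliometric workflow the matrix would be built from the entire SCOPUS export and visualized as a network, but the counting and grouping logic is essentially the same.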
Cureus, Journal Year: 2024, Volume and Issue: unknown, Published: Oct. 5, 2024
This study evaluates the accuracy of two AI language models, ChatGPT 4.0 and Google Gemini (as of August 2024), in answering a set of 79 text-based pediatric radiology questions from "Pediatric Imaging: A Core Review." Accurate interpretation of text and images is critical in radiology, making such tools valuable in medical education.
British Journal of Biomedical Science, Journal Year: 2025, Volume and Issue: 81, Published: Feb. 5, 2025
The emergence of ChatGPT and similar new Generative AI tools has created concern about the validity of many current assessment methods in higher education, since learners might use these tools to complete those assessments. Here we review evidence on this issue and show that for assessments like essays and multiple-choice exams, the concerns are legitimate: these tools can complete them to a very high standard, quickly and cheaply. We consider how to assess learning in alternative ways, and the importance of retaining foundational core knowledge. This is considered from the perspective of the professional regulations covering the registration of Biomedical Scientists and their Health and Care Professions Council (HCPC)-approved education providers, although it should be broadly relevant across education.
Scientific Reports, Journal Year: 2025, Volume and Issue: 15(1), Published: March 17, 2025
This study evaluates the effectiveness of three leading generative AI tools (ChatGPT, Gemini, and Copilot) in undergraduate mechanical engineering education using a mixed-methods approach. The performance of these tools was assessed on 800 questions spanning seven core subjects, covering multiple-choice, numerical, and theory-based formats. While all three tools demonstrated strong performance on many questions, they struggled with numerical problem-solving, particularly in areas requiring deep conceptual understanding and complex calculations. Among them, Copilot achieved the highest accuracy (60.38%), followed by Gemini (57.13%) and ChatGPT (46.63%). To complement these findings, a survey of 172 students and interviews with 20 participants provided insights into user experiences, challenges, and perceptions in academic settings. Thematic analysis revealed concerns regarding the AI tools’ reliability on such tasks and their potential impact on students’ problem-solving abilities. Based on these results, this study offers strategic recommendations for integrating these tools into curricula, ensuring responsible use to enhance learning without fostering dependency. Additionally, we propose instructional strategies to help educators adapt assessment methods for the era of AI-assisted learning. These findings contribute to the broader discussion of the role and implications of generative AI in future educational methodologies.
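The quantitative part of this comparison amounts to scoring each response as correct or incorrect and then aggregating by tool and by question format. The sketch below shows one way such an aggregation could be done with pandas; the column names and sample rows are illustrative assumptions, not the authors' dataset or analysis code.

```python
# Sketch of the quantitative comparison: overall and per-format accuracy
# for each tool. The grading-sheet layout and sample rows are assumptions
# for illustration, not the study's actual data.
import pandas as pd

# Hypothetical grading sheet: one row per (tool, question) with a 0/1 score.
results = pd.DataFrame(
    {
        "tool": ["Copilot", "Copilot", "Gemini", "Gemini", "ChatGPT", "ChatGPT"],
        "format": ["multiple-choice", "numerical"] * 3,
        "correct": [1, 1, 1, 0, 1, 0],
    }
)

# Overall accuracy per tool (the study reports 60.38%, 57.13%, 46.63%).
overall = results.groupby("tool")["correct"].mean().mul(100).round(2)
print(overall)

# Accuracy broken down by question format, which is how weaknesses such as
# numerical problem-solving become visible.
by_format = (
    results.pivot_table(index="tool", columns="format", values="correct", aggfunc="mean")
    .mul(100)
    .round(2)
)
print(by_format)
```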