The rapid proliferation and adoption of generative Artificial Intelligence (GenAI) underscore its ease of use. However, there has been limited research exploring what constitutes proficient use of GenAI and the competencies that underpin it. In this study, we used semi-structured interviews to explore how twenty-five expert users (all knowledge workers) define, exemplify and explain proficiency. A purposive sampling approach was adopted with the aim of capturing input from experts across a range of occupations and sectors towards answering three questions. First, can we identify characteristics that differentiate (more effective) use of GenAI? Second, what competencies are seen to underlie it? Third, what benefits are associated with more proficient use of GenAI tools? Analysis of the descriptions shared by the experts revealed four aspects of proficiency: effective prompting, informed and responsible choices, diversity and complexity of use, and frequency of use. In addition, the following themes emerged from the analysis as supporting proficient use of GenAI: literacy, domain expertise, communication skills, metacognition, curiosity and inquisitiveness, flexibility and adaptability, diligence and (in some contexts) information technology skills. More proficient use was associated with benefits ranging from improved productivity to higher quality output and more original work. By offering a comprehensive framework for proficient use of GenAI, grounded in real-world experience, the study guides further research and substantiates the continuing relevance of human mindsets when working with these tools.
European Journal of Education, Journal Year: 2025, Volume and Issue: 60(1), Published: Jan. 7, 2025
This study examines pre‐service teachers' understanding of technology integration and the role of AI tools in shaping this perspective. Open‐ended responses, analysed using topic modelling, reveal the main themes in the teachers' views and compare them with topics generated by AI tools like ChatGPT, Gemini, and Bing AI. Key themes in the responses include improving learning quality, adapting to technology, and integrating it into education. ChatGPT highlights effective learning, student support, and educational technology, while Gemini emphasises accessibility, innovative methods, and AI‐supported learning. Bing AI focuses on practical materials, digital experiences, and technological compatibility. Coherence scores show moderate alignment, with ChatGPT achieving the highest scores, followed by Gemini. These findings shed light on pre‐service teachers' perceptions and on how AI tools can influence these views, offering insights for future policies and practices.
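
The topic-modelling comparison summarised above rests on two standard steps: fitting a topic model to the open-ended responses and scoring the resulting topics with a coherence measure. The paper does not specify its implementation, so the following is only a minimal Python sketch of that general workflow using gensim; the tokenised responses, number of topics, and coherence measure (u_mass) are placeholder assumptions, not details from the study.

# Minimal sketch (assumed workflow, not the study's code): fit a topic
# model to open-ended survey responses and score topic coherence.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# Tokenised open-ended responses (placeholder data).
texts = [
    ["improve", "learning", "quality", "with", "technology"],
    ["adapt", "teaching", "to", "new", "digital", "tools"],
    ["integrate", "ai", "tools", "into", "classroom", "education"],
]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

# Fit an LDA topic model; the number of topics is a modelling choice.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3, random_state=0)

# Coherence quantifies how well each topic's top words co-occur in the
# responses; comparable scores can be computed for topic lists produced
# by ChatGPT, Gemini, or Bing AI to judge alignment with the human data.
coherence = CoherenceModel(model=lda, corpus=corpus,
                           dictionary=dictionary, coherence="u_mass")
print("topic coherence (u_mass):", coherence.get_coherence())

A comparable score for AI-generated topic lists can be obtained by passing those word lists to CoherenceModel through its topics argument instead of a fitted model.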
Trends in Higher Education, Journal Year: 2025, Volume and Issue: 4(1), P. 6 - 6, Published: Jan. 27, 2025
Most of today’s educators are in no shortage of digital and online learning technologies available at their fingertips, ranging from Learning Management Systems such as Canvas, Blackboard, or Moodle, to meeting tools, homework and tutoring systems, exam proctoring platforms, computer simulations, and even virtual reality/augmented reality technologies. Furthermore, with the rapid development and wide availability of generative artificial intelligence (GenAI) services such as ChatGPT, we are just beginning to harness their potential to transform higher education. Yet, facing the large number of options provided by cutting-edge technology, an imminent question on the mind of most educators is the following: how should I choose and integrate them into my teaching process so that they would best support student learning? We contemplate over these types of important and timely questions and share our reflections on evidence-based approaches to using these tools, drawing on a Self-regulated Engaged Framework that we have employed in physics education research and that can be valuable for other disciplines.
Región Científica, Journal Year: 2025, Volume and Issue: unknown, Published: Jan. 3, 2025
This academic work explores the use of generative AI through Chatbot GPT, Gemini, Copilot, and Meta AI in teaching customs and international trade law. The analysis was carried out with a particular focus on education about free trade agreements and the primary laws of Mexico. The study's main findings show that Copilot is a valuable tool for searching for specific information in the articles of trade agreements; this purpose was achieved by applying prompts to obtain the content in question. Likewise, favorable results were obtained in the cases of Chatbot GPT and Meta AI. On the other hand, Gemini showed unfavorable results because it only addressed the general topics requested and even provided erroneous information. These types of tools allow students to make more efficient searches and save time when studying. However, they can present errors or force them to delve deeper into the subject.
This study examines the feasibility and potential advantages of using large language models, in particular GPT-4o, to perform partial credit grading of large numbers of student written responses to introductory level physics problems. Students were instructed to write down verbal explanations of their reasoning process when solving one conceptual and two numerical calculation problems on exams. The responses were then graded according to a three-item rubric, with each item scored as binary (1 or 0). We first demonstrate that machine grading by GPT-4o, given no examples or reference answers, can reliably agree with human graders in 70%–80% of all cases, which is equal to or higher than the rate at which human graders agree with each other. Two methods are essential for achieving this accuracy: (i) adding explanation text that targets errors observed in initial gradings, and (ii) running the grading 5 times and taking the most frequent outcome. Next, we show that the variation in outcomes across the five attempts can serve as a confidence index. The index allows a human expert to identify ∼40% of potentially incorrect gradings by reviewing just the 10%–15% of responses with the highest variation. Finally, it is straightforward to use the grading results to generate clear and detailed messages. Those can be used as feedback to students, which will allow students to understand their grades and raise different opinions when necessary. Almost all messages generated were rated three or above on a five-point scale by instructors who had taught the course multiple times. The entire grading and message-generating process costs roughly $5 per 100 answers, which shows immense promise for automating labor-intensive grading through a combination of human input and supervision.
Published by the American Physical Society, 2025.
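
The two reliability techniques described in the last abstract, repeating each grading run and keeping the most frequent outcome, then flagging high-variation cases for human review, reduce to a few lines of code. The sketch below is an illustration of that idea only, not the paper's implementation: grade_once stands in for a single LLM grading call (a hypothetical function), the rubric is modelled as three binary items, and the review fraction is a placeholder matching the 10%–15% figure quoted above.

# Illustrative sketch (not the published implementation): majority-vote
# aggregation of repeated LLM gradings plus a variation-based confidence
# index for deciding which responses a human grader should re-check.
from collections import Counter
from typing import Callable, List, Tuple

RubricScore = Tuple[int, int, int]  # three binary rubric items (1 or 0)

def aggregate_gradings(
    response: str,
    grade_once: Callable[[str], RubricScore],  # hypothetical single grading call
    n_runs: int = 5,
) -> Tuple[RubricScore, float]:
    """Grade a response n_runs times; return the most frequent outcome
    and a variation score (fraction of runs that disagree with it)."""
    outcomes: List[RubricScore] = [grade_once(response) for _ in range(n_runs)]
    majority, count = Counter(outcomes).most_common(1)[0]
    variation = 1.0 - count / n_runs  # 0.0 = unanimous, higher = less confident
    return majority, variation

def select_for_review(variations: List[float], review_fraction: float = 0.15) -> List[int]:
    """Return indices of the responses with the highest variation, i.e. the
    small slice a human expert would re-grade by hand."""
    n_review = max(1, int(len(variations) * review_fraction))
    ranked = sorted(range(len(variations)), key=lambda i: variations[i], reverse=True)
    return ranked[:n_review]

In practice the number of runs and the review fraction would be tuned against observed agreement with human graders rather than fixed at the values quoted in the abstract.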