Generative Artificial Intelligence (GenAI) such as ChatGPT has elicited strong reactions from almost all stakeholders across the education system. Education-oriented and academic social media communities provide an important venue for these stakeholders to share experiences and exchange ideas about GenAI, which is constructive for developing human-centered policies. This study examines early user discussions, consisting of 725 Reddit threads posted between 06/2022 and 05/2023. Through natural language processing (NLP) and content analysis, we observe increasingly negative sentiment in the discussions and identify six main categories of student and faculty discourse on GenAI in education. These reflect concerns about academic integrity and AI's impact on the value of traditional education. Our analysis also highlights the additional workload imposed by new technologies. The findings suggest that dialogue within the community is critical and can mitigate sources of tension between students and faculty.
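The sentiment analysis described above can be illustrated with a minimal sketch. The study's actual NLP pipeline is not specified; the VADER analyzer from NLTK and the two example thread texts below are illustrative assumptions only.

```python
# Minimal sketch of sentiment scoring over Reddit-style thread text.
# The paper's actual pipeline is not specified; VADER and the example
# threads here are assumptions for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

threads = [
    "ChatGPT wrote my whole essay outline, honestly a lifesaver.",
    "Caught three students submitting AI-generated lab reports this week.",
]

analyzer = SentimentIntensityAnalyzer()
for text in threads:
    # compound ranges from -1 (most negative) to +1 (most positive)
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{score:+.3f}  {text}")
```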
Chemistry Teacher International,
Journal year: 2024, Issue: unknown
Published: Oct. 15, 2024
Abstract
This paper discusses the ethical considerations surrounding generative artificial intelligence (GenAI) in chemistry education, aiming to guide teachers toward responsible AI integration. GenAI, driven by advanced models like Large Language Models, has shown substantial potential for generating educational content. However, this technology's rapid rise has brought forth concerns regarding its general and educational use that require careful attention from educators. The UNESCO framework on GenAI in education provides a comprehensive overview of the controversies around these considerations, emphasizing human agency, inclusion, equity, and cultural diversity. Ethical issues include digital poverty, lack of national regulatory adaptation, use of content without consent, unexplainable models used to generate outputs, AI-generated content polluting the internet, a lack of understanding of the real world, reducing the diversity of opinions and further marginalizing already marginalized voices, and deep fakes. The paper delves into these eight controversies, presenting relevant examples to stress the need to evaluate GenAI critically. It emphasizes the importance of relating these issues to teachers' pedagogical knowledge and argues that GenAI usage must integrate pedagogical insights to prevent the propagation of biases and inaccuracies. The conclusion stresses the necessity of teacher training to effectively and ethically employ GenAI in teaching practices.
Applied Sciences,
Journal year: 2025, Issue: 15(2), pp. 631 - 631
Published: Jan. 10, 2025
Qualitative data analysis (QDA) tools are essential for extracting insights from complex datasets. This study investigates researchers' perceptions of the usability, user experience (UX), mental workload, trust, task complexity, and emotional impact of three tools: Taguette 1.4.1 (a traditional QDA tool), ChatGPT (GPT-4, December 2023 version), and Gemini (formerly Google Bard, contemporaneous version). Participants (N = 85), Master's students at the Faculty of Electrical Engineering and Computer Science with prior experience in UX evaluations and familiarity with AI-based chatbots, performed sentiment annotation tasks using these tools, enabling a comparative evaluation. The results show that the AI tools were associated with lower cognitive effort and more positive emotional responses compared to Taguette, which caused higher frustration, especially during cognitively demanding tasks. Among the tools, the best-performing AI chatbot achieved the highest usability score (SUS 79.03) and was rated positively for engagement. Trust levels varied, with perceived accuracy and confidence shaping users' preferences. Despite these differences, all tools performed consistently in identifying qualitative patterns. These findings suggest that AI-driven tools can enhance qualitative analysis experiences while emphasizing the need to align tool selection with specific user preferences.
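A usability score such as the SUS 79.03 reported above comes from the standard System Usability Scale calculation. The sketch below shows that calculation on a made-up set of answers; the participants' actual responses are not reproduced here.

```python
# Minimal sketch of how a System Usability Scale (SUS) score like the
# 79.03 reported above is computed from a 10-item questionnaire.
# The answers below are invented for illustration.

def sus_score(responses):
    """responses: list of 10 Likert answers (1-5), item 1 first.
    Odd-numbered items contribute (answer - 1), even-numbered items
    contribute (5 - answer); the sum is scaled by 2.5 to give 0-100."""
    assert len(responses) == 10
    total = 0
    for i, answer in enumerate(responses, start=1):
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5

# One hypothetical participant's answers for one tool
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```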
Computers,
Journal year: 2025, Issue: 14(2), pp. 52 - 52
Published: Feb. 5, 2025
The performance of Large Language Models, such as ChatGPT, generally increases with every new model release. In this study, we investigated to what degree different GPT models were able to solve the exams of three undergraduate courses on warehousing. We contribute to the discussion of ChatGPT's existing logistics knowledge, particularly in the field of warehousing. Both the free version (GPT-4o mini) and the premium version (GPT-4o) completed the warehousing exams using different prompting techniques (with and without role assignments as experts or students). o1-preview was also used (without a role assignment) for six runs. The tests were repeated several times, and a total of 60 tests were conducted and compared with the in-class results of students. The results show that ChatGPT passed 46 of these tests. The best run solved 93% of the exam correctly. Compared to the students from the respective semester, ChatGPT outperformed the students in one exam. In the other two exams, the students performed better on average than ChatGPT.
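The role-assignment prompting mentioned above can be sketched as follows. The study's exact prompts, exam questions, and API settings are not given; the system role and question below are hypothetical, and the call uses the standard OpenAI chat completions interface as an assumed setup.

```python
# Minimal sketch of the role-assignment prompting idea described above.
# The role text and exam question are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

role = "You are an expert in warehousing and logistics."  # or a student role, or omitted
question = "Explain the difference between a picker-to-parts and a parts-to-picker system."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the free-tier model used in the study
    messages=[
        {"role": "system", "content": role},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```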
BACKGROUND
Polycystic ovary syndrome (PCOS) is a prevalent condition requiring effective patient education, particularly in China. Large language models (LLMs) present a promising avenue for this. This two-phase study evaluates six LLMs for educating Chinese patients about PCOS. It assesses their capabilities in answering questions, interpreting ultrasound images, and providing instructions within a real-world clinical setting.
OBJECTIVE
We systematically evaluated six large models—Gemini 2.0 Pro, OpenAI o1, ChatGPT-4o, ChatGPT-4, ERNIE 4.0, and GLM-4—for use in gynecological medicine. We assessed their performance in several areas: answering questions from the Gynecology Qualification Examination, understanding and coping with polycystic ovary syndrome cases, writing patient instructions, and helping to solve patients' problems.
METHODS
A two-step evaluation method was used. First, we tested the models on 136 exam questions and 36 ultrasound images and then compared the results with those of medical students and residents. Six gynecologists rated the models' responses to 23 PCOS-related questions using a Likert scale, and a readability tool was used to review the content objectively. In the following phase, 40 patients with PCOS used the two selected systems, Gemini 2.0 Pro and OpenAI o1, and we compared them in terms of patient satisfaction, text readability, and professional evaluation.
RESULTS
During the initial phase of testing, o1 demonstrated impressive accuracy on the specialist examination questions, with the two leading models achieving rates of 93.63% and 92.40%, respectively. Their performance on the image diagnostic tasks was also noteworthy, with an accuracy of 69.44% for the better model and 53.70% for the other. Regarding the responses to patient questions, o1 significantly outperformed the other models in accuracy, completeness, practicality, and safety. However, its responses were notably more complex (average readability score 13.98, p = 0.003). The second-phase evaluation revealed that Gemini 2.0 Pro excelled in satisfaction (patient rating 3.45, p < 0.01; physician rating 3.35, p = 0.03), surpassing o1 (2.65 and 2.90), although it slightly lagged behind in completeness (3.05 vs. 3.50, p = 0.04).
CONCLUSIONS
This study reveals that large language models have considerable potential to address the issues faced by patients with PCOS and are capable of providing accurate and comprehensive responses. Nevertheless, their performance still needs to be strengthened so that they can balance clarity and comprehensiveness. In addition, the broader abilities of large models beyond analysis, especially the ability to handle regulation categories, should be improved to meet the demands of clinical practice.
CLINICALTRIAL
None
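The rating comparisons behind the reported p-values can be illustrated with a small sketch. The study does not state which statistical test was used; a Mann-Whitney U test on made-up Likert satisfaction ratings is shown here purely as an example of how two models' ratings might be compared.

```python
# Minimal sketch of comparing two models' Likert ratings.
# The test choice and the ratings below are assumptions for illustration.
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 satisfaction ratings from 10 patients per model
ratings_model_a = [4, 3, 4, 5, 3, 4, 4, 3, 4, 5]
ratings_model_b = [3, 2, 3, 3, 2, 3, 4, 2, 3, 3]

stat, p_value = mannwhitneyu(ratings_model_a, ratings_model_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```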
Journal of Baltic Science Education,
Journal year: 2025, Issue: 24(1), pp. 187 - 207
Published: Feb. 25, 2025
As the development and application of large language models (LLMs) in physics education progress, the well-known AI-based chatbot ChatGPT4 has presented numerous opportunities for educational assessment. Investigating the potential of AI tools for practical assessment carries profound significance. This study explored the comparative performance of ChatGPT4 and human graders in scoring upper-secondary physics essay questions. Eighty students' responses to two essay questions were evaluated by 30 pre-service physics teachers and ChatGPT4. The analysis highlighted their consistency and accuracy, including intra-human comparisons, GPT grading at different times, and human-GPT variations across cognitive categories. The intraclass correlation coefficient (ICC) was used to assess consistency, while accuracy was illustrated through Pearson correlations with expert scores. The findings reveal that ChatGPT4 demonstrated higher consistency in scoring, while human scorers showed superior accuracy in most instances. These results underscore the strengths and limitations of using LLMs in educational assessments. Their high consistency can be valuable for standardizing assessments across diverse contexts, while human graders' nuanced understanding and flexibility are irreplaceable for handling complex and subjective evaluations.
Keywords: Physics essay question assessment, GPT grader, Human graders.
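The consistency and accuracy measures named in the abstract above, ICC and Pearson correlation with expert scores, can be computed as in the following sketch. The scores are invented for illustration and the pingouin library is an assumed choice; the study's own data and tooling are not reproduced.

```python
# Minimal sketch of the consistency/accuracy measures named above:
# ICC for inter-rater consistency and Pearson r against expert scores.
# All scores below are made up for illustration.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Hypothetical scores for 5 student essays from two graders
scores = pd.DataFrame({
    "essay":  [1, 2, 3, 4, 5] * 2,
    "grader": ["human"] * 5 + ["gpt"] * 5,
    "score":  [7, 5, 9, 6, 8,   8, 5, 9, 7, 8],
})

# Consistency: intraclass correlation coefficient across graders
icc = pg.intraclass_corr(data=scores, targets="essay", raters="grader", ratings="score")
print(icc[["Type", "ICC"]])

# Accuracy: Pearson correlation of one grader's scores with expert scores
expert = [7, 6, 9, 6, 8]
gpt = [8, 5, 9, 7, 8]
r, p = pearsonr(gpt, expert)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```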
E-Journal of Humanities Arts and Social Sciences,
Journal year: 2025, Issue: unknown, pp. 362 - 376
Published: March 28, 2025
The purpose of this paper was to investigate the facilitation experiences of e-tutors who were assigned to teach modules through a Learning Management System (LMS). The article employed an interpretivist paradigm with a quantitative survey method for e-tutors to articulate their impressions about how the LMS leverages them to become experts in their modules. Constructivism learning theory served as the lens for the paper. Quantitative analysis was used to collect accounts from five e-tutors, and the data were arranged and presented in tables. The e-tutor samples were based on criteria set during their appointment by the case institution. It was found that e-tutors cannot facilitate effectively with the LMS. It is recommended that e-tutors should be trained to be able to promote teaching using different module courses. The study contributes to the growing literature on ODeL, the e-tutoring model, and student support.
Keywords: Open Distance e-Learning, Learning Management System.
Language and Semiotic Studies,
Journal year: 2025, Issue: unknown
Published: April 4, 2025
Abstract
AI-mediated academic writing calls for new pedagogical approaches to the application of prompt engineering in writing courses. Whereas previous studies mainly inform students about prompting techniques, little is known about how prompting functions from the perspective of meaning negotiation between humans and generative AI. This paper explores the integration of the Pan-indexical process of linguistic signs into a prompt-based teaching model (PBTM), emphasizing its potential to facilitate meaning negotiation in prompting during the early stage of academic writing. The PBTM consists of four key components: encyclopedic knowledge, contextual information, evaluative critical thinking, and iterative prompt design. At its core lies the idea that prompt development is organized around four major steps: crafting an initial prompt; refining it with contextual information; engaging in evaluative critical thinking; and iterative progression toward the desired response. The paper suggests that Pan-indexical linguistics can be employed to enhance students' ability in the optimization of prompts through a deeper understanding of how AI can support their writing process.
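The four-step prompt progression described above can be loosely illustrated in code. The model, prompts, and refinement text below are hypothetical; this is only a sketch of an iterative prompting loop, not the authors' teaching model itself.

```python
# Minimal sketch of an iterative prompt progression
# (initial prompt -> contextual refinement -> critical evaluation -> iteration).
# The model choice and all prompt texts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(history):
    # single chat-completion call over the running message history
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    return reply.choices[0].message.content

# Step 1: craft an initial prompt
history = [{"role": "user",
            "content": "Draft a thesis statement on AI-mediated academic writing."}]
draft = ask(history)

# Step 2: refine the prompt with contextual information
history += [{"role": "assistant", "content": draft},
            {"role": "user",
             "content": "Narrow it to first-year EFL students and name the writing stage it targets."}]
refined = ask(history)

# Steps 3-4: the student evaluates `refined` critically and keeps
# iterating with further prompts until the response fits their aim.
print(refined)
```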