SSRN Electronic Journal,
Journal year: 2023, Issue: unknown
Published: Jan. 1, 2023
The advent of advanced generative AI marks a pivotal moment in psychological science and academia at large. This commentary advocates for leading organizations, such as the American Psychological Association (APA) and the Association for Psychological Science (APS), to spearhead comprehensive ethical guidelines for generative AI use in research and publishing. We argue that generative AI should be permitted—and indeed encouraged—to augment human knowledge generation and dissemination, serving as a scholarly aid. Properly regulated, generative AI can enhance productivity, creativity, and discovery without compromising rigor or integrity. However, key issues of attribution, transparency, reproducibility, and preventing misuse necessitate clear standards and oversight. We examine appropriate attribution of AI contributions to authorship, effective documentation practices to ensure reproducibility, and safeguards against potential misuse. We call for nuanced guidelines—not blanket prohibition—to responsibly integrate generative AI into research, and put forth specific recommendations for transparency and reproducibility.

Journal of Research on Technology in Education,
Journal year: 2025, Issue: unknown, pp. 1-19
Published: Jan. 9, 2025
Ongoing advancements in generative AI (GenAI) have boosted the potential of applying long-standing "learning-by-teaching" practices in the form of a teachable agent (TA). Despite the recognized roles of and opportunities for TAs, less is known about how GenAI could create synergy or introduce challenges for TAs, and how students perceive the application of GenAI-powered TAs. This study explored middle school students' perceptions of the roles and benefits of a GenAI-powered TA in an authentic mathematics classroom. Through classroom observation, focus-group interviews, and open-ended surveys with 108 sixth-grade students, we found that students expected the TA to serve as a learning companion, a facilitator, and a collaborative problem-solver. Students also expressed the benefits they perceived. The study provides implications for the design of educational TAs and AI-assisted instruction.

British Journal of Educational Technology,
Journal year: 2024, Issue: 55(5), pp. 1974-1981
Published: July 10, 2024
Abstract
A key goal of educational institutions around the world is to provide inclusive, equitable quality education and lifelong learning opportunities for all learners. Achieving this requires contextualized approaches that accommodate diverse global values and promote opportunities that best meet the needs and goals of learners as individuals and as members of different communities. Advances in learning analytics (LA), natural language processing (NLP), and artificial intelligence (AI), especially generative AI technologies, offer the potential to aid decision making by supporting analytic insights and personalized recommendations. However, these technologies also raise serious risks of reinforcing or exacerbating existing inequalities; these dangers arise from multiple factors, including biases represented in training datasets, the technologies' abilities to take autonomous decisions, and tool development processes that do not centre the concerns of historically marginalized groups. To ensure that Educational Decision Support Systems (EDSS), particularly AI-powered ones, are equipped to advance equity, they must be created and evaluated holistically, considering both their targeted and systemic impacts on learners. Adopting a socio-technical and cultural perspective is crucial for designing, deploying, and evaluating AI-EDSS that truly advance equity and inclusion. This editorial introduces the contributions of five papers in the special section on advancing equity and inclusion practices with AI-EDSS. These papers focus on (i) a review of large language model (LLM) applications that offers practical guidelines for their evaluation, (ii) techniques to mitigate disparities across countries and languages in LLMs' representation of educationally relevant knowledge, (iii) implementing intersectionality-aware machine learning in education, (iv) introducing an LA dashboard that aims to advance institutional equality, diversity, and inclusion, and (v) supporting vulnerable students' digital well-being. Together, they underscore the importance of an interdisciplinary approach to developing and utilizing AI-EDSS that not only fosters a more inclusive education landscape worldwide but also reveals a critical need for broader contextualization, one that incorporates questions of what kinds of decisions AI is being used to support, for what purposes, and whose concerns are prioritized in the process.

BioMedInformatics,
Journal year: 2025, Issue: 5(1), p. 15
Published: March 11, 2025
Large language models (LLMs) have emerged as powerful tools for (semi-)automating the initial screening of abstracts in systematic reviews, offering the potential to significantly reduce the manual burden on research teams. This paper provides a broad overview of prompt engineering principles and highlights how traditional PICO (Population, Intervention, Comparison, Outcome) criteria can be converted into actionable instructions for LLMs.
We analyze the trade-offs between “soft” prompts, which maximize recall by accepting articles unless they explicitly fail an inclusion requirement, and “strict” prompts, which demand explicit evidence for every criterion.
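To make the contrast concrete, here is a minimal sketch of how PICO criteria might be turned into soft and strict screening prompts. The criteria, wording, and helper function below are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch: converting PICO criteria into "soft" and "strict"
# screening prompts. The criteria and wording are illustrative only.

PICO = {
    "Population": "adults with periodontitis",
    "Intervention": "non-surgical periodontal therapy",
    "Comparison": "placebo or no treatment",
    "Outcome": "change in probing pocket depth",
}

def build_prompt(pico: dict, mode: str) -> str:
    criteria = "\n".join(f"- {name}: {value}" for name, value in pico.items())
    if mode == "soft":
        # Maximize recall: keep anything not explicitly excludable.
        rule = ("INCLUDE the abstract unless it explicitly fails one of the "
                "criteria below. If a criterion is not mentioned, assume it "
                "may still be satisfied.")
    else:
        # Maximize precision: demand explicit evidence for every criterion.
        rule = ("INCLUDE the abstract only if it gives explicit evidence for "
                "EVERY criterion below; otherwise EXCLUDE it.")
    return ("You are screening abstracts for a systematic review.\n"
            f"{rule}\nCriteria:\n{criteria}\n"
            "Answer with exactly one word: INCLUDE or EXCLUDE.")

soft_prompt = build_prompt(PICO, "soft")
strict_prompt = build_prompt(PICO, "strict")
```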
Using a periodontics case study, we illustrate how prompt design affects recall, precision, and overall efficiency, and discuss metrics (accuracy, F1 score) to evaluate performance.
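For reference, these metrics follow directly from the confusion counts of LLM decisions against human screening labels. A small sketch with invented counts:

```python
# Screening metrics from confusion counts: TP = relevant abstracts the LLM
# included, FP = irrelevant ones included, FN = relevant ones excluded.
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    recall = tp / (tp + fn)        # share of relevant abstracts retained
    precision = tp / (tp + fp)     # share of retained abstracts that are relevant
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "accuracy": accuracy, "f1": f1}

# Invented numbers: a soft prompt typically trades precision for recall.
print(screening_metrics(tp=48, fp=120, tn=820, fn=2))    # soft-style outcome
print(screening_metrics(tp=40, fp=15, tn=925, fn=10))    # strict-style outcome
```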
We also examine common pitfalls, such as overly lengthy prompts or ambiguous instructions, and underscore the continuing need for expert oversight to mitigate the hallucinations and biases inherent in LLM outputs. Finally, we explore emerging trends, including multi-stage pipelines and fine-tuning, while noting ethical considerations related to data privacy and transparency.
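A multi-stage pipeline of this kind could, for instance, chain a high-recall soft pass with a strict pass over the survivors. The sketch below assumes the two prompts from the earlier example and a placeholder `ask_llm` function standing in for an actual model call:

```python
# Hypothetical two-stage pipeline: a lenient first pass, then a strict
# second pass on the survivors. ask_llm() is a placeholder, not a real API.
def ask_llm(prompt: str, abstract: str) -> str:
    raise NotImplementedError("plug in your model client here")

def screen(abstracts: list[str], soft_prompt: str, strict_prompt: str) -> list[str]:
    # Stage 1: high recall; keep anything the soft prompt does not reject.
    survivors = [a for a in abstracts
                 if ask_llm(soft_prompt, a).strip().upper() == "INCLUDE"]
    # Stage 2: high precision; re-screen survivors with the strict prompt.
    return [a for a in survivors
            if ask_llm(strict_prompt, a).strip().upper() == "INCLUDE"]
```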
By applying rigorous evaluation, researchers can optimize LLM-based screening processes, allowing faster and more comprehensive evidence synthesis across biomedical disciplines.

Research Square (Research Square),
Journal year: 2025, Issue: unknown
Published: March 26, 2025
Abstract
To assist Chinese language teachers in making evidence-based choices of useful and user-friendly domestic large language models for teaching and research, the study took 132 objective questions from national college entrance examination papers from 2021 to 2023 as a data set to assess the performance of six models, namely Tongyi Qianwen, GLM-4, KimiChat, Baichuan, Wenxin Yiyan, and Xunfei Spark, in semantic understanding. The assessment revealed that the overall correct rates of the six models' responses were 70%, 69%, 57%, 55%, 60%, and 62%, respectively.
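Correct rates of this kind reduce to straightforward tallying of model answers against the official key. A rough sketch, with invented records rather than the study's data:

```python
# Tallying correct rates per model and per question category.
# The records are invented placeholders, not the study's data.
from collections import defaultdict

# (model, question category, model's answer, answer key)
records = [
    ("Tongyi Qianwen", "language application", "B", "B"),
    ("Tongyi Qianwen", "classical Chinese reading", "A", "C"),
    ("GLM-4", "ancient poetry reading", "D", "D"),
    # ... one row per model per question (132 questions per model in the study)
]

totals, correct = defaultdict(int), defaultdict(int)
for model, category, given, key in records:
    for bucket in (model, (model, category)):
        totals[bucket] += 1
        correct[bucket] += (given == key)

for bucket in sorted(totals, key=str):
    print(f"{bucket}: {correct[bucket] / totals[bucket]:.0%} correct")
```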
Among them, Tongyi Qianwen and Xunfei Spark performed best on language application questions, with 74% each; GLM-4 performed best on ancient poetry reading and modern text reading, reaching 92% and 77% respectively, though its performance on classical Chinese was not ideal. For the wrongly answered test questions, the researchers corrected and analyzed the answers using a prompt strategy. Finally, the paper puts forward several suggestions for promoting LLM assistance in Chinese language teaching and research.

Information,
Journal year: 2025, Issue: 16(5), p. 358
Published: April 29, 2025
Large language models (LLMs) have revolutionized natural language processing across diverse domains, yet they also raise critical fairness and ethical concerns, particularly regarding gender bias. In this study, we conduct a systematic, mathematically grounded investigation of gender bias in four leading LLMs—GPT-4o, Gemini 1.5 Pro, Claude Sonnet 3.5, and LLaMA 3.1:8b—by evaluating the gender distributions produced when generating “perfect personas” for a wide range of occupational roles spanning healthcare, engineering, and professional services. Leveraging standardized prompts, controlled experimental settings, and repeated trials, our methodology quantifies bias against an ideal uniform distribution using rigorous statistical measures and information-theoretic metrics.
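As one plausible reading of that setup, the gender distribution observed over repeated persona generations can be scored against a uniform baseline with standard divergence measures. The counts below and the choice of KL divergence and total variation distance are assumptions for illustration, not the paper's exact procedure:

```python
# Scoring an observed gender distribution against the ideal uniform one.
# Counts are invented; KL divergence and total variation distance stand in
# for the paper's unspecified statistical/information-theoretic measures.
import math

def deviation_from_uniform(counts: dict) -> dict:
    n = sum(counts.values())
    k = len(counts)
    p = [c / n for c in counts.values()]   # observed distribution
    u = 1.0 / k                            # ideal uniform probability
    kl = sum(pi * math.log2(pi / u) for pi in p if pi > 0)   # in bits
    tv = 0.5 * sum(abs(pi - u) for pi in p)
    return {"KL_bits": kl, "total_variation": tv}

# e.g. 100 "perfect persona" generations for a single occupation:
print(deviation_from_uniform({"female": 83, "male": 15, "nonbinary": 2}))
```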
Our results reveal marked discrepancies: GPT-4o exhibits pronounced occupational gender segregation, disproportionately linking healthcare to female identities while assigning male labels to engineering and physically demanding positions. In contrast, Gemini 1.5 Pro, Claude Sonnet 3.5, and LLaMA 3.1:8b predominantly favor female assignments, albeit with less job-specific precision. These findings demonstrate how architectural decisions, training data composition, and token embedding strategies critically influence gender representation. The study underscores the urgent need for inclusive datasets, advanced bias-mitigation techniques, and continuous model audits to develop AI systems that are not only free from stereotype perpetuation but actively promote equitable and representative information processing.