Batı Anadolu Eğitim Bilimleri Dergisi,
Journal Year:
2025,
Volume and Issue:
16(1), P. 418 - 434
Published: Feb. 10, 2025
Artificial Intelligence (AI) technology, which is present in many areas of life, has also gained a place in education. It is important to identify and encourage the AI literacy of students, who are a part of the education system. This requires measurement tools that help assess it effectively. This study aimed to develop a valid and reliable AI Literacy Scale (YZOÖ). The participants were middle school students studying in two different provinces of Türkiye. The scale was tested through exploratory and confirmatory factor analyses (EFA and CFA). The EFA results revealed that the scale consists of 15 items under three factors: Knowing-Understanding, Applying-Evaluating, and Ethics. This structure, which explains 60.92% of the total variance, was confirmed through first-order and second-order multi-factor models. The Cronbach's alpha values of the subdimensions of the validated YZOÖ and the test-retest results indicated high reliability. The developed scale fills an important gap in the Turkish literature and offers researchers a validated instrument for measuring AI literacy.
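The Cronbach's alpha reliability reported above can be illustrated with a short computation. A minimal sketch in Python of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the response matrix below is invented for illustration, not data from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 6 students to a 3-item subscale (1-5 Likert)
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 3))  # → 0.951
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the sense in which the abstract reports "high reliability" for the subdimensions.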
Journal of Educational Computing Research,
Journal Year:
2024,
Volume and Issue:
62(7), P. 1675 - 1704
Published: July 18, 2024
The rapid evolution of AI technologies has reshaped our daily lives. As AI systems become increasingly prevalent, AI literacy, the ability to comprehend and engage with these technologies, becomes paramount in modern society. However, existing research has yet to establish a comprehensive framework for AI literacy. This study aims to fill this gap by developing a holistic AI literacy scale. Three levels of dimensions are considered: individual, interactive, and sociocultural. The scale includes cognitive, behavioral, and normative competencies. After rigorous reliability and validity assessments, the final scale comprises six dimensions: AI features, AI processing, algorithm influences, user efficacy, ethical consideration, and threat appraisal. The scale's development, validation, and dimension-specific items are thoroughly explained. The scale equips individuals with the competencies needed to navigate critically today's multifaceted AI landscape.
Education Sciences,
Journal Year:
2023,
Volume and Issue:
13(10), P. 978 - 978
Published: Sept. 26, 2023
A growing number of courses seek to increase the basic artificial-intelligence skills ("AI literacy") of their participants. At this time, there is no valid and reliable measurement tool that can be used to assess AI-learning gains. However, the existence of such a tool would be important to enable quality assurance and comparability. In this study, a validated AI-literacy-assessment instrument, the "scale for the assessment of non-experts' AI literacy" (SNAIL), was adapted and used to evaluate an undergraduate course. We investigated whether the scale reliably measures learning gains and whether mediator variables, such as attitudes toward AI or participation in other courses, had an influence on those gains. In addition to traditional mean comparisons (i.e., t-tests), the comparative self-assessment (CSA) gain was calculated, which allowed a more meaningful assessment of gains in AI literacy. We found preliminary evidence that the SNAIL questionnaire enables valid course evaluation. In particular, distinctions among different subconstructs and the differentiation from related constructs, such as attitudes toward AI, seem possible with the help of the questionnaire.
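The two analyses named above, a paired t-test and a CSA-style gain, can be sketched briefly. This is a minimal illustration, not the study's exact procedure: the gain formula below is a normalized-gain style definition, (post − pre) / (max − pre), assuming a Likert scale where higher values mean greater self-assessed competence, and the pre/post ratings are invented:

```python
import numpy as np
from scipy.stats import ttest_rel

def csa_gain(pre: np.ndarray, post: np.ndarray, scale_max: float) -> float:
    """Class-level gain as a percentage of the maximum possible improvement.

    Normalized-gain style sketch; the exact CSA formula used in the
    study may differ.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return 100.0 * (post.mean() - pre.mean()) / (scale_max - pre.mean())

# Hypothetical pre/post self-ratings (1-7 scale) for eight participants
pre = np.array([2, 3, 2, 4, 3, 2, 3, 4])
post = np.array([4, 5, 4, 6, 5, 3, 5, 6])

gain_pct = csa_gain(pre, post, scale_max=7)   # → about 45.5 (%)
t_stat, p_value = ttest_rel(post, pre)        # traditional mean comparison
print(round(gain_pct, 1), round(p_value, 4))
```

Relating the raw pre/post difference to the remaining headroom is what makes the gain "more meaningful" than a bare mean comparison: participants who start near the top of the scale cannot show a large raw difference even with a real learning effect.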
The integration of Large Language Models (LLMs) with Conversational User Interfaces (CUIs) has significantly transformed health information seeking, offering interactive access to health resources. Despite the importance of trust in adopting health advice, the impact of user interfaces on the perception of LLM-provided information remains unclear. Our mixed-methods study investigated how different CUIs (text-based, speech-based, and embodied) influence trust when using an identical LLM source. Key findings include (a) higher trust levels in information delivered via the text-based interface compared to the others; (b) a significant correlation between trust and adoption of the advice provided; (c) participants' prior experience, their processing approach for different modalities and presentation styles, and the perceived usability level were key determinants of trust in health-related information. Our study sheds light on perceptions of health information obtained from LLMs and its dissemination, underscoring the need for trustworthy and effective health information seeking with LLM-powered CUIs.
Media Education,
Journal Year:
2024,
Volume and Issue:
15(1), P. 91 - 101
Published: June 12, 2024
This scoping review explores the field of artificial intelligence (AI) literacy, focusing on tools available for evaluating individuals' self-perception of their AI literacy. In an era where AI technologies increasingly infiltrate various aspects of daily life, from healthcare diagnostics to personalized digital platforms, the need for a comprehensive understanding of AI literacy has never been more critical. AI literacy extends beyond mere technical competence to include ethical considerations, critical thinking, and socio-emotional skills, reflecting the complex interplay between AI technologies and societal norms. The review synthesizes findings from diverse studies, highlighting the development and validation processes of several key instruments designed to measure AI literacy across different dimensions. These instruments, ranging from the Artificial Intelligence Literacy Questionnaire (AILQ) to the General Attitudes towards Artificial Intelligence Scale (GAAIS), embody the multifaceted nature of AI literacy, encompassing affective, behavioral, cognitive, and ethical components. Each instrument offers unique insights into how individuals perceive their abilities to understand, engage with, and ethically apply AI technologies. By examining these assessment tools, the review sheds light on the current landscape of AI literacy measurement, underscoring its importance in educational strategies, personal growth, and informed decision-making. The findings suggest interventions and policy formulations that address gaps between perceived and actual AI literacy, promoting inclusive, critically aware, and competent engagement with AI technologies.