interactions,
Journal year: 2024, Issue 31(6), pp. 44-49
Published: Oct. 29, 2024
The prefrontal cortex (PFC) is central to flexible, goal-directed cognition, and understanding its representational code is an important problem in cognitive neuroscience. In humans, multivariate pattern analysis (MVPA) of fMRI blood oxygenation level-...
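The abstract is cut off at this point in the source. As a rough orientation to the technique it names, below is a minimal sketch of MVPA-style decoding: a linear classifier trained on voxel activity patterns, with above-chance cross-validated accuracy taken as evidence that the patterns encode a task variable. The data, dimensions, and pipeline here are illustrative assumptions, not the study's actual analysis.

```python
# Minimal MVPA decoding sketch (illustrative only; not the paper's pipeline).
# Assumes preprocessed fMRI data arranged as a trials x voxels matrix.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500                   # hypothetical dimensions
X = rng.standard_normal((n_trials, n_voxels))   # stand-in for per-trial betas
y = rng.integers(0, 2, n_trials)                # two task conditions

# Linear classifier on voxel patterns, evaluated with 5-fold cross-validation.
# With this random stand-in data, accuracy hovers at chance (0.50).
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```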
2022 ACM Conference on Fairness, Accountability, and Transparency,
Journal year: 2024, Issue 86, pp. 1733-1744
Published: June 3, 2024
Machine learning systems require representations of the real world for training and testing - they need data, and lots of it. Collecting data at scale poses logistical and ethical challenges, and synthetic data promises a solution to these challenges. Instead of needing to collect photos of people's faces to train a facial recognition system, a model creator could create and use photo-realistic, synthetic faces. The comparative ease of generating this data, rather than relying on collecting it, has made synthetic data a common practice. We present two key risks of using synthetic data in model development. First, we detail the high risk of false confidence when synthetic data is used to increase dataset diversity and representation. We base this on an examination of a use-case where synthetic datasets were generated for an evaluation of facial recognition technology. Second, we examine how synthetic data risks circumventing consent for data usage. We illustrate this by considering the importance of the U.S. Federal Trade Commission's regulation of data collection for affected models. Finally, we discuss and exemplify how synthetic data complicates existing governance and ethical practice; by decoupling data from those it impacts, synthetic data is prone to consolidating power away from those most impacted by algorithmically-mediated harm.
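To make the first risk concrete, here is a toy sketch (my own illustration, not the paper's experiment) of how false confidence can arise: a generator fit on skewed real data produces "group B" samples that statistically resemble the majority group, so an evaluation on the synthetic group overstates real-world performance on that group.

```python
# Toy illustration of the "false confidence" risk of synthetic evaluation data.
import numpy as np

rng = np.random.default_rng(1)

# Real world: group A well represented, group B underrepresented, with
# different feature distributions.
real_A = rng.normal(0.0, 1.0, size=(900, 5))
real_B = rng.normal(3.0, 1.0, size=(100, 5))

# A generator fit almost entirely on group A data: its synthetic "group B"
# samples inherit group A statistics instead of group B's true distribution.
fit_data = np.vstack([real_A, real_B[:5]])
synthetic_B = rng.normal(fit_data.mean(axis=0), fit_data.std(axis=0),
                         size=(100, 5))

# A recognizer that only works near its training distribution (group A).
def recognizer_correct(x):
    return np.linalg.norm(x, axis=1) < 4.0   # "works" close to A's mean

print("apparent accuracy on synthetic group B:",
      recognizer_correct(synthetic_B).mean())   # high: looks like group A
print("actual accuracy on real group B:",
      recognizer_correct(real_B).mean())        # low: off-distribution
```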
ACM Transactions on Computing for Healthcare,
Journal year: 2025, Issue unknown
Published: Jan. 23, 2025
This article analyzes how well OpenAI's LLM GPT-4 can emulate different personalities and simulate populations to answer psychological questionnaires similarly to real population samples. For this purpose, we performed experiments with the Eysenck Personality Questionnaire-Revised Abbreviated (EPQR-A) in three languages (Spanish, English, Slovak). The EPQR-A measures personality on four scales: extraversion (E: sociability), neuroticism (N: emotional stability), psychoticism (P: tendency to break social rules, not having empathy), and lying (L: social desirability).
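For readers unfamiliar with questionnaires of this kind, scoring typically amounts to counting yes/no answers that match a per-scale key. The toy sketch below shows this; the items and key are invented for illustration and are not the real EPQR-A key.

```python
# Toy scoring sketch for a yes/no personality questionnaire (item numbers
# and keying invented for illustration; not the real EPQR-A key).
ANSWERS = {1: "yes", 2: "no", 3: "yes", 4: "yes"}   # item -> response
KEY = {
    "E": {1: "yes", 3: "yes"},    # extraversion-keyed items
    "N": {2: "yes", 4: "yes"},    # neuroticism-keyed items
}

def score(scale):
    """Count answers that match the scale's keyed direction."""
    return sum(ANSWERS[i] == keyed for i, keyed in KEY[scale].items())

for scale in KEY:
    print(scale, score(scale), "/", len(KEY[scale]))
```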
We perform a comparative analysis of the answers of synthetic subjects against those of two samples of Spanish students, as well as an unconditioned GPT baseline. Furthermore, the impact of time (what year the questionnaire is answered), language, and student age and gender is analyzed. To our knowledge, this is the first time this test has been used to assess GPT and its different language versions have been measured. Our analysis reveals that GPT exhibits an extroverted, emotionally stable personality with low levels of psychoticism and high social desirability. It replicates some of the differences observed in the student samples, but only partially reproduces the results for the real populations.
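A minimal sketch of the kind of persona-conditioned prompting such a study involves is shown below, using the OpenAI Python client. The model name, persona wording, and example item are assumptions for illustration, not the authors' protocol.

```python
# Persona-conditioned questionnaire prompting (illustrative sketch; my own
# minimal setup, not the authors' protocol). Requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

PERSONA = "You are a 20-year-old female psychology student from Spain."  # assumed
ITEM = "Does your mood often go up and down? Answer only YES or NO."     # example item

response = client.chat.completions.create(
    model="gpt-4",
    temperature=1.0,   # sampling noise stands in for population variability
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": ITEM},
    ],
)
print(response.choices[0].message.content)
```

Repeating such a call many times with varied personas yields a synthetic "sample" whose scale scores can then be compared against those of real respondents.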
As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is therefore important to understand how well LLMs make moral decisions and how they compare to humans. We investigated this question using realistic moral dilemmas and prompts in which GPT-4, Llama 3, and Claude 3 give advice or emulate a research participant. In Study 1, we compared LLM responses to those from a representative US sample (N = 285) for 22 dilemmas: social dilemmas that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost-benefit reasoning against deontological rules. In the social dilemmas, the LLMs were more altruistic than the participants. In the moral dilemmas, the LLMs exhibited a stronger omission bias than the participants: they usually endorsed inaction over action. In Study 2 (N = 490, preregistered), we replicated the omission bias and documented an additional bias: unlike humans, the LLMs (except GPT-4o) tended to answer "no", a bias whereby the phrasing of a question influences the decision even when the physical action remains the same. Our findings show that LLM decision-making amplifies human biases and introduces potentially problematic biases of its own.
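As an illustration of how a phrasing (yes/no) bias like this can be quantified, here is a small sketch with made-up responses: the same action is queried under two counterbalanced phrasings, and endorsement rates of the action are compared across them.

```python
# Quantifying a yes/no phrasing bias (made-up responses for illustration;
# not the authors' materials). An unbiased responder endorses the action
# at similar rates regardless of how the question is phrased.

# Phrasing A: "Should she do X?"        -> "yes" endorses the action.
# Phrasing B: "Should she refrain from X?" -> "no" endorses the action.
responses_A = ["no", "no", "yes", "no", "no", "no"]
responses_B = ["no", "no", "no", "no", "yes", "no"]

endorse_A = [r == "yes" for r in responses_A]   # action endorsed, phrasing A
endorse_B = [r == "no" for r in responses_B]    # action endorsed, phrasing B

print("P(endorse action | phrasing A):", sum(endorse_A) / len(endorse_A))
print("P(endorse action | phrasing B):", sum(endorse_B) / len(endorse_B))
# A large gap suggests decisions track the word "no" rather than the action.
```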
AStA Wirtschafts- und Sozialstatistisches Archiv,
Journal year: 2024, Issue 18(2), pp. 131-184
Published: June 1, 2024
Abstract
National Statistical Organizations (NSOs) increasingly draw on Machine Learning (ML) to improve the timeliness and cost-effectiveness of their products. When introducing ML solutions, NSOs must ensure that high standards with respect to robustness, reproducibility, and accuracy are upheld as codified, e.g., in the Quality Framework for Statistical Algorithms (QF4SA; Yung et al. 2022, Statistical Journal of the IAOS). At the same time, a growing body of research focuses on fairness as a pre-condition for the safe deployment of ML in order to prevent disparate social impacts in practice. However, fairness has not yet been explicitly discussed as a quality aspect in the context of ML application at NSOs. We employ the QF4SA quality framework and present a mapping of its quality dimensions to algorithmic fairness. We thereby extend the framework in several ways: First, we investigate the interaction of fairness with each of these quality dimensions. Second, we argue for fairness as its own, additional quality dimension, beyond what is contained in the QF4SA so far. Third, we emphasize and explicitly address data, both on their own and in their interaction with the applied methodology. Alongside empirical illustrations, we show how our mapping can contribute to methodology in the domains of official statistics, algorithmic fairness, and trustworthy machine learning.
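As one concrete example of the fairness quantities such a mapping deals with, here is a minimal sketch (my example, not taken from the paper) of the demographic parity difference, a common check for disparate impact between two groups.

```python
# Demographic parity difference between two groups (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 1000)   # protected attribute (0/1), synthetic
# Simulated model decisions with different positive rates per group.
pred = rng.random(1000) < np.where(group == 1, 0.35, 0.50)

rate_0 = pred[group == 0].mean()   # positive-decision rate, group 0
rate_1 = pred[group == 1].mean()   # positive-decision rate, group 1
print(f"demographic parity difference: {abs(rate_0 - rate_1):.3f}")
# Near 0 means both groups receive positive decisions at similar rates;
# a large gap can flag disparate impact worth investigating.
```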
Little to no prior knowledge of ML, fairness, or official statistics is required, as we provide introductions to these subjects. These introductions are also targeted at the discussion