Psychological Science, Journal Year: 2021, Volume and Issue: 32(5), P. 668 - 681. Published: April 16, 2021
Social cohesion relies on prosociality in increasingly aging populations. Helping other people requires effort, yet how willing people are to exert effort to benefit themselves and others, and whether such behaviors shift across the life span, is poorly understood. Using computational modeling, we tested the willingness of 95 younger adults (18–36 years old) and 92 older adults (55–84 years old) to put physical effort into self- and other-benefiting acts. Participants chose whether to work and exert force (30%–70% of maximum grip strength) for rewards (2–10 credits) accrued for themselves or, prosocially, for another person. Younger adults were somewhat selfish, choosing to work more at higher effort levels to benefit themselves, and exerted less force into prosocial work. Strikingly, compared with younger adults, older adults were more willing to put effort into helping others and exerted equal force for themselves and others. Increased prosociality in older adults has important implications for human behavior and societal structure.
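The abstract does not specify the model's functional form; the following is a minimal sketch of a standard effort-discounting choice model (a parabolic effort cost and a logistic choice rule, both assumptions on our part, not the paper's exact specification), fitted by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical effort-discounting model (an assumption, not the paper's exact
# specification): the subjective value of working is the reward minus an
# effort cost that grows parabolically, SV = R - k * E**2, and the
# probability of choosing to work is a logistic function of SV.

def neg_log_likelihood(params, effort, reward, chose_work):
    k, beta = params
    sv = reward - k * effort**2                  # subjective value of working
    p_work = 1.0 / (1.0 + np.exp(-beta * sv))    # logistic choice rule
    p = np.clip(p_work, 1e-9, 1 - 1e-9)
    return -np.sum(chose_work * np.log(p) + (1 - chose_work) * np.log(1 - p))

# Simulated example data: effort as a proportion of maximum grip strength,
# reward in credits, mirroring the ranges described in the abstract.
rng = np.random.default_rng(0)
effort = rng.uniform(0.3, 0.7, 200)
reward = rng.integers(2, 11, 200).astype(float)
sv_true = reward - 20.0 * effort**2
chose_work = (rng.random(200) < 1 / (1 + np.exp(-1.0 * sv_true))).astype(float)

fit = minimize(neg_log_likelihood, x0=[1.0, 1.0],
               args=(effort, reward, chose_work),
               bounds=[(0.0, 100.0), (0.01, 20.0)])
print("estimated k (effort discounting), beta (choice noise):", fit.x)
```

Comparing the fitted k between self- and other-benefiting trials, and between age groups, is the kind of analysis such a model supports.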
Computational modeling of behavior has revolutionized psychology and neuroscience. By fitting models to experimental data, we can probe the algorithms underlying behavior, find neural correlates of computational variables, and better understand the effects of drugs, illness, and interventions. But with great power comes great responsibility. Here, we offer ten simple rules to ensure that computational modeling is used with care and yields meaningful insights. In particular, we present a beginner-friendly, pragmatic, and details-oriented introduction on how to relate models to data. What, exactly, can a model tell us about the mind? To answer this, we apply our rules to the simplest modeling techniques that are most accessible to beginning modelers and illustrate them with examples and code available online. However, the rules also apply to more advanced techniques. Our hope is that by following these guidelines, researchers will avoid many pitfalls and unleash the power of computational modeling on their own datasets.
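As a concrete example of the beginner-level workflow such rules target, here is a minimal sketch (our illustration, not the paper's own code) of simulating a Rescorla-Wagner learner and recovering its parameters by maximum likelihood, the kind of sanity check typically recommended before fitting real data:

```python
import numpy as np
from scipy.optimize import minimize

def nll_rescorla_wagner(params, choices, rewards, n_options=2):
    """Negative log-likelihood of choices under a Rescorla-Wagner/softmax model."""
    alpha, beta = params
    q = np.zeros(n_options)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.sum(np.exp(beta * q))  # softmax policy
        nll -= np.log(p[c] + 1e-12)
        q[c] += alpha * (r - q[c])                       # prediction-error update
    return nll

# Simulate an agent with known parameters, then try to recover them --
# a basic parameter-recovery check.
rng = np.random.default_rng(1)
alpha_true, beta_true, p_reward = 0.3, 4.0, [0.2, 0.8]
q, choices, rewards = np.zeros(2), [], []
for _ in range(500):
    p = np.exp(beta_true * q) / np.sum(np.exp(beta_true * q))
    c = rng.choice(2, p=p)
    r = float(rng.random() < p_reward[c])
    q[c] += alpha_true * (r - q[c])
    choices.append(c); rewards.append(r)

fit = minimize(nll_rescorla_wagner, x0=[0.5, 1.0],
               args=(choices, rewards), bounds=[(0.01, 0.99), (0.1, 20.0)])
print("recovered alpha, beta:", fit.x)
```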
Psychological Inquiry, Journal Year: 2020, Volume and Issue: 31(4), P. 271 - 288. Published: Oct. 1, 2020
The applied social science literature using factor and network models continues to grow rapidly. Most of this work reads like an exercise in model fitting, and falls short of theory building and testing in three ways. First, statistical and theoretical models are conflated, leading to invalid inferences, such as inferring the existence of psychological constructs from factor models, or deriving recommendations for clinical interventions from network models. I demonstrate this inferential gap in a simulation: excellent model fit does little to corroborate a theory, regardless of the quality and quantity of the data. Second, researchers fail to explicate their theories about constructs, but use implicit causal beliefs to guide inferences. These latent theories have led to problematic best practices. Third, explicated theories are often weak theories: imprecise descriptions vulnerable to hidden assumptions and unknowns. Such theories do not offer precise predictions, and it is unclear whether observed effects actually corroborate them or not. I show that these challenges are common and harmful, and that they impede theory formation, theory failure, and scientific reform. Matching statistical and theoretical models is necessary to bring data to bear on theories, and a renewed focus in psychology on formalizing theories offers a way forward.
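A toy version of the inferential gap (our illustration, not the paper's actual simulation): data generated by a network of direct pairwise interactions, with no latent common cause, can nevertheless be perfectly consistent with a one-factor model:

```python
import numpy as np

# Toy illustration (not the paper's actual simulation): sample data from a
# Gaussian *network* model -- variables coupled directly through a precision
# matrix, with no latent common cause -- and inspect the factor structure.
rng = np.random.default_rng(2)
p = 6
precision = np.eye(p)
precision[~np.eye(p, dtype=bool)] = -0.15   # direct pairwise couplings

cov = np.linalg.inv(precision)
data = rng.multivariate_normal(np.zeros(p), cov, size=5000)
corr = np.corrcoef(data, rowvar=False)

eigvals = np.linalg.eigvalsh(corr)[::-1]
print("share of variance on the first component: %.1f%%" % (100 * eigvals[0] / p))
# The resulting equicorrelated data are exactly consistent with a one-factor
# model, so excellent factor-model fit here would say nothing about whether
# a common cause (a "construct") actually exists.
```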
Perceptual choices depend not only on the current sensory input but also on the behavioral context, such as the history of one's own choices. Yet, it remains unknown how such history signals shape the dynamics of later decision formation. In models of decision formation, it is commonly assumed that choice history shifts the starting point of evidence accumulation toward the bound reflecting the previous choice. We here present results that challenge this idea. We fit bounded-accumulation decision models to human perceptual choice data, and estimated bias parameters that depended on observers' previous choices. Across multiple task protocols and sensory modalities, individual choice history biases in overt behavior were consistently explained by a history-dependent change in evidence accumulation, rather than in its starting point. Choice history signals thus seem to bias the interpretation of current sensory input, akin to shifting endogenous attention toward (or away from) the previously selected interpretation.
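A minimal sketch (illustrative settings, not the authors' fitting pipeline) of why the two mechanisms are behaviorally distinguishable: in a simulated drift diffusion model, a shifted starting point biases mainly fast choices, whereas a biased accumulation (drift) also biases slow ones:

```python
import numpy as np

# Simulate a bounded-accumulation (drift diffusion) model under the two
# competing history-bias mechanisms and compare choice bias in fast vs.
# slow trials.
rng = np.random.default_rng(3)

def simulate_ddm(drift, start, bound=1.0, dt=0.005, sigma=1.0, n_trials=2000):
    """Return choices (True = upper bound) and reaction times."""
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = start, 0.0
        while abs(x) < bound:
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        choices.append(x >= bound)
        rts.append(t)
    return np.array(choices), np.array(rts)

# Starting-point bias: accumulation begins closer to the previously chosen bound.
c_sp, rt_sp = simulate_ddm(drift=0.0, start=0.2)
# Accumulation (drift) bias: the evidence itself is tilted toward that bound.
c_db, rt_db = simulate_ddm(drift=0.4, start=0.0)

# A starting-point shift biases mainly fast decisions, while a drift bias
# persists in slow ones -- the signature separating the two accounts.
for name, c, rt in [("starting point", c_sp, rt_sp), ("drift", c_db, rt_db)]:
    fast = rt < np.median(rt)
    print(f"{name}: P(up | fast) = {c[fast].mean():.2f}, "
          f"P(up | slow) = {c[~fast].mean():.2f}")
```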
PLoS Computational Biology, Journal Year: 2017, Volume and Issue: 13(8), P. e1005684 - e1005684. Published: Aug. 11, 2017
Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the first experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the second experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen options were displayed). In the first experiment, we replicated previous findings of a valence-induced bias, whereby positive prediction errors were preferentially taken into account, relative to negative ones. In contrast, for counterfactual learning, we found the opposite bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.
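A minimal sketch of the valence-dependent update this implies (our rendering with hypothetical parameter names, not the authors' exact model):

```python
import numpy as np

# Q-learning with separate learning rates for positive and negative
# prediction errors, applied to the chosen (factual) and, under complete
# feedback, the unchosen (counterfactual) option.

def update_q(q, chosen, r_chosen, r_unchosen,
             alpha_fact_pos, alpha_fact_neg, alpha_cf_pos, alpha_cf_neg):
    """One trial of valence-dependent factual + counterfactual learning."""
    unchosen = 1 - chosen

    pe_fact = r_chosen - q[chosen]           # factual prediction error
    alpha = alpha_fact_pos if pe_fact > 0 else alpha_fact_neg
    q[chosen] += alpha * pe_fact

    pe_cf = r_unchosen - q[unchosen]         # counterfactual prediction error
    alpha = alpha_cf_pos if pe_cf > 0 else alpha_cf_neg
    q[unchosen] += alpha * pe_cf
    return q

# The reported "choice-confirming" pattern corresponds to
# alpha_fact_pos > alpha_fact_neg together with alpha_cf_neg > alpha_cf_pos.
q = update_q(np.zeros(2), chosen=0, r_chosen=1.0, r_unchosen=0.0,
             alpha_fact_pos=0.4, alpha_fact_neg=0.1,
             alpha_cf_pos=0.1, alpha_cf_neg=0.4)
print(q)
```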
Nature Communications, Journal Year: 2021, Volume and Issue: 12(1). Published: Feb. 26, 2021
Social media has become a modern arena for human life, with billions of daily users worldwide. The intense popularity of social media is often attributed to a psychological need for social rewards (likes), portraying the online world as a Skinner box for the modern human. Yet despite such portrayals, empirical evidence for social media engagement as reward-based behavior remains scant. Here, we apply a computational approach to directly test whether reward learning mechanisms contribute to social media behavior. We analyze over one million posts from over 4,000 individuals on multiple social media platforms, using computational models based on reinforcement learning theory. Our results consistently show that behavior on social media conforms qualitatively and quantitatively to the principles of reward learning. Specifically, users spaced their posts so as to maximize the average rate of accrued social rewards, in a manner subject to both the effort cost of posting and the opportunity cost of inaction. Results further reveal meaningful individual difference profiles for reward learning on social media. Finally, an online experiment (n = 176), mimicking key aspects of social media, verifies that social rewards causally influence behavior, as posited by our computational account. Together, these findings support a reward-learning account of social media engagement and offer new insights into this emergent mode of modern human social behavior.
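The rate-maximization logic can be made concrete with the classic average-reward "vigor" trade-off (our illustration; the paper's model details are not given in the abstract):

```python
import numpy as np

# Posting sooner costs effort (~ c / tau for inter-post latency tau), while
# waiting forgoes reward at the long-run average rate r_bar (opportunity
# cost ~ r_bar * tau). Minimizing c/tau + r_bar*tau gives
# tau* = sqrt(c / r_bar): higher reward rates favor faster posting.

def optimal_latency(effort_cost_coef, avg_reward_rate):
    return np.sqrt(effort_cost_coef / avg_reward_rate)

for r_bar in (0.5, 1.0, 2.0):   # average social rewards per unit time
    print(f"avg reward rate {r_bar}: "
          f"optimal inter-post latency {optimal_latency(1.0, r_bar):.2f}")
```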
Reinforcement Learning (RL) models have revolutionized the cognitive and brain sciences, promising to explain behavior from simple conditioning to complex problem solving, to shed light on developmental and individual differences, and to anchor cognitive processes in specific neural mechanisms. However, the RL literature increasingly reveals contradictory results, which might cast doubt on these claims. We hypothesized that many contradictions arise from two commonly held assumptions about computational model parameters that are actually often invalid: that parameters generalize between contexts (e.g., tasks, models) and that they capture interpretable (i.e., unique, distinctive) neurocognitive processes. To test this, we asked 291 participants aged 8–30 years to complete three learning tasks in one experimental session, and fitted RL models to each. We found that some parameters (exploration / decision noise) showed significant generalization: they followed similar developmental trajectories and were reciprocally predictive across tasks. Still, generalization was significantly below the methodological ceiling. Furthermore, other parameters (learning rates, forgetting) did not show evidence of generalization, and sometimes even followed opposite developmental trajectories. Interpretability was low for all parameters. We conclude that systematic study of context factors (e.g., reward stochasticity; task volatility) will be necessary to enhance the generalizability and interpretability of computational models.
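The generalization test has a simple analysis shape, sketched below with fabricated parameter estimates purely for illustration: fit the model to each task, then correlate each parameter across tasks and compare against a reliability ceiling:

```python
import numpy as np

# In practice these per-participant estimates come from fitting an RL model
# to each task separately; here we fake them to show the analysis shape.
rng = np.random.default_rng(4)
n = 291
trait = rng.normal(size=n)                  # shared individual trait

noise_A = trait + 0.8 * rng.normal(size=n)  # a parameter that generalizes
noise_B = trait + 0.8 * rng.normal(size=n)
lr_A = rng.normal(size=n)                   # a parameter that does not
lr_B = rng.normal(size=n)

for name, a, b in [("decision noise", noise_A, noise_B),
                   ("learning rate", lr_A, lr_B)]:
    r = np.corrcoef(a, b)[0, 1]
    print(f"cross-task correlation, {name}: r = {r:.2f}")
# Comparing r against within-task reliability (e.g., split-half estimates)
# gives the "methodological ceiling" the abstract refers to.
```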