Reward expectations direct learning and drive operant matching in Drosophila
Proceedings of the National Academy of Sciences
Journal Year: 2023
Volume and Issue: 120(39)
Published: Sept. 21, 2023
Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, a behavior known as Herrnstein's operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits. However, no past work has mapped these mechanisms onto the brain, leaving the biological relevance of the theory unclear. Here, we discovered […]
Language: English
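For context on the behavior referenced above, Herrnstein's operant matching law can be stated in its standard two-option form (given here for orientation, not quoted from the paper):

```latex
\frac{C_1}{C_1 + C_2} \;=\; \frac{R_1}{R_1 + R_2}
```

where C_i is the number of choices allocated to option i and R_i is the total reward received from it; choice fractions track reward fractions.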
Brain mechanism of foraging: Reward-dependent synaptic plasticity versus neural integration of values
Proceedings of the National Academy of Sciences
Journal Year: 2024
Volume and Issue: 121(14)
Published: March 29, 2024
During foraging behavior, action values are persistently encoded in neural activity and updated depending on the history of choice outcomes. What is the neural mechanism for value maintenance and updating? Here, we explore two contrasting network models: synaptic learning versus neural integration. We show that both models can reproduce extant experimental data, but they yield distinct predictions about the underlying biological circuits. In particular, the integrator model, but not the synaptic learning model, requires reward signals mediated by pools of neurons selective for the choice alternatives, with their projections aligned with the linear attractor axes of the valuation system. We demonstrate experimentally observable dynamical signatures and feasible perturbations to differentiate the two scenarios, suggesting a more robust candidate mechanism. Overall, this work provides a modeling framework to guide future research on probabilistic foraging.
Language: English
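As a rough illustration of the two hypotheses contrasted in this abstract, the sketch below caricatures each one as a scalar update rule: value stored in plastic synapses and updated by a reward prediction error, versus value held as persistent activity that leakily integrates reward input. The functions and parameters are assumptions for illustration only; the paper itself analyzes full network models, not these equations.

```python
import numpy as np

# Caricature of the two value-updating schemes contrasted above. These scalar
# rules are illustrative assumptions; the study analyzes full network models.

def synaptic_learning_update(value, reward, lr=0.1):
    """Value stored in synaptic weights, updated by a reward prediction error."""
    return value + lr * (reward - value)

def neural_integration_update(value, reward, gain=0.1, leak=0.1):
    """Value held as persistent activity along a line attractor that leakily
    integrates the incoming reward signal."""
    return (1.0 - leak) * value + gain * reward

# Both rules track a change in reward probability, with different dynamics.
rng = np.random.default_rng(0)
v_syn = v_int = 0.0
for t in range(200):
    p_reward = 0.8 if t < 100 else 0.2      # reward contingency switches halfway
    r = float(rng.random() < p_reward)
    v_syn = synaptic_learning_update(v_syn, r)
    v_int = neural_integration_update(v_int, r)
print(f"synaptic estimate: {v_syn:.2f}, integrator estimate: {v_int:.2f}")
```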
A Neural Circuit Framework for Economic Choice: From Building Blocks of Valuation to Compositionality in Multitasking
bioRxiv (Cold Spring Harbor Laboratory)
Journal Year: 2025
Volume and Issue: unknown
Published: March 13, 2025
Abstract
Value-guided decisions are at the core of reinforcement learning and neuroeconomics, yet the basic computations they require remain poorly understood at a mechanistic level. For instance, how does the brain implement the multiplication of reward magnitude by probability to yield an expected value? Where within a neural circuit is the indifference point for comparing different reward types encoded? How do learned values generalize to novel options? Here, we introduce a biologically plausible model that adheres to Dale’s law and is trained on five choice tasks, offering potential answers to these questions. The model captures key neurophysiological observations from the orbitofrontal cortex of monkeys and generalizes to novel offer values. Using a single network to solve these diverse tasks, we identified compositional representations, quantified via task variance analysis and corroborated by curriculum learning. This work provides testable predictions to probe the neural basis of decision making and its disruption in neuropsychiatric disorders.
Language: English
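Two of the computational questions raised in this abstract can be made concrete with a short sketch: multiplying reward magnitude by probability to obtain an expected value, and constraining a recurrent weight matrix so that every unit is purely excitatory or purely inhibitory (Dale's law). The sizes, the 80/20 excitatory/inhibitory split, and the sign-masking scheme below are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

# (1) Expected value as reward magnitude times probability, per offer.
magnitudes = np.array([2.0, 4.0])        # hypothetical juice amounts
probabilities = np.array([0.75, 0.25])   # hypothetical delivery probabilities
expected_values = magnitudes * probabilities    # -> [1.5, 1.0]

# (2) Dale's law in a recurrent weight matrix: each unit's outgoing weights
# share a single sign (excitatory or inhibitory). The 80/20 split is assumed.
n_units, n_exc = 10, 8
signs = np.ones(n_units)
signs[n_exc:] = -1.0                                  # last units are inhibitory
rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(n_units, n_units))) * signs[None, :]

# Column j holds the outgoing weights of presynaptic unit j.
assert np.all(W[:, :n_exc] >= 0) and np.all(W[:, n_exc:] <= 0)
print(expected_values)
```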
A global dopaminergic learning rate enables adaptive foraging across many options
Laura L. Grima, Yipei Guo, Lakshmi Narayan, et al.
bioRxiv (Cold Spring Harbor Laboratory)
Journal Year: 2024
Volume and Issue: unknown
Published: Nov. 4, 2024
Abstract
In natural environments, animals must efficiently allocate their choices across multiple concurrently available resources when foraging, a complex decision-making process not fully captured by existing models. To understand how rodents learn to navigate this challenge, we developed a novel paradigm in which untrained, water-restricted mice were free to sample from six options that were rewarded at a range of deterministic intervals and positioned around the walls of a large (∼2m) arena. Mice exhibited rapid learning, matching the integrated reward ratios of the options within the first session. A reinforcement learning model with separate states for staying at or leaving an option and a dynamic, global learning rate was able to accurately reproduce mouse decision-making. Fiber photometry recordings revealed that dopamine signals in the nucleus accumbens core (NAcC), but not the dorsomedial striatum (DMS), more closely reflected this global learning rate than local, error-based updating. Altogether, our results provide insight into the neural substrate of an algorithm that allows animals to rapidly exploit multiple options when foraging in large spatial environments.
Language: English
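The abstract names two model ingredients, per-option stay/leave action values and a single, dynamically adjusted global learning rate, which can be sketched as follows. This is a minimal illustration under assumed update rules, not the authors' published model.

```python
import numpy as np

# Minimal sketch of per-option stay/leave action values plus a single,
# dynamically adjusted global learning rate. The specific update rules are
# assumptions for illustration, not the authors' published model.

class StayLeaveAgent:
    def __init__(self, n_options=6, base_lr=0.2, beta=3.0, seed=0):
        self.values = np.zeros((n_options, 2))  # column 0: stay, column 1: leave
        self.global_lr = base_lr                 # one rate shared by all options
        self.beta = beta                         # softmax inverse temperature
        self.rng = np.random.default_rng(seed)

    def choose(self, option):
        """Pick stay (0) or leave (1) at the current option via softmax."""
        q = self.beta * self.values[option]
        p = np.exp(q - q.max())
        p /= p.sum()
        return int(self.rng.choice(2, p=p))

    def update(self, option, action, reward):
        """Update the taken action's value; nudge the shared learning rate
        toward the recent magnitude of prediction errors (an assumption)."""
        rpe = reward - self.values[option, action]
        self.values[option, action] += self.global_lr * rpe
        self.global_lr = 0.95 * self.global_lr + 0.05 * min(abs(rpe), 1.0)

agent = StayLeaveAgent()
action = agent.choose(option=0)
agent.update(option=0, action=action, reward=1.0)
```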