Inferring neural activity before plasticity as a foundation for learning beyond backpropagation
Nature Neuroscience, Journal Year: 2024, Volume and Issue: 27(2), P. 348 - 358
Published: Jan. 3, 2024
Abstract
For both humans and machines, the essence of learning is to pinpoint which components in its information processing pipeline are responsible for an error in its output, a challenge that is known as ‘credit assignment’. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle on credit assignment called ‘prospective configuration’. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms and (3) reproduces surprising patterns of neural activity and behavior observed in diverse human and rat learning experiments.
Language: English
Prediction of future input explains lateral connectivity in primary visual cortex
Sebastian Klavinskis-Whiting, Emil Fristed, Yosef Singer et al.
Current Biology, Journal Year: 2025, Volume and Issue: unknown
Published: Jan. 1, 2025
Language: English
Balancing prior knowledge and sensory data in a predictive coding model of coherent motion detection
PLoS Computational Biology, Journal Year: 2025, Volume and Issue: 21(5), P. e1013116 - e1013116
Published: May 21, 2025
This study introduces a neurobiologically inspired computational model based on the predictive coding algorithm, providing insights into coherent motion detection processes. The model is designed to reflect key principles observed in the visual system, particularly MT neurons and their surround suppression mechanisms, which play a critical role in detecting global motion. By integrating these principles, the model simulates how motion structures are decomposed into individual and shared sources, mirroring the brain’s strategy for extracting global motion patterns. The results obtained from random dot stimuli underscore the delicate balance between sensory data and prior knowledge in coherent motion detection. Model testing across varying noise levels reveals that, as noise increases, the model takes longer to stabilize its estimates, consistent with psychophysical experiments showing that response duration (e.g., reaction time or decision-making time) also increases under higher noise conditions. This suggests that an excessive emphasis on prior knowledge prolongs stabilization in motion detection, whereas optimal integration of prior expectations enhances accuracy and efficiency by preventing disturbances due to noise. These findings contribute potential explanations for coherent motion detection deficiencies in schizophrenia.
Language: English
Sequential Memory with Temporal Predictive Coding
arXiv (Cornell University), Journal Year: 2023, Volume and Issue: unknown
Published: Jan. 1, 2023
Forming accurate memory of sequential stimuli is a fundamental function of biological agents. However, the computational mechanism underlying sequential memory in the brain remains unclear. Inspired by neuroscience theories and recent successes in applying predictive coding (PC) to static memory tasks, in this work we propose a novel PC-based model for sequential memory, called temporal predictive coding (tPC). We show that our tPC models can memorize and retrieve sequential inputs accurately with a biologically plausible neural implementation. Importantly, our analytical study reveals that tPC can be viewed as a classical Asymmetric Hopfield Network (AHN) with an implicit statistical whitening process, which leads to more stable performance in sequential memory tasks with structured inputs. Moreover, we find that tPC exhibits properties consistent with behavioral observations in neuroscience, thereby strengthening its biological relevance. Our work establishes a possible computational mechanism of sequential memory in the brain that can also be theoretically interpreted using existing memory frameworks.
Language: English
Temporal prediction captures key differences between spiking excitatory and inhibitory V1 neurons
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown
Published: May 14, 2024
Abstract
Neurons in primary visual cortex (V1) respond to natural scenes with a sparse and irregular spike code that is carefully balanced by an interplay between excitatory and inhibitory neurons. These neuron classes differ in their spike statistics, tuning preferences, connectivity statistics and temporal dynamics. To date, no single computational principle has been able to account for these properties. We developed a recurrently connected spiking network of excitatory and inhibitory units trained for efficient prediction of natural movie clips. We found that the model exhibited simple and complex cell-like tuning, V1-like spike statistics and, notably, also captured key differences between V1 excitatory and inhibitory neurons. This suggests that these properties collectively serve to facilitate efficient prediction of the sensory future.
Language: English
Learning probability distributions of sensory inputs with Monte Carlo predictive coding
PLoS Computational Biology, Journal Year: 2024, Volume and Issue: 20(10), P. e1012532 - e1012532
Published: Oct. 30, 2024
It has been suggested that the brain employs probabilistic generative models to optimally interpret sensory information. This hypothesis has been formalised in distinct frameworks, focusing on explaining separate phenomena. On one hand, classic predictive coding theory proposed how probabilistic generative models can be learned by networks of neurons employing local synaptic plasticity. On the other hand, neural sampling theories have demonstrated how stochastic dynamics enable neural circuits to represent the posterior distributions of latent states of the environment. These frameworks were brought together by variational filtering, which introduced sampling into predictive coding. Here, we consider a variant of variational filtering for static inputs, which we refer to as Monte Carlo predictive coding (MCPC). We demonstrate that the integration of predictive coding with sampling results in a neural network that learns precise generative models using local computation and local plasticity. The neural dynamics of MCPC infer the posterior distributions of latent states in the presence of sensory inputs, and generate likely inputs in their absence. Furthermore, MCPC captures experimental observations on the variability of neural activity during perceptual tasks. By combining predictive coding and sampling, MCPC can account for both sets of neural data that previously had been explained by these individual frameworks.
Language: English
Predictive and error coding for vocal communication signals in the songbird auditory forebrain
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown
Published: Feb. 26, 2024
Abstract
Predictive coding posits that sensory signals are compared to internal models, with the resulting prediction-error signals carried in the spiking responses of single neurons. Despite its proposal as a general cortical mechanism, including for speech processing, whether or how predictive coding functions in single-neuron responses to vocal communication signals is unknown. As a proxy for such internal models, we developed a neural network that uses the current sensory context to predict future spectrotemporal features of a vocal communication signal, birdsong. We then represent birdsong either as weighted sets of latent predictive features evolving in time, or as time-varying prediction-errors that reflect the difference between ongoing network-predicted and actual song. Using these spectrotemporal, predictive, and prediction-error song representations, we fit linear/non-linear receptive fields to single neurons recorded from the caudomedial nidopallium (NCM), the caudal mesopallium (CMM) and Field L, analogs of mammalian auditory cortices, in anesthetized European starlings, Sturnus vulgaris, listening to conspecific songs. In all three regions, the predictive features yield the best model of song-evoked responses, but unique information about each representation (signal, prediction, error) is carried in the responses of single neurons. The relative weighting of this information varies across regions. In contrast to many computational models, neither prediction nor error responses are segregated into separate neurons; this continuous interplay between prediction and error is consistent with the relevance of predictive coding for the processing of temporally patterned communication signals, but a new view of prediction and error as integrated within single neurons is required.
Language: English
Prediction of future input explains lateral connectivity in primary visual cortex
Sebastian Klavinskis-Whiting, Emil Fristed, Yosef Singer et al.
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown
Published: June 1, 2024
Neurons in primary visual cortex (V1) show a remarkable functional specificity in their pre- and postsynaptic partners. Recent work has revealed a variety of wiring biases describing how the short- and long-range connections of V1 neurons relate to their tuning properties. However, it is less clear whether these connectivity rules are based on some underlying principle of cortical organization. Here, we show that this functional specificity emerges naturally in a recurrent neural network optimized to predict upcoming sensory inputs for natural stimuli. This temporal prediction model reproduces the complex relationships between connectivity and orientation and direction preferences, the tendency of highly connected neurons to respond more similarly to natural movies, and the differences between excitatory and inhibitory populations. Together, these findings provide a principled explanation for the anatomical properties of early visual cortex.
Language: English
Balancing Prior Knowledge and Sensory Data in a Predictive Coding Model: Insights into Coherent Motion Detection in Schizophrenia
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown
Published: June 1, 2024
Abstract
This study introduces a biologically plausible computational model based on the predictive coding algorithm, providing insights into coherent motion detection processes and potential deficiencies in schizophrenia. The model decomposes motion structures into individual and shared sources, highlighting the critical role of surround suppression in detecting global motion. It sheds light on how the brain extracts global structure and comprehends coherent motion within the visual field. The results obtained from random dot stimuli underscore the delicate balance between sensory data and prior knowledge in coherent motion detection. Model testing across varying noise levels reveals longer convergence times with higher noise, consistent with psychophysical experiments showing that response duration (e.g., reaction time or decision-making time) also increases under higher noise levels. This suggests that an excessive emphasis on prior knowledge extends convergence times. Conversely, for faster convergence, the model requires a certain level of prior knowledge to prevent disturbance due to noise. These findings contribute potential explanations for the coherent motion deficits observed in schizophrenia.
Language: English
Policy optimization emerges from noisy representation learning
bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2024, Volume and Issue: unknown
Published: Nov. 3, 2024
Abstract
Nervous systems learn representations of the world and policies to act within it. We present a framework that uses reward-dependent noise to facilitate policy optimization in representation learning networks. These networks balance extracting normative features with task-relevant information to solve tasks. Moreover, their representational changes reproduce several experimentally observed shifts in the neural code during task learning. Our framework presents a biologically plausible mechanism for emergent policy optimization amid evidence that noise plays a vital role in governing neural dynamics. Code is available at: NeuralThermalOptimization.
Language: English