The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when the receptive field input can be predicted by the spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. Yet, chromatic mismatches between center and surround, breaking predictability, reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma responses to red, arose from stimulus adaptation to the full-screen background, suggesting differences between the M- and L-cone signaling pathways. Thus, gamma synchrony signaled whether receptive field (RF) inputs were predicted from the spatial context, while firing rates increased for stimuli that were unpredicted from the context.
Trends in Cognitive Sciences
Journal year: 2019, Issue: 23(3), pp. 235–250
Published: Jan. 30, 2019
The error back-propagation algorithm can be approximated in networks of neurons in which plasticity only depends on the activity of the presynaptic and postsynaptic neurons. These biologically plausible deep learning models include both feedforward and feedback connections, allowing the errors made by the network to propagate through the layers. The learning rules present in different models can be implemented with different types of spike-time-dependent plasticity. The dynamics of neural activity in the models can be described within a common framework of energy minimisation.
This review article summarises recently proposed theories on how neural circuits in the brain could approximate the error back-propagation algorithm used by artificial neural networks. Computational models implementing these theories achieve learning as efficient as artificial neural networks, but they use simple synaptic plasticity rules based on the activity of presynaptic and postsynaptic neurons. The models have similarities, such as including information about errors throughout the network. Furthermore, they incorporate experimental evidence on neural connectivity and responses, and they provide insights into how the brain might be organised such that the modification of synaptic weights on multiple levels of the cortical hierarchy leads to improved performance on tasks.
In the past few years, computer programs using deep learning (see Glossary) have achieved impressive results on complex cognitive tasks that were previously within the reach only of humans, such as processing natural images and language [1: LeCun Y. et al. Deep learning. Nature 2015; 521: 436–444], or playing arcade and board games [2: Mnih V. et al. Human-level control through deep reinforcement learning. Nature 2015; 518: 529–533; 3: Silver D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016; 529: 484–489]. Since these recent applications use extended versions of the classic error back-propagation algorithm [4: Rumelhart D.E. et al. Learning representations by back-propagating errors. Nature 1986; 323: 533–536], their success has inspired studies comparing information processing in such networks and in the brain.
It has been demonstrated that, when networks learn to perform tasks such as image classification or navigation, neurons in their layers develop representations similar to those seen in brain areas involved in the same tasks, such as the receptive fields of neurons across visual cortical areas, or grid cells in the entorhinal cortex [5: Banino A. et al. Vector-based navigation using grid-like representations in artificial agents. Nature 2018; 557: 429–433; 6: Whittington J.C.R. et al. (2018) Generalisation of structural knowledge in the hippocampal-entorhinal system. 31st Conference on Neural Information Processing Systems (NIPS 2018), Montreal; 7: Yamins D.L. and DiCarlo J.J. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci. 2016; 19: 356–365]. This similarity suggests that the brain may employ analogous algorithms. Furthermore, thanks to current computational advances, simulations of networks performing useful functions are now feasible [8: Bowers J.S. Parallel distributed processing theory in the age of neural networks. Trends Cogn. Sci. 2017; 21: 950–961].
A key question that remains open is whether the brain could implement the back-propagation algorithm. The algorithm describes how the weights of connections should be modified during learning, and its attractiveness, in part, comes from prescribing the weight changes that reduce the errors made by the network, according to theoretical analysis. Although it was originally proposed with the brain in mind, the way it modifies weights appears biologically unrealistic [9: Crick F. The recent excitement about neural networks. Nature 1989; 337: 129–132; 10: Grossberg S. Competitive learning: from interactive activation to adaptive resonance. Cogn. Sci. 1987; 11: 23–63]. Nevertheless, a new generation of models demonstrates how back-propagation could be approximated by biological circuits [11: Bengio Y. et al. STDP-compatible approximation of backpropagation in an energy-based model. Neural Comput. 2017; 29: 555–577; 12: Guerguiev J. et al. Towards deep learning with segregated dendrites. eLife 2017; 6: e22901; 13: Sacramento J. et al. (2018) Dendritic cortical microcircuits approximate the backpropagation algorithm. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018); 14: Whittington J.C.R. and Bogacz R. An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity. Neural Comput. 2017; 29: 1229–1262]. These theoretical developments are important because they overrule a dogma, generally accepted for the past 30 years, that back-propagation is too complicated for the brain to implement.
Before discussing this new generation of models in detail, we first give a brief overview of how artificial neural networks are trained with back-propagation, and discuss why the algorithm was considered biologically implausible. To learn effectively from feedback, synaptic weights often need to be appropriately adjusted in many layers of a hierarchical network simultaneously. For example, when a child learns to name letters, an incorrect pronunciation results from combined processing in visual, speech, and associative areas, and correcting it may require modifying connections in all of them. When a multi-layer network makes an error, the back-propagation algorithm assigns credit for that error to individual synapses in all layers of the network, and prescribes by how much each weight should change.
How does the algorithm train networks? A network is trained on a set of examples, each consisting of an input pattern and a target pattern. For each pair, the network generates a prediction, and the weights are then modified to minimise the difference between the predicted and target patterns. To determine the appropriate modification of each weight, an error term is computed for every neuron, describing how that neuron should change its activity to reduce the discrepancy (Box 1). The change of each weight is then determined by the product of the activity of the presynaptic neuron and the error term of the neuron it projects to.

Box 1. Artificial Neural Networks
A conventional artificial neural network consists of layers of units, with each layer receiving weighted input from the previous layer (Figure IA). The activity is propagated through the layers according to Equation 1.1,

x_l = f(W_{l-1} x_{l-1}),   (1.1)

where x_l is a vector denoting the activity of the neurons in layer l, and W_{l-1} is the matrix of synaptic weights between layers l - 1 and l. A function f is applied to the summed input of each neuron to allow nonlinear computations.

During training, the weights are modified to minimise a cost function quantifying the difference between the produced and target patterns, typically defined as

E = 1/2 ||t - x_L||^2,   (1.2)

where t is the target pattern and x_L is the activity of the last layer L. In particular, the weights are changed in the direction of the steepest decrease of the cost function (i.e., along its negative gradient) (Figure ID). Such changes are given by Equation 1.3,

ΔW_l ∝ δ_{l+1} x_l^T,   (1.3)

where δ_{l+1} is the vector of error terms associated with x_{l+1}. For the last layer L, the error terms are given by Equation 1.4,

δ_L = t - x_L,   (1.4)

where t is the target activity. Thus, the error term of an output neuron is positive if its target activity is higher than its actual activity. For the earlier layers, the error terms are computed as in Equation 1.5,

δ_l = (W_l^T δ_{l+1}) · f'(W_{l-1} x_{l-1}),   (1.5)

that is, the error term of a hidden neuron is the sum of the error terms in the layer above, weighted by the connection strengths (and further scaled by the derivative of the activation function; · denotes element-wise multiplication). Intuitively, if a hidden unit sends excitatory projections to units with high error terms, then increasing its activity would help increase the activity of those units, so the hidden unit receives a high error term itself. Once the error terms are computed, each weight is changed according to Equation 1.3, in proportion to the error term of the postsynaptic neuron and the activity of the presynaptic neuron.
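The update rules of Box 1 (Equations 1.1 to 1.5) can be sketched in a few lines of NumPy. The layer sizes, the tanh nonlinearity, the linear output layer, the learning rate, and the single training pair below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: input, hidden, output.
sizes = [4, 8, 3]
# W[l] holds the weights from layer l to layer l + 1 (Equation 1.1).
W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l]))
     for l in range(len(sizes) - 1)]

f = np.tanh                              # nonlinearity f
df = lambda a: 1.0 - np.tanh(a) ** 2     # its derivative f'

def train_step(x0, t, lr=0.1):
    # Forward pass: x_{l+1} = f(W_l x_l), with a linear output layer
    # so that Equation 1.4 applies exactly.
    xs, pre = [x0], []
    for i, Wl in enumerate(W):
        a = Wl @ xs[-1]
        pre.append(a)
        xs.append(f(a) if i < len(W) - 1 else a)
    cost = 0.5 * np.sum((t - xs[-1]) ** 2)   # Equation 1.2
    delta = t - xs[-1]                       # Equation 1.4
    for l in reversed(range(len(W))):
        grad = np.outer(delta, xs[l])        # Equation 1.3: dW_l ~ delta_{l+1} x_l^T
        if l > 0:
            # Equation 1.5: delta_l = f'(a_l) * (W_l^T delta_{l+1})
            delta = (W[l].T @ delta) * df(pre[l - 1])
        W[l] += lr * grad
    return cost

x0 = rng.normal(size=4)
t = np.array([0.5, -0.5, 0.0])
errors = [train_step(x0, t) for _ in range(200)]
print(errors[0] > errors[-1])   # cost shrinks as the weights are trained
```

Note that the error term of each layer is computed only from the error terms of the layer above, which is exactly the locality structure the review asks the brain to reproduce.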
During the training procedure, these steps take place in sequence. In the case of naming letters mentioned above, the input pattern corresponds to the image of a letter. After seeing the image, the child makes a guess at its name (the predicted pattern), produced via the speech areas. On receiving supervision, that is, on hearing the correct name (the target pattern) from his or her parent, the information is propagated back along the processing stream and the connection weights are modified, so that the correct sound will be more likely to be produced again. Although this algorithmic process seems simple enough, there are problems with implementing it in biological circuits. Below, we briefly describe three of these issues.
First, conventional artificial networks only compute activity in the forward direction, while the error terms are computed separately, by an external algorithm. Without a neural representation of the errors, it is unclear how weight updates that depend on computations performed downstream could be carried out. In biological networks, changes in connection strength depend solely on local signals (e.g., the activity of the neurons that the synapse connects), and it is unclear how the required downstream information could be afforded to the synapse. Historically, this was the major criticism of back-propagation's biological plausibility, and it is thus the main focus of our article.
Second, the errors are back-propagated through the same weights that are used to generate the prediction. This weight symmetry requires identical connections to exist in both directions. Although cortical neurons connected bidirectionally occur significantly more often than expected by chance, bidirectional connections are not always present [15: Song S. et al. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005; 3: 507–519]. And even where they are present, the backwards weights would still have to correctly align themselves with the forwards weights.
Third, artificial neurons send continuous output values (corresponding to the firing rates of biological neurons), whereas real neurons communicate with discrete spikes. Generalising back-propagation to discrete spikes is not trivial, as the derivative of a spike cannot easily be found. Beyond the algorithm itself, the description of the computations performed inside artificial neurons is also greatly simplified, typically to a linear summation of inputs. The above-mentioned issues have been investigated in a number of studies.
The lack of an error representation was addressed by early models proposing that learning is instead driven by a global error signal carried by neuromodulators [16: Mazzoni P. et al. A more biologically plausible learning rule for neural networks. Proc. Natl. Acad. Sci. U. S. A. 1991; 88: 4433–4437; 17: Williams R.J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 1992; 8: 229–256; 18: Unnikrishnan K.P. and Venugopal K.P. Alopex: a correlation-based learning algorithm for feedforward and recurrent neural networks. Neural Comput. 1994; 6: 469–490; 19: Seung H.S. Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron 2003; 40: 1063–1073]. However, such learning is slow and does not scale with the size of the network [20: Werfel J. et al. Learning curves for stochastic gradient descent in linear feedforward networks. Neural Comput. 2005; 17: 2699–2718].
More promisingly, several models have been proposed in which errors are represented locally; these closely approximate back-propagation and learn similarly fast to the standard algorithm on benchmark problems (e.g., handwritten digit classification) [12; 21: Lillicrap T.P. et al. Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 2016; 7: 13276; 22: Scellier B. and Bengio Y. Equilibrium propagation: bridging the gap between energy-based models and backpropagation. Front. Comput. Neurosci. 2017; 11: 24]. We summarise them in detail in the following sections.
The criticism concerning weight symmetry was addressed by studies demonstrating that networks can achieve good performance even when the feedback weights are random [21; 23: Zenke F. and Ganguli S. SuperSpike: supervised learning in multilayer spiking neural networks. Neural Comput. 2018; 30: 1514–1541; 24: Mostafa H. et al. (2017) Deep supervised learning using local errors. arXiv preprint arXiv:1711.06756; 25: Scellier B. et al. (2018) Generalization of equilibrium propagation to vector field dynamics. arXiv preprint arXiv:1808.04873; 26: Liao Q. et al. (2016) How important is weight symmetry in backpropagation? Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1837–1844, AAAI; 27: Baldi P. and Sadowski P. A theory of local learning, the learning channel, and the optimality of backpropagation. Neural Netw. 2016; 83: 51–74]. That being said, some concern remains regarding this issue [28: Bartunov S. et al. (2018) Assessing the scalability of biologically-motivated deep learning algorithms and architectures].
With regard to the realism of neurons, it has been shown that back-propagation can be generalised to networks producing spikes [29: Sporea I. and Grüning A. Supervised learning in multilayer spiking neural networks. Neural Comput. 2013; 25: 473–509], and the difficulty of calculating derivatives of spikes can be overcome [23]. More realistic neurons have also been considered, in which the neurons are themselves small networks of dendritic structures [30: Schiess M. et al. Somato-dendritic synaptic plasticity and error-backpropagation in active dendrites. PLoS Comput. Biol. 2016; 12: e1004638].
There is a great diversity of ideas on how learning could be organised in the brain [31: Balduzzi D. et al. (2015) Kickback cuts backprop's red-tape: biologically plausible credit assignment in neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, pp. 485–491; 32: Krotov D. and Hopfield J. (2018) Unsupervised learning by competing hidden units. arXiv preprint arXiv:1806.10181; 33: Kuśmierz Ł. et al. Learning with three factors: modulating Hebbian plasticity with errors. Curr. Opin. Neurobiol. 2017; 46: 170–177; 34: Marblestone A.H. et al. Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci. 2016; 10: 94; 35: Bengio Y. (2014) How auto-encoders could provide credit assignment in deep networks via target propagation. arXiv preprint arXiv:1407.7906; 36: Lee D.-H. et al. (2015) Difference target propagation. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 498–515, Springer]; however, we focus here on a group of models whose principles are closely related to those behind the generalized recirculation algorithm [37: O'Reilly R.C. Biologically plausible error-driven learning using local activation differences: the generalized recirculation algorithm. Neural Comput. 1996; 8: 895–938]. These models reproduce substantial experimental data while paralleling back-propagation in performance; they operate with minimal external control, their weight modifications depend only on local signals, and they relate to known biology, such as spike-time-dependent plasticity and the properties of pyramidal neurons and cortical microcircuits. We emphasise that these models rely on fundamentally similar principles and can thereby learn without requiring an external program to control their dynamics. We divide the reviewed models into two classes, differing in how the errors are represented; in the first class, each model encodes the errors as differences in neural activity over time.
The contrastive learning framework [37; 38: Ackley D.H. et al. A learning algorithm for Boltzmann machines. Cogn. Sci. 1985; 9: 147–169] relies on the observation that the weight change prescribed by back-propagation, being proportional to the error (the difference between target and prediction), can be decomposed into two separate updates: one made while the network generates its prediction, and the other once the target is provided (Box 2). The network is therefore run twice: once with anti-Hebbian plasticity while the network produces its prediction, and once with Hebbian plasticity after the network converges on the target (after the target has propagated through the connections). The anti-Hebbian update plays the role of 'unlearning' the existing association between the input and the prediction, while the second update learns the association with the target.

Box 2. Temporal-Error Models
Temporal-error models describe networks of nodes in which the activity of a given node is driven by the summed inputs from the adjacent layers and decays towards a baseline level (Figure IB). As such networks are recurrent, it is no longer possible to write an equation directly giving the activity of each layer (such as Equation 1.1 in Box 1); instead, the dynamics are described by a differential equation, Equation 2.1,

dx_l/dt = -x_l + W_{l-1} x_{l-1} + W_l^T x_{l+1},   (2.1)

[72: Pineda F.J. Generalization of back-propagation to recurrent neural networks. Phys. Rev. Lett. 1987; 59: 2229–2232], where dx_l/dt denotes the change of activity over time (all equations in the figure ignore nonlinearities for brevity). In this model, plasticity is driven by activity occurring at different times. It is easiest to consider how the weights connecting the two top layers are modified: the update is driven by the temporal change of the postsynaptic activity,

ΔW_2 ∝ (dx_3/dt) x_2^T.   (2.2)

Substituting the dynamics of the output layer, one can see that this update contains the required error terms. O'Reilly noted that in the presence of backward connections the target propagates back through the network in a temporal sequence, and that the resulting learning approximates a version of back-propagation [37]. In such networks, the output gradually evolves from the prediction (x_3|¬t) towards the target values (t), as in the sample trajectory of Figure ID. The temporal change of the output (dx_3/dt) is then proportional to (t - x_3|¬t), that is, to the error term of the output layer (defined in Equation 1.4). Hence, the update of Equation 2.2 is simply equal to the update prescribed by back-propagation,

ΔW_2 ∝ (t - x_3|¬t) x_2^T.   (2.3)
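The dynamics of Equation 2.1 can be illustrated with a small linear simulation. The layer sizes, weight scales, and integration step below are assumptions chosen for stability, not values from the article; the sketch checks that the relaxed activity reaches the fixed point of Equation 2.1, and that an update driven by the temporal change of the output (Equation 2.2) reproduces the back-propagation update for the top weights (Equations 2.3 and 1.4).

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, n3 = 4, 6, 2                      # illustrative layer sizes
W1 = rng.normal(0.0, 0.1, (n2, n1))      # weights from layer 1 to layer 2
W2 = rng.normal(0.0, 0.1, (n3, n2))      # weights from layer 2 to layer 3

x1 = rng.normal(size=n1)                 # clamped input pattern
t = np.array([1.0, -1.0])                # target pattern

# Relaxation phase without the target: integrate Equation 2.1
# (linear f, i.e., nonlinearities ignored as in the figure).
x2, x3 = np.zeros(n2), np.zeros(n3)
dt = 0.1
for _ in range(1000):
    x2 = x2 + dt * (-x2 + W1 @ x1 + W2.T @ x3)
    x3 = x3 + dt * (-x3 + W2 @ x2)

x3_pred = x3.copy()                      # the prediction x_3 | no target

# When the output is subsequently driven to the target, its change over
# time is the output error term of Equation 1.4 ...
x3_dot = t - x3_pred
dW2_temporal = np.outer(x3_dot, x2)      # Equation 2.2, Hebbian in form

# ... so the temporal-error update equals the back-propagation rule.
dW2_backprop = np.outer(t - x3_pred, x2) # Equation 2.3

print(np.allclose(x3, W2 @ x2, atol=1e-6),      # fixed point of Equation 2.1
      np.allclose(dW2_temporal, dW2_backprop))  # the two updates coincide
```

The weight standard deviation of 0.1 keeps the recurrent coupling well below the instability threshold of the leaky dynamics, so the relaxation is guaranteed to settle before the target is presented.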
Annals of the New York Academy of Sciences
Journal year: 2020, Issue: 1464(1), pp. 242–268
Published: March 1, 2020
Abstract
For many years, the dominant theoretical framework guiding research into the neural origins of perceptual experience has been provided by hierarchical feedforward models, in which sensory inputs are passed through a series of increasingly complex feature detectors. However, the long-standing orthodoxy of these accounts has recently been challenged by a radically different set of theories that contend that perception arises from a purely inferential process supported by two distinct classes of neurons: those that transmit predictions about sensory states, and those that signal sensory information that deviates from those predictions. Although predictive processing (PP) models have become influential in cognitive neuroscience, they have also been criticized for lacking the empirical support to justify their status. This limited evidence base partly reflects the considerable methodological challenges that are presented when trying to test the unique predictions of these models. However, a confluence of technological and theoretical advances has prompted a recent surge in human and nonhuman neurophysiological research seeking to fill this gap. Here, we will review this new research and evaluate the degree to which its findings support the key claims of PP.
Trends in Cognitive Sciences
Journal year: 2020, Issue: 24(10), pp. 814–825
Published: Aug. 24, 2020
Recent breakthroughs in neurobiology indicate that the time is ripe to understand how cellular-level mechanisms are related to conscious experience. Here, we highlight the biophysical properties of pyramidal cells, which allow them to act as gates that control the evolution of global activation patterns. In conscious states, this cellular mechanism enables complex sustained dynamics within the thalamocortical system, whereas during unconscious states such signal propagation is prohibited. We suggest that the hallmark of conscious processing is the flexible integration of bottom-up and top-down data streams at the cellular level. This cellular integration mechanism provides the foundation for Dendritic Information Theory, a novel neurobiological theory of consciousness.
Analytical Chemistry
Journal year: 2020, Issue: 92(12), pp. 7998–8004
Published: June 8, 2020
Fallacies about the nature of biases have shadowed a proper cognitive understanding of biases and their sources, which in turn would lead to ways that minimize their impact. Six such fallacies are presented: it is an ethical issue, bias only applies to "bad apples", experts are impartial and immune, technology eliminates bias, the bias blind spot, and the illusion of control. Then, eight sources of bias are discussed and conceptualized within three categories: (A) factors that relate to the specific case and analysis, which include the data, reference materials, and contextual information; (B) factors that relate to the specific person doing the analysis, which include past experience and base rates, organizational factors, education and training, and personal factors; and lastly, (C) the cognitive architecture and human nature that impacts all of us. These sources of bias can impact what the data are (e.g., how they are sampled and collected, or what is considered as noise and therefore disregarded), the actual results (e.g., decisions on testing strategies, how the analysis is conducted, and when to stop testing), and the conclusions (e.g., the interpretation of the results). The paper concludes with specific measures to minimize these biases.
Progress in Neurobiology
Journal year: 2020, Issue: 192, pp. 101821
Published: May 21, 2020
The hippocampus is crucial for episodic memory, but it is also involved in online prediction. Evidence suggests that a unitary hippocampal code underlies both memory and predictive processing, yet within a predictive coding framework the hippocampal-neocortical interactions that accompany these two phenomena are distinct and opposing. Namely, during recall, the hippocampus is thought to exert an excitatory influence on the neocortex, to reinstate activity patterns across cortical circuits. This contrasts with empirical and theoretical work on predictive processing, where descending predictions suppress prediction errors to 'explain away' ascending inputs via inhibition. In this hypothesis piece, we attempt to dissolve this previously overlooked dialectic. We consider how the hippocampus may facilitate prediction and recall, respectively, by inhibiting neocortical error units or increasing their gain. We propose that these processing modes depend upon the neuromodulatory gain (or precision) ascribed to error units. Within this framework, recall is cast as arising from fictive prediction errors that furnish training signals to optimise generative models of the world, in the absence of sensory data.