Nature Communications, Journal Year: 2023, Volume and Issue: 14(1), Published: Sept. 27, 2023
Self-organizing memristive nanowire connectomes have been exploited for the physical (in materia) implementation of brain-inspired computing paradigms. Despite it having been shown that the emergent behavior relies on weight plasticity at the single junction/synapse level and on wiring plasticity involving topological changes, a shift to multiterminal paradigms is needed to unveil dynamics at the network level. Here, we report tomographical evidence of memory engrams (or traces) in nanowire connectomes, i.e., physicochemical changes analogous to those in biological neural substrates supposed to endow the representation of experience stored in the brain. An experimental/modeling approach shows that spatially correlated short-term effects can turn into long-lasting engram patterns inherently related to topology inhomogeneities. The ability to exploit both encoding and consolidation of information on the same substrate would open radically new perspectives for in materia computing, while offering neuroscientists an alternative platform to understand the role of memory in learning and knowledge.
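To make the consolidation mechanism described above concrete in the most schematic terms, the following Python sketch (not the authors' model; all parameters, thresholds, and variable names are illustrative assumptions) shows how repeated, spatially correlated short-term potentiation of one subset of memristive junctions can cross a consolidation threshold and persist as a long-lasting pattern, while unstimulated junctions decay back to baseline.

import numpy as np

rng = np.random.default_rng(0)
n_junctions = 100
w = np.full(n_junctions, 0.05)           # junction conductances (arbitrary units)
consolidated = np.zeros(n_junctions, dtype=bool)

engram = rng.choice(n_junctions, 20, replace=False)  # spatially correlated subset
eta, tau_decay, theta = 0.08, 50.0, 0.6  # potentiation rate, decay constant, threshold

for t in range(500):
    stim = np.zeros(n_junctions)
    stim[engram] = 1.0                              # correlated stimulation of one subset
    w += eta * stim * (1.0 - w)                     # bounded short-term potentiation
    w -= (w - 0.05) / tau_decay * (~consolidated)   # volatile decay unless consolidated
    consolidated |= w > theta                       # long-lasting change once threshold crossed

print(f"consolidated junctions: {consolidated.sum()} / {n_junctions}")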
Science Advances, Journal Year: 2023, Volume and Issue: 9(16), Published: April 21, 2023
Nanowire networks (NWNs) mimic the brain's neurosynaptic connectivity and emergent dynamics. Consequently, NWNs may also emulate the synaptic processes that enable higher-order cognitive functions such as learning and memory. A quintessential task used to measure human working memory is the n-back task. In this study, task variations inspired by the n-back task are implemented in a NWN device, and external feedback is applied to emulate brain-like supervised and reinforcement learning. NWNs are found to retain information for at least n = 7 steps back, remarkably similar to the originally proposed "seven plus or minus two" rule for human subjects. Simulations elucidate how synapse-like junction plasticity depends on previous modifications, analogous to "synaptic metaplasticity" in the brain, and how memory is consolidated via strengthening and pruning of conductance pathways.
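For readers unfamiliar with the n-back paradigm referenced above, the following Python sketch models the task itself: stimulus generation, ground-truth match labels, and how a limited memory span caps accuracy. It is a hypothetical illustration only and does not model the nanowire network device or its feedback scheme.

import random
from collections import deque

def make_nback_trials(n, length, alphabet="ABCD", p_match=0.3, seed=0):
    """Generate a stimulus sequence and ground-truth n-back match labels."""
    rng = random.Random(seed)
    stimuli = []
    for i in range(length):
        if i >= n and rng.random() < p_match:
            stimuli.append(stimuli[i - n])            # force a match n steps back
        else:
            stimuli.append(rng.choice(alphabet))
    labels = [i >= n and stimuli[i] == stimuli[i - n] for i in range(length)]
    return stimuli, labels

def score_responder(stimuli, labels, n, memory_span):
    """Score a responder that only recalls the last `memory_span` stimuli."""
    buf, correct = deque(maxlen=memory_span), 0
    for stim, is_match in zip(stimuli, labels):
        response = len(buf) >= n and buf[-n] == stim  # recall the item n steps back
        correct += response == is_match
        buf.append(stim)
    return correct / len(stimuli)

stimuli, labels = make_nback_trials(n=7, length=200)
for span in (3, 7, 9):
    print(f"memory span {span}: accuracy {score_responder(stimuli, labels, 7, span):.2f}")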
Neuropsychopharmacology, Journal Year: 2021, Volume and Issue: 47(1), P. 20 - 40, Published: Sept. 28, 2021
Abstract
The fundamental importance of prefrontal cortical connectivity to information processing and, therefore, to disorders of cognition, emotion, and behavior has been recognized for decades. Anatomic tracing studies in animals have formed the basis for delineating direct monosynaptic connectivity, from the cells of origin, through axon trajectories, to synaptic terminals. Advances in neuroimaging combined with network science have taken the lead in developing complex wiring diagrams, or connectomes, of the human brain. A key question is how well these magnetic resonance imaging (MRI)-derived networks and hubs reflect the anatomic "hard wiring" first proposed to underlie the distribution of large-scale interactions. In this review, we address this challenge by focusing on what is known about prefrontal connections in non-human primates and how this compares to MRI-derived measurements of network organization in humans. First, we outline the connections and pathways of each prefrontal cortex (PFC) region. We then review the available MRI-based techniques for indirectly measuring structural and functional connectivity, and introduce graph theoretical methods for the analysis of hubs, modules, and topologically integrative features of the connectome. Finally, we bring the two approaches together, using specific examples, to demonstrate how connections demonstrated by tract-tracing studies can directly inform our understanding of the composition of PFC nodes and the edges that connect PFC to subcortical areas.
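As a concrete illustration of the graph theoretical methods mentioned above (hub identification and module detection), the hedged Python sketch below applies standard networkx routines to a toy adjacency matrix standing in for an MRI-derived connectivity matrix; the matrix, threshold, and node count are invented for the example.

import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
adjacency = (rng.random((40, 40)) > 0.85).astype(float)      # toy connectivity matrix
np.fill_diagonal(adjacency, 0)
G = nx.from_numpy_array(np.maximum(adjacency, adjacency.T))   # undirected graph

# Hubs: nodes with high degree and high betweenness centrality.
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
hubs = sorted(G.nodes, key=lambda n: (degree[n], betweenness[n]), reverse=True)[:5]

# Modules: communities found by greedy modularity maximization.
modules = nx.algorithms.community.greedy_modularity_communities(G)

print("candidate hubs:", hubs)
print("number of modules:", len(modules))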
PLoS Computational Biology, Journal Year: 2022, Volume and Issue: 18(11), P. e1010639 - e1010639, Published: Nov. 16, 2022
The connectivity of Artificial Neural Networks (ANNs) is different from the one observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANNs architectures? Can we learn about what network features support computation in the brain when solving a task? At the meso/macro-scale level of connectivity, ANNs' architectures are carefully engineered, and such design decisions have been of crucial importance for many recent performance improvements. On the other hand, BNNs exhibit complex emergent connectivity patterns at all scales. At the individual level, connectivity results from development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous network features have been identified over the years, but their role in the brain's ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific connectivity features only on abstract dynamical properties, although the implications of real network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating connectomes into Bio-Echo State Networks, which we use to solve memory tasks, allowing us to probe the potential computational role of real connectivity in task solving. We find results consistent across species, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. Our framework, bio2art, maps and scales up connectomes so that they can be integrated into recurrent ANNs. This allows us to show the importance of interareal connectivity patterns, stressing the role of stochastic processes in determining neural network connectivity in general.
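To make the echo state network component concrete, the sketch below implements a minimal memory-task readout with a random reservoir matrix as a stand-in for a connectome-derived one; it does not reproduce the bio2art mapping or the paper's cross-species connectomes, and all sizes and scalings are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_res, horizon, T = 200, 5, 2000

W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
W_in = rng.uniform(-1, 1, size=n_res)

u = rng.uniform(-0.5, 0.5, size=T)               # input signal to be remembered
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # standard (non-leaky) reservoir update
    states[t] = x

# Memory task: linearly read out the input presented `horizon` steps earlier.
X, y = states[horizon:], u[:-horizon]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]     # least-squares linear readout
r = np.corrcoef(X @ w_out, y)[0, 1]
print(f"memory-task readout correlation at lag {horizon}: {r:.2f}")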
Naturally occurring body movements and collective neural activity both exhibit complex dynamics, often with scale-free, fractal spatiotemporal structure. Scale-free dynamics of both brain and behavior are important because each is associated with functional benefits to the organism. Despite their similarities, scale-free brain activity and scale-free behavior have been studied separately, without a unified explanation. Here, we show that scale-free dynamics of mouse behavior and of neurons in the visual cortex are strongly related. Surprisingly, the scale-free neural activity is limited to specific subsets of neurons, and these subsets engage in stochastic winner-take-all competition with other subsets. This observation is inconsistent with prevailing theories of scale-free dynamics in neural systems, which stem from the criticality hypothesis. We develop a computational model that incorporates known cell-type-specific circuit structure, explaining our findings with a new type of critical dynamics. Our results establish the neural underpinnings of scale-free behavior and the clear behavioral relevance of scale-free neural activity.

As we go about our days, how often do we fidget, compared to how frequently we make larger movements, like walking down the hall? And how rare is a trek across town compared to that same walk? Animals tend to follow a mathematical law that relates the size of movements to how often they make them. The law posits that small-to-medium and large-to-huge movements are related in the same way; that is, the law is 'scale-free', since it holds across different scales of movement. Measurements of brain activity also follow this law: the level of activation of a group of neurons relates to how often they are activated in the same way across levels of activation. Although movements and brain activity behave in mathematically similar ways, these two facts had not previously been linked. Studying mice, Jones et al. found that scale-free movements were linked to scale-free neural activity, but only in certain groups of neurons. These groups had remained hidden because they compete with other groups: when one group turns on, the competing groups turn off, so when the activity of all neurons is averaged together, the fluctuations cancel out. The results provide a better understanding of how behavior and brain activity are orchestrated in healthy organisms. In particular, they suggest that the complex, multi-scale nature of behavior may emerge from neural networks operating at a tipping point between order and disorder, at the edge of chaos.
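As a purely illustrative aside on what "scale-free" means statistically, the sketch below generates synthetic power-law distributed event sizes and recovers the exponent with a maximum-likelihood (Hill) estimator; it uses synthetic data only, not the study's behavioral or neural recordings.

import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min = 2.0, 1.0
sizes = x_min * (1 - rng.random(10_000)) ** (-1 / (alpha_true - 1))  # power-law draws

# Maximum-likelihood (Hill) estimate of the power-law exponent.
alpha_hat = 1 + len(sizes) / np.sum(np.log(sizes / x_min))
print(f"true exponent {alpha_true}, estimated {alpha_hat:.2f}")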
Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(25), Published: June 12, 2023
Reservoir computing is a machine learning paradigm that transforms the transient dynamics of high-dimensional nonlinear systems for processing time-series data. Although the paradigm was initially proposed to model information processing in the mammalian cortex, it remains unclear how nonrandom network architecture, such as the modular architecture of the cortex, integrates with the biophysics of living neurons to characterize the function of biological neuronal networks (BNNs). Here, we used optogenetics and calcium imaging to record multicellular responses of cultured BNNs and employed the reservoir computing framework to decode their computational capabilities. Micropatterned substrates were used to embed a modular architecture in the BNNs. We first show that the responses to static inputs can be classified by a linear decoder and that modularity positively correlates with classification accuracy. We then use a timer task to verify that the BNNs possess a short-term memory of several hundred milliseconds and finally show that this property can be exploited for spoken digit classification. Interestingly, BNN-based reservoirs allow categorical learning, wherein a network trained on one dataset can classify separate datasets of the same category. Such classification was not possible when the inputs were directly decoded by a linear decoder, suggesting that BNNs act as a generalization filter to improve reservoir computing performance. Our findings pave the way toward a mechanistic understanding of information representation within BNNs and build future expectations for the realization of physical reservoir computing based on BNNs.
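To clarify the linear-decoder readout step described above, the following sketch trains a least-squares linear readout on synthetic high-dimensional "reservoir states" standing in for the optically recorded BNN responses; the data, dimensions, and noise level are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_trials = 120, 200
prototypes = rng.normal(size=(2, n_units))                # two input classes
labels = rng.integers(0, 2, n_trials)
states = prototypes[labels] + 0.8 * rng.normal(size=(n_trials, n_units))

train, test = slice(0, 150), slice(150, None)
X_train = np.hstack([states[train], np.ones((150, 1))])   # add bias column
X_test = np.hstack([states[test], np.ones((n_trials - 150, 1))])
y_train = 2 * labels[train] - 1                           # targets in {-1, +1}

w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]      # linear readout weights
accuracy = np.mean((X_test @ w > 0) == (labels[test] == 1))
print(f"linear-decoder accuracy on held-out trials: {accuracy:.2f}")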
Proceedings of the National Academy of Sciences, Journal Year: 2024, Volume and Issue: 121(2), Published: Jan. 3, 2024
The ability to concisely describe the dynamical behavior of soft materials through closed-form constitutive relations holds the key to accelerated and informed design processes. The conventional approach is to construct constitutive relations through simplifying assumptions, approximating the time- and rate-dependent stress response of a complex fluid to an imposed deformation. While traditional frameworks have been foundational to our current understanding of soft materials, they often face a twofold existential limitation: i) constructed on ideal and generalized assumptions, precise recovery of material-specific details is usually serendipitous, if possible at all, and ii) the inherent biases that are involved by making those assumptions commonly come at the cost of new physical insight. This work introduces an approach that leverages recent advances in scientific machine learning methodologies to discover the governing equation from experimental data for complex fluids. Our rheology-informed neural network framework is found capable of learning the hidden rheology of a material from a limited number of experiments. This is followed by the construction of an unbiased constitutive relation that accurately describes a wide range of bulk responses of the material. The framework is extremely efficient in model discovery for a real-world system, and also provides insight into the underpinning physics.
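As a drastically simplified stand-in for the idea of recovering a closed-form constitutive relation from flow data, the sketch below fits a power-law relation sigma = K * gamma_dot^n to a synthetic flow curve by regression in log space; this is not the authors' rheology-informed neural network framework, only an illustration of the end goal, and all parameter values are made up.

import numpy as np

rng = np.random.default_rng(0)
K_true, n_true = 5.0, 0.4                        # "hidden" material parameters
gamma_dot = np.logspace(-2, 2, 15)               # imposed shear rates (1/s)
sigma = K_true * gamma_dot**n_true * (1 + 0.05 * rng.normal(size=gamma_dot.size))

# log(sigma) = log(K) + n * log(gamma_dot): a linear model in log space.
A = np.vstack([np.ones_like(gamma_dot), np.log(gamma_dot)]).T
log_K, n_hat = np.linalg.lstsq(A, np.log(sigma), rcond=None)[0]
print(f"recovered relation: sigma = {np.exp(log_K):.2f} * gamma_dot^{n_hat:.2f}")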