bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2023, Issue: unknown
Published: July 26, 2023
Abstract
Animals continuously detect information via multiple sensory channels, like vision and hearing, and integrate these signals to realise faster and more accurate decisions; a fundamental neural computation known as multisensory integration. A widespread view of this process is that multimodal neurons linearly fuse information across channels. However, does linear fusion generalise beyond the classical tasks used to explore integration? Here, we develop novel tasks, which focus on the underlying statistical relationships between channels, and deploy models at three levels of abstraction: from probabilistic ideal observers to artificial spiking networks. Using these models, we demonstrate that when the information provided by different channels is not independent, linear fusion performs sub-optimally and even fails in extreme cases. This leads us to propose a simple nonlinear algorithm for integration which is compatible with our current knowledge of multimodal circuits, excels in naturalistic settings and is optimal for a wide class of tasks. Thus, our work emphasises the role of nonlinear fusion in multisensory integration and provides testable hypotheses for the field at multiple levels: from single neurons to behaviour.
Key Points
We introduce a set of tasks based on comodulating signals and show that linear fusion performs sub-optimally on them. In contrast, nonlinear fusion excels in these settings, which resemble natural predator-prey interactions. Artificial spiking networks approximate the behaviour of this nonlinear algorithm when trained on these tasks. Finally, we show how single neuron properties allow networks to approximate nonlinear fusion.
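To make the linear vs nonlinear distinction concrete, here is a minimal hypothetical sketch in Python (not the authors' algorithm or code): two channels share a zero-mean "comodulation" only when a target is present, so a linear rule that sums the channels stays at chance, while a simple nonlinear rule that multiplies them detects the shared fluctuations. The task, parameters, and decision rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_steps = 2000, 100

def make_trial(target_present):
    # A shared zero-mean "comodulation" drives both channels only when the target is present.
    shared = rng.standard_normal(n_steps) if target_present else np.zeros(n_steps)
    x1 = shared + rng.standard_normal(n_steps)  # channel 1: shared signal + private noise
    x2 = shared + rng.standard_normal(n_steps)  # channel 2: shared signal + private noise
    return x1, x2

labels = rng.integers(0, 2, n_trials).astype(bool)
lin_dv = np.empty(n_trials)
nl_dv = np.empty(n_trials)
for i, present in enumerate(labels):
    x1, x2 = make_trial(present)
    lin_dv[i] = np.sum(x1 + x2)  # linear fusion: add channels, then accumulate
    nl_dv[i] = np.sum(x1 * x2)   # nonlinear fusion: multiply channels, then accumulate

def accuracy(dv):
    # Classify each trial by thresholding the decision variable at its median.
    return np.mean((dv > np.median(dv)) == labels)

print(f"linear fusion accuracy:    {accuracy(lin_dv):.2f}")  # near chance (~0.5)
print(f"nonlinear fusion accuracy: {accuracy(nl_dv):.2f}")   # well above chance
```

Because the comodulated signal has zero mean in both channels, summing them carries no information about target presence; only a nonlinearity that is sensitive to the correlation between channels can recover it.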
Trends in Cognitive Sciences,
Journal year: 2024, Issue: 28(4), pp. 352-368
Published: Jan. 9, 2024
To explain how the brain orchestrates information-processing for cognition, we must understand information itself. Importantly, information is not a monolithic entity. Information decomposition techniques provide a way to split information into its constituent elements: unique, redundant, and synergistic information. We review how disentangling synergistic and redundant interactions is redefining our understanding of integrative brain function and its neural organisation. We discuss how the brain navigates the trade-offs between redundancy and synergy, and review converging evidence integrating the structural, molecular, and functional underpinnings of synergy and redundancy; their roles in cognition and computation; and how they might arise over evolution and development. Overall, information decomposition provides a guiding principle for understanding the informational architecture of the brain and cognition.
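For readers unfamiliar with information decomposition, the toy sketch below (a hypothetical Python illustration, not the review's methodology) splits the information two binary sources carry about a target into redundant, unique, and synergistic parts. It uses the minimum-mutual-information redundancy measure as a simplifying assumption; other redundancy measures exist and give different numbers in general.

```python
import numpy as np
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits, given a list of (x, y) samples (each outcome equally likely)."""
    n = len(pairs)
    pxy = {k: c / n for k, c in Counter(pairs).items()}
    px = {k: c / n for k, c in Counter(x for x, _ in pairs).items()}
    py = {k: c / n for k, c in Counter(y for _, y in pairs).items()}
    return float(sum(p * np.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items()))

def pid_mmi(triples):
    """Decompose I(T; S1, S2) into redundant/unique/synergistic parts using the
    minimum-mutual-information (MMI) redundancy measure (an assumption)."""
    i1 = mutual_info([(s1, t) for s1, s2, t in triples])
    i2 = mutual_info([(s2, t) for s1, s2, t in triples])
    i12 = mutual_info([((s1, s2), t) for s1, s2, t in triples])
    red = min(i1, i2)
    return {"redundant": red, "unique_1": i1 - red,
            "unique_2": i2 - red, "synergistic": i12 - i1 - i2 + red}

# XOR target: neither source alone is informative, together they are -> pure synergy.
xor = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
# Duplicated target: each source fully predicts T on its own -> pure redundancy.
dup = [(a, a, a) for a in (0, 1)]
print("XOR:", pid_mmi(xor))  # ~1 bit synergistic
print("DUP:", pid_mmi(dup))  # ~1 bit redundant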
Journal of Cognitive Neuroscience,
Journal year: 2022, Issue: 35(3), pp. 349-360
Published: Aug. 25, 2022
Abstract
The Entangled Brain (Pessoa, L., 2022, MIT Press) promotes the idea that we need to understand the brain as a complex, entangled system. Why does a complex systems perspective, one that entails emergent properties, matter for brain science? In fact, many neuroscientists consider these ideas a distraction. We discuss three principles of brain organization that inform the question of the interactional complexity of the brain: (1) massive combinatorial anatomical connectivity; (2) highly distributed functional coordination; and (3) networks/circuits as functional units. To motivate the challenges of mapping structure to function, we discuss neural circuits illustrating the high interactional complexity typical in the brain. We describe potential avenues for testing network-level functions, including those relying on distributed computations across multiple regions. We discuss the implications for brain science, which needs to characterize the decentralized and heterarchical anatomical–functional organization of the brain. The view advocated here has important implications for causation, too, because traditional accounts of causality provide poor candidates for explanation in interactionally complex systems like the brain, given the distributed, mutual, and reciprocal nature of the interactions. Ultimately, to make progress in understanding how the brain supports mental functions, we need to dissolve boundaries within the brain (those suggested to be associated with perception, cognition, action, emotion, and motivation) as well as outside the brain, and bring down the walls between biology, psychology, mathematics, computer science, philosophy, and so on.
Scientific Reports,
Journal year: 2024, Issue: 14(1)
Published: Jan. 24, 2024
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures the time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neural networks revealed that recorded activity may not be generally informative of a unit's causal contribution, due to activity transformations within the network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal contributions.
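To illustrate what an exhaustive multi-site lesioning analysis involves, the sketch below (a hypothetical Python toy, not the authors' framework) computes Shapley values for a four-unit system by silencing every coalition of units and averaging each unit's marginal effect on the output. The toy outcome function and its interaction terms are illustrative assumptions.

```python
import itertools
from math import factorial

# Toy "network": four units; the outcome depends nonlinearly on which units are active.
# Unit 2 only matters together with unit 0 (an interaction single-site lesions can miss).
def outcome(active):
    a = set(active)
    out = 1.0 if 0 in a else 0.0
    out += 0.5 if 1 in a else 0.0
    out += 2.0 if {0, 2} <= a else 0.0
    return out

units = [0, 1, 2, 3]
n = len(units)

def shapley(unit):
    """Average marginal contribution of `unit` over all coalitions of the other units."""
    others = [u for u in units if u != unit]
    value = 0.0
    for r in range(len(others) + 1):
        for coalition in itertools.combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            value += weight * (outcome(coalition + (unit,)) - outcome(coalition))
    return value

for u in units:
    print(f"unit {u}: Shapley contribution = {shapley(u):.2f}")
# Unit 3 contributes nothing; units 0 and 2 share the credit for their joint interaction,
# and the contributions sum to the full-network outcome (3.5).
```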
bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2024, Issue: unknown
Published: Feb. 6, 2024
Abstract
Inferring and understanding the underlying connectivity structure of a system solely from the observed activity of its constituent components is a challenge in many areas of science. In neuroscience, techniques for estimating connectivity are paramount when attempting to understand the network structure of neural systems from their recorded activity patterns. To date, no universally accepted method exists for the inference of effective connectivity, which describes how one node mechanistically affects other nodes. Here, focussing on purely excitatory networks of small to intermediate size with continuous dynamics, we provide a systematic comparison of different approaches for estimating effective connectivity. Starting with the Hopf neuron model in conjunction with known ground truth structural connectivity, we reconstruct the system’s connectivity matrix using a variety of algorithms. We show that, for sparse non-linear networks with delays, combining the lagged-cross-correlation (LCC) approach with a recently published derivative-based covariance analysis provides the most reliable estimation of the connectivity matrix. We also show that for linear networks, LCC has comparable performance to a method based on transfer entropy, at a drastically lower computational cost. We highlight that LCC works best for small, sparse networks and that its performance decreases for larger and less sparse networks. Applying it to dynamics without time delays, we find that it does not outperform the other methods. Employing the Hopf model, we then use the estimated connectivity matrices as the basis for a forward simulation in order to recreate the observed activity. Under certain conditions, the simplest method, LCC, results in higher trace-to-trace correlations than the other methods in noise-driven systems. Finally, we apply the LCC approach to empirical biological data, using a subset of the nervous system of the nematode C. elegans. This computationally simple method performs better than another recently published, computationally more expensive reservoir computing-based method. Our work shows that a comparatively simple method can be used to reliably estimate directed effective connectivity in the presence of spatio-temporal delays and noise, and we provide concrete suggestions for the scenario, common in neuroscience research, where only the activity of a subset of neurons of a larger neuronal network is known.
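As a rough illustration of the lagged-cross-correlation idea (a hypothetical sketch, not the authors' implementation), the snippet below scores a directed link from node i to node j by the largest normalized correlation between node i's past and node j's future over a small range of lags. The toy coupled time series and lag range are assumptions for the demo.

```python
import numpy as np

def lagged_cross_correlation(x, max_lag):
    """Directed connectivity estimate from multivariate time series.

    x: array of shape (n_nodes, n_timepoints).
    Returns C where C[i, j] is the maximum |Pearson correlation| between node i at
    time t and node j at time t + lag, over lags 1..max_lag, i.e., evidence that
    i's past predicts j's future.
    """
    n_nodes, n_t = x.shape
    z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    C = np.zeros((n_nodes, n_nodes))
    for lag in range(1, max_lag + 1):
        past, future = z[:, :n_t - lag], z[:, lag:]
        corr = past @ future.T / (n_t - lag)  # corr[i, j] = <z_i(t) * z_j(t + lag)>
        C = np.maximum(C, np.abs(corr))
    np.fill_diagonal(C, 0.0)
    return C

# Toy demo: node 0 drives node 1 with a delay of 3 samples.
rng = np.random.default_rng(1)
T = 5000
x = rng.standard_normal((3, T))
x[1, 3:] += 0.8 * x[0, :-3]  # directed, delayed coupling 0 -> 1
print(np.round(lagged_cross_correlation(x, max_lag=5), 2))  # entry [0, 1] stands out
```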
bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2025, Issue: unknown
Published: Feb. 23, 2025
Abstract
Massive interconnectivity in large-scale neural networks is the key feature underlying their powerful and complex functionality. We have developed hybrid neural network (HNN) models that allow us to find the statistical structure of this connectivity. Describing this structure is critical for understanding both biological and artificial networks. The HNNs are composed of model neurons, a subset of which is trained to reproduce the responses of individual neurons recorded experimentally. The experimentally observed firing rates came from populations of neurons in the motor cortices of monkeys performing a reaching task. After training, these (recurrent spiking) networks underwent the same state transitions as those seen in the empirical data, a result that helps resolve the long-standing question of prescribed vs ongoing control of volitional movement. Because all aspects of the model are exposed, we were able to analyze the dynamic statistics of the connections between neurons. Our results show how the dynamics of the extrinsic input and the changed connectivity cause the state transitions. Two processes at the synaptic level were recognized: one in which many different connections contributed to the buildup of membrane potential, and another, more specific one that triggered an action potential. These results facilitate the modeling of realistic neuron-neuron connectivity and provide foundational descriptions of its statistical structure.
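The hybrid idea of clamping part of a model network to recorded activity can be sketched very roughly as follows (a hypothetical Python toy with synthetic "recordings", not the authors' spiking HNN): observed units are driven by the data, hidden units evolve as a fixed random recurrent pool, and the weights into the observed subset are fit by ridge regression so the network reproduces the recorded traces one step ahead. Network sizes, the tanh rate units, and the fitting procedure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_hid, T, dt = 5, 45, 400, 0.01  # observed units, hidden units, time steps

# "Recorded" firing rates for the observed subset (here: synthetic smooth traces).
t = np.arange(T) * dt
r_obs = 0.5 + 0.4 * np.sin(2 * np.pi * (1 + np.arange(n_obs)[:, None]) * t)

# Hidden units form a random recurrent pool driven by the observed (clamped) units.
W_hh = rng.normal(0, 1 / np.sqrt(n_hid), (n_hid, n_hid))
W_ho = rng.normal(0, 1 / np.sqrt(n_obs), (n_hid, n_obs))
h = np.zeros((n_hid, T))
for k in range(1, T):
    h[:, k] = np.tanh(W_hh @ h[:, k - 1] + W_ho @ r_obs[:, k - 1])

# Fit the weights into the observed units by ridge regression (teacher forcing):
# predict each observed unit's next rate from the full population state.
state = np.vstack([r_obs[:, :-1], h[:, :-1]])  # (n_obs + n_hid, T - 1)
target = r_obs[:, 1:]                          # (n_obs, T - 1)
lam = 1e-3
W_out = target @ state.T @ np.linalg.inv(state @ state.T + lam * np.eye(n_obs + n_hid))

pred = W_out @ state
print("one-step fit RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```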
Communication in brain networks is the foundation of cognitive function and behavior. A multitude of evolutionary pressures, including the minimization of metabolic costs while maximizing communication efficiency, contribute to shaping the structure and dynamics of these networks. However, how communication efficiency is characterized depends on the assumed model of communication dynamics. Traditional models include shortest path signaling, random walker navigation, broadcasting, and diffusive processes. Yet, a general, model-agnostic framework for characterizing optimal neural communication remains to be established. Our study addresses this challenge by assigning influence through game theory, based on a combination of structural data from the human cortex with computational models of communication dynamics. We quantified the exact influence exerted by each node over every other node using an exhaustive multi-site virtual lesioning scheme, creating optimal influence maps for the various models. These descriptions show how communication patterns unfold in a given network if regions maximize their influence over one another. By comparing these maps with a large variety of communication models, we found that optimal communication most closely resembles broadcasting-like models, which leverage multiple parallel channels for information dissemination. Moreover, the most influential regions within the cortex are formed by its rich-club. These regions exploit their topological vantage point by broadcasting across numerous pathways, thereby significantly enhancing their effective reach even when their anatomical connections are weak. Our work provides a rigorous and versatile framework and reveals the topological features underlying optimal brain communication.
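To make the contrast between candidate signaling models concrete, here is a small hypothetical Python sketch (not the study's pipeline) that computes two of the measures named above on a toy weighted connectome: shortest-path efficiency, which assumes routing along single optimal paths, and a broadcasting-flavored communicability measure, which counts walks along all parallel routes. The toy adjacency matrix is an assumption.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.csgraph import dijkstra

# Toy weighted adjacency matrix (symmetric, 5 regions); weights = connection strengths.
A = np.array([
    [0, 2, 0, 0, 1],
    [2, 0, 3, 0, 0],
    [0, 3, 0, 1, 0],
    [0, 0, 1, 0, 2],
    [1, 0, 0, 2, 0],
], dtype=float)

# Shortest-path model: convert strengths to lengths (stronger = shorter), then Dijkstra.
lengths = np.zeros_like(A)
lengths[A > 0] = 1.0 / A[A > 0]
D = dijkstra(lengths, directed=False)
efficiency = np.zeros_like(D)
efficiency[D > 0] = 1.0 / D[D > 0]  # pairwise routing efficiency

# Broadcasting-like model: communicability counts walks of all lengths in parallel,
# down-weighted by length (matrix exponential of the adjacency matrix).
communicability = expm(A)

print("shortest-path efficiency:\n", np.round(efficiency, 2))
print("communicability:\n", np.round(communicability, 2))
```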
Efficient communication in brain networks is foundational for cognitive function and behavior. However, how communication efficiency is defined depends on the assumed model of signaling dynamics, e.g., shortest path signaling, random walker navigation, broadcasting, and diffusive processes. Thus, a general, model-agnostic framework for characterizing optimal neural communication is needed. We address this challenge by assigning influence through a virtual multi-site lesioning regime combined with game theory, applied to large-scale models of human brain dynamics. Our framework quantifies the exact influence each node exerts over every other, generating optimal influence maps for the given underlying network. These descriptions reveal how communication patterns unfold if regions are set to maximize their influence over one another. Comparing these maps with a large variety of communication models showed that optimal communication most closely resembles broadcasting-like regimes, which leverage multiple parallel channels for information dissemination. Moreover, we found that the brain’s most influential regions form its rich-club, exploiting their topological vantage point across numerous pathways to enhance their reach even when anatomical connections are weak. Altogether, our work provides a rigorous and versatile framework for characterizing optimal brain communication, and uncovers the most influential regions and the topological features underlying their influence.
2022 International Joint Conference on Neural Networks (IJCNN),
Journal year: 2022, Issue: unknown, pp. 1-8
Published: July 18, 2022
Echo State Networks (ESN) are versatile recurrent neural network models in which the hidden layer remains unaltered during training. Interactions among the nodes of this static backbone (the structure) produce diverse representations (i.e., dynamics) of given stimuli that are harnessed by a read-out mechanism to perform the computations needed for solving a task (i.e., behavior). Moreover, ESNs are accessible models of neuronal circuits, since they are relatively inexpensive to train. Therefore, they have become attractive to neuroscientists studying the relationship between structure, function, and behavior. For instance, it is not yet clear how the distinctive connectivity patterns of brain networks (structure) support effective interactions among their nodes (dynamics), and how these give rise to computation (behavior). To address this question, we employed an ESN with a biologically inspired structure and used a systematic multi-site lesioning framework to quantify the causal contribution of each node to the network's output, thus providing a link between structure and behavior. We then focused on the structure-function relationship and decomposed the influence of each node on all other nodes, using the same lesioning framework. We found that nodes in a properly engineered ESN interact largely irrespective of the underlying structure. However, when the ESN's dynamics are diminished, for example where the leakage rate is non-optimal, the underlying topology comes to determine the interactions among nodes. Our results suggest that structure-function relations in ESNs can be decomposed into two components: direct and indirect interactions. The former is based on influences relying on direct structural connections, while the latter describes communication between any two nodes through intermediate nodes. These widely distributed indirect interactions may crucially contribute to the efficient performance of ESNs.
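Since the abstract leans on how ESNs work, here is a minimal self-contained sketch of a vanilla ESN in Python/NumPy (a generic illustration, not the biologically structured network or the lesioning framework used in the study): a fixed random reservoir of leaky-integrator units, with only a ridge-regression read-out trained on a toy prediction task. All sizes, the spectral radius, and the leakage rate are assumed demo values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 2000
leak, rho, ridge = 0.3, 0.9, 1e-6

# Fixed (untrained) reservoir: random recurrent weights rescaled to spectral radius rho.
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, (n_res, n_in))

# Toy task: predict a noisy sine input 5 steps ahead.
u = np.sin(0.1 * np.arange(T))[:, None] + 0.05 * rng.standard_normal((T, 1))
target = np.roll(u, -5, axis=0)

# Run the reservoir (leaky-integrator units); only the read-out will be trained.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Train the linear read-out with ridge regression on the first half, test on the second
# (the last 5 wrapped-around samples are discarded).
split = T // 2
S, y = states[:split], target[:split]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
pred = states[split:-5] @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - target[split:-5]) ** 2)))
```

In this setup the recurrent weights W are the static "structure"; lesioning reservoir nodes and re-evaluating the read-out error is the kind of structure-to-behavior probe the abstract describes.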