PLoS Computational Biology, 2024, 20(1), P. e1011792 - e1011792. Published: Jan. 10, 2024.
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower-dimensional geometries. Surprisingly, we found a strong trend in the opposite direction: neural networks with high-dimensional image representations tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher-dimensional models are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high dimensionality confers computational benefits to models of visual cortex.
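The abstract describes quantifying the latent dimensionality of DNN image representations but does not name the estimator; the sketch below uses one common choice, the participation ratio (effective dimensionality) of the covariance eigenspectrum, computed from a stimuli-by-units activation matrix. This is a minimal illustration under that assumption; the function and variable names are not taken from the paper's code.

```python
import numpy as np

def effective_dimensionality(activations: np.ndarray) -> float:
    """Participation ratio of the eigenspectrum of the activation covariance.

    activations: array of shape (n_stimuli, n_units), e.g. DNN layer responses
    to a set of natural images. Returns a value between 1 and n_units.
    """
    centered = activations - activations.mean(axis=0, keepdims=True)
    # Covariance eigenvalues are the squared singular values / (n_stimuli - 1).
    singular_values = np.linalg.svd(centered, compute_uv=False)
    eigvals = singular_values ** 2 / (centered.shape[0] - 1)
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)

# Illustrative usage with random data standing in for real DNN activations.
rng = np.random.default_rng(0)
low_dim = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 1000))  # ~5 latent dims
high_dim = rng.normal(size=(500, 1000))                           # nearly full rank
print(effective_dimensionality(low_dim), effective_dimensionality(high_dim))
```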
Journal of Neuroscience, 2023, 43(10), P. 1731 - 1741. Published: Feb. 9, 2023.
Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on visuo-semantic models, consisting of human-generated labels of object features and categories, and test their explanatory power beyond that of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream representations unfold over space and time. Although lower-level visual representations are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level representations are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a component of the variance unexplained by DNNs is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture the dynamic representations in visual cortex and suggest a path toward more accurate models of these computations.

SIGNIFICANCE STATEMENT When we view objects such as faces and cars in our environment, their neural representations unfold dynamically at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition. DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', account for variance in the neural dynamics over and above DNNs. These findings suggest that humans and DNNs may in part rely on different object features for recognition and provide guidelines for model improvement.
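The abstract compares how well DNN features and visuo-semantic labels explain source-reconstructed MEG dynamics, but it does not spell out the analysis pipeline. The sketch below assumes a generic representational-similarity-style comparison, correlating a model representational dissimilarity matrix (RDM) with neural RDMs computed at each time point; all names and the synthetic data are illustrative, not the authors' actual method.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def time_resolved_rdm_correlation(meg_patterns, model_rdm):
    """Correlate a model RDM with neural RDMs computed at each time point.

    meg_patterns: (n_timepoints, n_stimuli, n_sources) source-space patterns
    model_rdm:    condensed RDM over the same stimuli, e.g. built from DNN
                  features or from binary visuo-semantic labels.
    Returns one Spearman correlation per time point.
    """
    scores = []
    for patterns in meg_patterns:                    # loop over time points
        neural_rdm = pdist(patterns, metric="correlation")
        scores.append(spearmanr(neural_rdm, model_rdm).correlation)
    return np.array(scores)

# Illustrative usage with synthetic stand-ins for MEG data and model features.
rng = np.random.default_rng(1)
meg = rng.normal(size=(120, 30, 64))     # 120 time points, 30 stimuli, 64 sources
dnn_features = rng.normal(size=(30, 512))
dnn_rdm = pdist(dnn_features, metric="correlation")
print(time_resolved_rdm_correlation(meg, dnn_rdm).shape)  # (120,)
```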
Journal of Vision, 2023, 23(7), P. 4 - 4. Published: July 6, 2023.
In laboratory object recognition tasks based on undistorted photographs, both adult humans and deep neural networks (DNNs) perform close to ceiling. Unlike adults', whose performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images. However, the last 2 years have seen impressive gains in DNN distortion robustness, predominantly achieved through ever-increasing large-scale datasets, orders of magnitude larger than ImageNet. Although this simple brute-force approach is very effective in achieving human-level robustness in DNNs, it raises the question of whether human robustness, too, is simply due to extensive experience with (distorted) visual input during childhood and beyond. Here we investigate this question by comparing the core object recognition performance of 146 children (aged 4–15 years) against adults and DNNs. We find, first, that already 4- to 6-year-olds show remarkable robustness to image distortions and outperform DNNs trained on ImageNet. Second, we estimated the number of images children had been exposed to during their lifetime. Compared with various DNNs, children's high robustness requires relatively little data. Third, when recognizing objects, children, like adults but unlike DNNs, rely heavily on shape and not on texture cues. Together our results suggest that robustness emerges early in the developmental trajectory and is unlikely the result of a mere accumulation of visual input. Even though current DNNs match humans regarding robustness, they seem to rely on different and more data-hungry strategies to do so.
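The abstract reports that children, like adults, rely on shape rather than texture cues. A standard way to quantify this is a shape-bias score from cue-conflict trials (images whose shape and texture come from different categories); the sketch below is a minimal illustration of that score, with hypothetical class names and data, not the study's actual analysis code.

```python
from dataclasses import dataclass

@dataclass
class CueConflictTrial:
    """One response to a cue-conflict image, e.g. a cat shape with elephant texture."""
    shape_category: str
    texture_category: str
    response: str

def shape_bias(trials: list[CueConflictTrial]) -> float:
    """Fraction of shape-consistent choices among trials decided by either cue.

    Values near 1.0 indicate shape-based recognition (human-like);
    values near 0.0 indicate texture-based recognition.
    """
    shape_choices = sum(t.response == t.shape_category for t in trials)
    texture_choices = sum(t.response == t.texture_category for t in trials)
    decided = shape_choices + texture_choices
    return shape_choices / decided if decided else float("nan")

# Illustrative usage with made-up trials.
trials = [
    CueConflictTrial("cat", "elephant", "cat"),
    CueConflictTrial("car", "clock", "car"),
    CueConflictTrial("bird", "knife", "knife"),
]
print(shape_bias(trials))  # 2 shape choices out of 3 decided trials ≈ 0.67
```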
Imaging Neuroscience, 2024, 2, P. 1 - 35. Published: April 1, 2024.
Abstract
In recent years, brain research has indisputably entered a new epoch, driven by substantial methodological advances and digitally enabled data integration and modelling at multiple scales, from molecules to the whole brain. Major advances are emerging at the intersection of neuroscience with technology and computing. This new science of the brain combines high-quality research, data integration across multiple scales, a new culture of multidisciplinary large-scale collaboration, and translation into applications. As pioneered in Europe's Human Brain Project (HBP), a systematic approach will be essential for meeting the coming decade's pressing medical and technological challenges. The aims of this paper are to: develop a concept for the coming decade of digital brain research; discuss this concept with the research community at large, identify points of convergence, and derive therefrom scientific common goals; provide a framework for the current and future development of EBRAINS, the research infrastructure resulting from the HBP's work; inform and engage stakeholders, funding organisations and research institutions regarding future digital brain research; address the transformational potential of comprehensive brain models for artificial intelligence, including machine learning and deep learning; and outline a collaborative approach that integrates reflection, dialogues, and societal engagement on ethical opportunities and challenges as part of future brain research.
Cell, 2024, 187(7), P. 1745 - 1761.e19. Published: March 1, 2024.
Proprioception tells the brain the state of the body based on distributed sensory neurons. Yet, the principles that govern proprioceptive processing are poorly understood. Here, we employ a task-driven modeling approach to investigate the neural code of proprioceptive neurons in the cuneate nucleus (CN) and somatosensory cortex area 2 (S1). We simulated muscle spindle signals through musculoskeletal modeling and generated a large-scale movement repertoire to train neural networks based on 16 hypotheses, each representing a different computational goal. We found that the emerging, task-optimized internal representations generalize from synthetic data to predict neural dynamics in CN and S1 of primates. Computational tasks that aim to predict limb position and velocity were best at predicting the neural activity in both areas. Since task optimization develops representations that better predict neural activity during active than passive movements, we postulate that proprioceptive activity in CN and S1 is top-down modulated during goal-directed movements.
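The abstract states that task-optimized internal representations predict CN and S1 dynamics, but it does not specify how model features are mapped onto recorded activity. The sketch below assumes a simple cross-validated ridge regression from network features to firing rates, a common choice for such comparisons; the function name and the synthetic data are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neural_predictivity(model_activations, firing_rates, alpha=1.0, seed=0):
    """Score how well task-optimized model features linearly predict neural activity.

    model_activations: (n_samples, n_features) internal representations of a network
                       trained on simulated muscle spindle inputs.
    firing_rates:      (n_samples, n_neurons) recorded activity (e.g. CN or S1).
    Returns the average R^2 across neurons on a held-out split.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        model_activations, firing_rates, test_size=0.2, random_state=seed
    )
    mapping = Ridge(alpha=alpha).fit(X_train, y_train)
    return mapping.score(X_test, y_test)

# Illustrative usage with synthetic stand-ins for model features and recordings.
rng = np.random.default_rng(2)
features = rng.normal(size=(2000, 128))
weights = rng.normal(size=(128, 40))
rates = features @ weights + 0.5 * rng.normal(size=(2000, 40))
print(neural_predictivity(features, rates))
```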