Frontiers in Neuroinformatics,
Journal year: 2025,
Issue: 19
Published: May 6, 2025
In cognitive neuroscience, the integration of deep neural networks (DNNs) with traditional neuroscientific analyses has significantly advanced our understanding of both biological processes and the functioning of DNNs. However, challenges remain in effectively comparing the representational spaces of artificial models and brain data, particularly due to the growing variety and specific demands of neuroimaging research. To address these challenges, we present Net2Brain, a Python-based toolbox that provides an end-to-end pipeline for incorporating DNNs into neuroscience research, encompassing dataset download, a large selection of models, feature extraction, evaluation, and visualization. Net2Brain offers functionalities in four key areas. First, it offers access to over 600 models trained on diverse tasks across multiple modalities, including vision, language, audio, and multimodal models, organized through a carefully structured taxonomy. Second, it provides a streamlined API for downloading and handling popular datasets, such as the NSD and the THINGS dataset, allowing researchers to easily access the corresponding data. Third, it facilitates a wide range of analysis options, including representational similarity analysis (RSA) and linear encoding, while also supporting techniques like variance partitioning and searchlight analysis. Finally, it integrates seamlessly with other established open-source libraries, enhancing interoperability and promoting collaborative research. By simplifying model selection, data processing, and analysis, Net2Brain empowers researchers to conduct more robust, flexible, and reproducible investigations of the relationships between model and brain representations.
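Among the analyses named above, RSA is the most compact to illustrate. The following minimal sketch (plain Python with invented toy data, deliberately independent of Net2Brain's actual API) builds representational dissimilarity matrices (RDMs) for a model layer and a brain region, then correlates their upper triangles:

```python
from itertools import combinations

def pearson(x, y):
    # Pearson correlation between two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(responses):
    # RDM: 1 - correlation between the response vectors
    # of every pair of stimuli.
    n = len(responses)
    m = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        d = 1.0 - pearson(responses[i], responses[j])
        m[i][j] = m[j][i] = d
    return m

def upper_triangle(m):
    return [m[i][j] for i, j in combinations(range(len(m)), 2)]

# Toy data: responses of 4 "stimuli" in a model layer and a brain ROI.
model = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]
brain = [[2, 0, 1], [1.8, 0.2, 1], [0, 2, 1], [0, 1, 2]]

# RSA score: second-order correlation between the two RDMs.
similarity = pearson(upper_triangle(rdm(model)), upper_triangle(rdm(brain)))
print(round(similarity, 3))
```

The key idea is that model and brain are never compared in their native spaces, only through the geometry of their stimulus-by-stimulus dissimilarities, which sidesteps the dimensionality mismatch between DNN features and voxel responses.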
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the insights promised by each individual dataset, its multimodality allows for combining datasets to yield a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
iScience,
Journal year: 2025,
Issue: 28(3), pp. 112029 - 112029
Published: Feb. 15, 2025
Whether one person's subjective experience of the "redness" of red is equivalent to another's is a fundamental question in consciousness studies. Intersubjective comparison of the relational structures of sensory experiences, termed "qualia structures", can constrain the question. We propose an unsupervised alignment method, based on optimal transport, to find the mapping between the similarity structures of sensory experiences without presupposing correspondences (such as "red-to-red"). After collecting similarity judgments for 93 colors, we showed that the similarity structures derived from color-neurotypical participants can be "correctly" aligned at the group level. In contrast, those of color-blind participants could not be aligned with those of color-neurotypical participants. Our results provide quantitative evidence for interindividual structural equivalence or difference of color qualia, implying that a color-neurotypical person's "red" is relationally equivalent to another color-neurotypical's "red", but not to a color-blind person's "red". This method is applicable across modalities, enabling a general exploration of the structural equivalence of sensory experiences.
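The core move of the method is to align two similarity structures purely from their internal geometry. A Gromov-Wasserstein solver does this over soft couplings; the brute-force toy below (pure Python, invented data, not the authors' implementation) captures the same objective for a hard assignment on four items:

```python
from itertools import permutations

# Toy "qualia structures": pairwise dissimilarity matrices for 4 items
# from two observers. Observer B has the same relational structure but
# lists the items in a shuffled (unknown) order.
A = [[0, 1, 4, 9],
     [1, 0, 3, 8],
     [4, 3, 0, 5],
     [9, 8, 5, 0]]

order = [2, 0, 3, 1]  # hidden shuffling of B's items
B = [[A[i][j] for j in order] for i in order]

def distortion(perm):
    # Gromov-Wasserstein-style objective for a hard assignment:
    # sum of squared differences between corresponding dissimilarities.
    return sum((A[i][j] - B[perm[i]][perm[j]]) ** 2
               for i in range(4) for j in range(4))

# Unsupervised alignment: find the mapping that best preserves the
# relational structure, with no label correspondences assumed.
best = min(permutations(range(4)), key=distortion)
print(best, distortion(best))  # prints (1, 3, 0, 2) 0: hidden order recovered
```

Because all pairwise dissimilarities in the toy matrix are distinct, only the true inverse of the hidden shuffling reaches zero distortion; real GWOT replaces this factorial search with an optimal-transport relaxation that scales to the 93-color case.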
The
human
ventral
visual
stream
has
a
highly
systematic
organization
of
object
information,
but
the
causal
pressures
driving
these
topographic
motifs
are
debated.
Here, we use self-organizing principles to learn a smooth representation of the data manifold of a deep neural network representational space. We find that a smooth mapping of this space showed many brain-like topographic motifs, with a large-scale organization by animacy and real-world size, supported by mid-level feature tuning, and with naturally emerging face- and scene-selective regions. While some theories of object-selective cortex posit that the differently tuned regions of the brain reflect a collection of distinctly specified functional modules, the present work provides computational support for an alternate hypothesis: that tuning and topography reflect a smooth mapping of a unified representational space.
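The self-organizing principles invoked here belong to the family of Kohonen maps. As a toy illustration (not the authors' model; all data invented), a minimal one-dimensional Kohonen map shows how purely local neighborhood updates can produce a smooth topographic layout of a feature dimension:

```python
import math
import random

# Toy 1-D Kohonen self-organizing map: map scalar "features" onto a
# line of units so that nearby units become tuned to similar values.
random.seed(0)
units = [random.random() for _ in range(10)]  # initial random tunings

def train(units, steps=2000, lr=0.2, sigma=2.0):
    for _ in range(steps):
        x = random.random()  # sample a feature value from the "world"
        # Winner: the unit currently tuned closest to the input.
        w = min(range(len(units)), key=lambda i: abs(units[i] - x))
        for i in range(len(units)):
            # Gaussian neighborhood: units near the winner on the map
            # are pulled toward the input more strongly.
            h = math.exp(-((i - w) ** 2) / (2 * sigma ** 2))
            units[i] += lr * h * (x - units[i])
    return units

train(units)
# Nearby units typically end up with similar tunings, i.e., a smooth
# topography over the feature dimension emerges without any module
# being specified in advance.
print([round(u, 2) for u in units])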
Behavior Research Methods,
Journal year: 2023,
Issue: 56(3), pp. 1583 - 1603
Published: April 24, 2023
To study visual and semantic object representations, the need for well-curated object concepts and images has grown significantly over the past years. To address this, we have previously developed THINGS, a large-scale database of 1854 systematically sampled concepts with 26,107 high-quality naturalistic images of these concepts. With THINGSplus, we extend THINGS by adding concept- and image-specific norms and metadata for all concepts and one copyright-free image example per concept. Concept-specific norms were collected for the properties of real-world size, manmadeness, preciousness, liveliness, heaviness, naturalness, ability to move or be moved, graspability, holdability, pleasantness, and arousal. Further, we provide 53 superordinate categories as well as typicality ratings for their members. Image-specific metadata includes a nameability measure, based on human-generated labels of the objects depicted in the images. Finally, we identified one new public domain image example per concept. Property (M = 0.97, SD = 0.03) and typicality ratings (M = 0.97, SD = 0.01) demonstrate excellent consistency, with the subsequently collected arousal ratings as the only exception (r = 0.69). Our property (M = 0.85, SD = 0.11) and typicality (r = 0.72, 0.74, 0.88) data correlated strongly with external norms, again with the lowest validity for arousal (M = 0.41, SD = 0.08). To summarize, THINGSplus provides a large-scale, externally validated extension to the existing THINGS database, an important step allowing detailed selection of stimuli and control variables for a wide range of research interested in object processing, language, and memory.
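To make concrete what the consistency figures above quantify, the sketch below (simulated data; not the authors' exact procedure) estimates a property norm's reliability as the correlation between mean ratings from two random halves of the raters:

```python
import random

# Simulated rating study: each rater gives a noisy rating of each
# item's underlying property value.
random.seed(1)
n_items, n_raters = 20, 10
truth = [random.uniform(0, 1) for _ in range(n_items)]
ratings = [[t + random.gauss(0, 0.15) for t in truth]
           for _ in range(n_raters)]

def mean_over(raters, item):
    return sum(ratings[r][item] for r in raters) / len(raters)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Split-half consistency: correlate item means from two rater halves.
half_a, half_b = list(range(5)), list(range(5, 10))
consistency = pearson([mean_over(half_a, i) for i in range(n_items)],
                      [mean_over(half_b, i) for i in range(n_items)])
print(round(consistency, 2))  # high values indicate a reliable norm
```

A norm like arousal, where raters genuinely disagree, would show up in exactly this statistic as the low outlier the abstract reports.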
Proceedings of the National Academy of Sciences,
Journal year: 2024,
Issue: 121(17)
Published: April 18, 2024
Functional neuroimaging studies indicate that the human brain can represent concepts and their relational structure in memory using coding schemes typical of spatial navigation. However, whether we can read out the internal representational geometries of conceptual spaces solely from behavior remains unclear. Here, we report that the relational structure between concepts in memory might be reflected in spontaneous eye movements during verbal fluency tasks: when we asked participants to randomly generate numbers, eye movements correlated with distances along the left-to-right one-dimensional geometry of the number space (the mental number line), while they scaled with distances along the ring-like two-dimensional geometry of the color space (the color wheel) when participants generated color names. Moreover, when they produced animal names, eye movements correlated with distances in a low-dimensional similarity space as well as with word frequencies. These results suggest that the representational geometries used to internally organize conceptual spaces are reflected in gaze behavior.
Scientific Reports,
Journal year: 2024,
Issue: 14(1)
Published: July 10, 2024
Large Language Models (LLMs), such as the Generative Pre-trained Transformer (GPT), have shown remarkable performance in various cognitive tasks. However, it remains unclear whether these models have the ability to accurately infer human perceptual representations. Previous research has addressed this question by quantifying correlations between similarity response patterns of humans and LLMs. Correlation provides a measure of similarity, but it relies on pre-defined item labels and does not distinguish between category- and item-level similarity, falling short of characterizing the detailed structural correspondence between humans and LLMs. To assess their equivalence in more detail, we propose the use of an unsupervised alignment method based on Gromov-Wasserstein optimal transport (GWOT). GWOT allows for the comparison of similarity structures without relying on label correspondences and can reveal fine-grained similarities and differences that may not be detected by simple correlation analysis. Using a large dataset of similarity judgments of 93 colors, we compared the color similarity structures of humans (color-neurotypical and color-atypical participants) and two GPT models (GPT-3.5 and GPT-4). Our results show that the similarity structure of color-neurotypical participants is remarkably well aligned with that of GPT-4 and, to a lesser extent, that of GPT-3.5. These results contribute to methodological advancements in comparing LLMs with human perception, and highlight the potential of unsupervised alignment methods to reveal detailed structural correspondences.
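To see why alignment is more informative than correlation, consider the coupling matrix a GWOT solver returns (such solvers exist, e.g., in the POT library; the matrix below is invented for illustration). An item-level "matching rate" counts how often an item's transported mass concentrates on its true counterpart, a fine-grained diagnostic that a single correlation coefficient cannot provide:

```python
# Hypothetical GWOT output: T[i][j] is the mass transported from item i
# of one similarity structure to item j of the other (rows sum to ~1/4).
T = [[0.20, 0.02, 0.02, 0.01],
     [0.01, 0.22, 0.01, 0.01],
     [0.02, 0.01, 0.05, 0.17],   # item 2 mostly mapped to item 3
     [0.02, 0.00, 0.17, 0.06]]   # item 3 mostly mapped to item 2

def matching_rate(T):
    # Fraction of items whose row-argmax lands on the true counterpart
    # (the diagonal), i.e., items aligned correctly without labels.
    n = len(T)
    hits = sum(1 for i in range(n)
               if max(range(n), key=lambda j: T[i][j]) == i)
    return hits / n

print(matching_rate(T))  # prints 0.5: two of four items aligned correctly
```

Two structures can correlate strongly overall yet swap individual items, as rows 2 and 3 do here; the matching rate exposes exactly which items correspond and which do not.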
Nature Communications,
Journal year: 2024,
Issue: 15(1)
Published: July 24, 2024
Abstract
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten subjects. We use the videos' metadata to show how the brain represents word- and sentence-level descriptions of the videos and to identify correlates of memorability scores extending into parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the neural basis of visual event perception.