bioRxiv (Cold Spring Harbor Laboratory), Journal Year: 2023, Volume and Issue: unknown. Published: Sept. 9, 2023
Abstract
Our visual world consists of an immense number of unique objects, and yet we are easily able to identify, distinguish, interact with, and reason about the things we see within a few hundred milliseconds. This requires that we integrate and focus on a wide array of object properties to support diverse behavioral goals. In the current study, we used a large-scale, comprehensively sampled stimulus set and developed an analysis approach to determine whether we could capture how rich, multidimensional object representations unfold over time in the human brain. We modelled time-resolved MEG signals evoked by viewing single presentations of tens of thousands of object images based on millions of behavioral judgments. Extracting behavior-derived dimensions from similarity judgments, we used this data-driven approach to guide our understanding of the neural representation of object space and found that every dimension is reflected in the MEG signal. Studying the temporal profiles for the different dimensions, we found that their time courses fell into two broad types, with either a distinct early peak (∼125 ms) or a slow rise to a late peak (∼300 ms). Further, early effects were stable across participants, in contrast to later effects, which showed more variability, suggesting that early peaks may carry stimulus-specific information and later peaks more participant-specific information. Dimensions with early peaks appeared to be primarily visual, while those with late peaks were more conceptual, suggesting that conceptual representations are more variable across people. Together, these data provide a comprehensive account of how behaviorally relevant object dimensions unfold in the human brain and form the basis of the rich nature of object vision.
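The modelling approach described here amounts to relating sensor patterns at each timepoint to behavior-derived dimension values. Below is a minimal sketch of that idea, not the authors' actual pipeline: it assumes MEG epochs of shape (trials, sensors, timepoints) and a trials × dimensions matrix of behavior-derived dimension weights; the function name and cross-validated ridge setup are illustrative assumptions.

```python
# Minimal sketch: cross-validated, time-resolved linear modelling of MEG
# signals from behavior-derived object dimensions. Shapes and names are
# illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def timecourse_per_dimension(meg, dims, n_splits=5):
    """meg: (n_trials, n_sensors, n_times); dims: (n_trials, n_dims).
    Returns (n_dims, n_times) correlations between predicted and observed
    dimension values, i.e. one decoding time course per dimension."""
    n_trials, n_sensors, n_times = meg.shape
    n_dims = dims.shape[1]
    scores = np.zeros((n_dims, n_times))
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for t in range(n_times):
        X = meg[:, :, t]  # sensor pattern at one timepoint
        for d in range(n_dims):
            y = dims[:, d]
            preds = np.zeros(n_trials)
            for train, test in cv.split(X):
                model = Ridge(alpha=1.0).fit(X[train], y[train])
                preds[test] = model.predict(X[test])
            scores[d, t] = np.corrcoef(preds, y)[0, 1]
    return scores
```

In a setup like this, each dimension's time course could then be inspected for a distinct early peak (∼125 ms) versus a slow rise to a late peak (∼300 ms), as the abstract describes.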
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing the testing of countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the insights promised by each individual dataset, its multimodality allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets, and we provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org), bridging the gap between disciplines for the advancement of cognitive neuroscience.
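Similarity judgments at this scale are typically collected in a triplet odd-one-out format; assuming that format here (the exact organization of the 4.70 million judgments is not specified in this summary), a simple counting estimator can turn triplets into pairwise similarities. The sketch below is illustrative, not THINGS tooling.

```python
# Minimal sketch: turning triplet odd-one-out judgments into pairwise
# similarity estimates. The triplet format and field layout are
# assumptions for illustration.
from collections import defaultdict
from itertools import combinations

def pairwise_similarity(triplets):
    """triplets: iterable of (a, b, odd) concept-ID tuples, where `odd`
    was picked as the odd one out of {a, b, odd}. Returns a dict mapping
    each unordered pair to P(chosen as most similar | pair appeared)."""
    chosen = defaultdict(int)    # times a pair survived as "most similar"
    appeared = defaultdict(int)  # times a pair occurred in any triplet
    for a, b, odd in triplets:
        for pair in combinations(sorted((a, b, odd)), 2):
            appeared[pair] += 1
        chosen[tuple(sorted((a, b)))] += 1
    return {p: chosen[p] / appeared[p] for p in appeared}
```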
IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal Year: 2023, Volume and Issue: 45(9), P. 10760 - 10777. Published: March 30, 2023
Decoding human visual neural representations is a challenging task with great scientific significance in revealing vision-processing mechanisms and developing brain-like intelligent machines. Most existing methods are difficult to generalize to novel categories that have no corresponding neural data for training. The two main reasons are 1) the under-exploitation of the multimodal semantic knowledge underlying the neural data and 2) the small number of paired (stimuli-responses) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual, and linguistic features via multimodal deep generative models. Specifically, we leverage the mixture-of-products-of-experts formulation to infer a latent code that enables a coherent joint generation of all three modalities. To learn a more consistent joint representation and improve data efficiency in the case of limited brain activity data, we exploit both intra- and inter-modality mutual-information-maximization regularization terms. In particular, our model can be trained under various semi-supervised scenarios to incorporate visual and textual features obtained from extra categories. Finally, we construct trimodal matching datasets, and the extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from brain activity is practically possible with good accuracy; 2) decoding models using a combination of visual and linguistic features perform much better than those using either of them alone; 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli.
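The mixture-of-products-of-experts formulation builds on a product-of-experts step: each modality's encoder outputs a Gaussian posterior over the latent code, and available modalities are fused by precision-weighted combination. The sketch below shows only that fusion step under standard Gaussian assumptions; it is not BraVL's code, and the encoder outputs it consumes are assumed.

```python
# Minimal sketch of the product-of-experts fusion that a
# mixture-of-products-of-experts (MoPoE) model builds on: combining
# Gaussian posteriors from brain, visual, and linguistic encoders into
# one joint Gaussian. Torch-based; encoder outputs are assumptions.
import torch

def product_of_experts(mus, logvars):
    """mus, logvars: lists of (batch, latent_dim) tensors, one per
    available modality. A standard-normal prior expert is prepended.
    The product of Gaussians is Gaussian with precision-weighted mean."""
    mus = [torch.zeros_like(mus[0])] + list(mus)          # prior N(0, I)
    logvars = [torch.zeros_like(logvars[0])] + list(logvars)
    precisions = [torch.exp(-lv) for lv in logvars]       # 1 / sigma^2
    total_precision = torch.stack(precisions).sum(0)
    joint_var = 1.0 / total_precision
    joint_mu = joint_var * torch.stack(
        [p * m for p, m in zip(precisions, mus)]).sum(0)
    return joint_mu, torch.log(joint_var)
```

A MoPoE posterior then mixes such products over subsets of modalities, which is what allows training and inference to proceed when one modality, such as brain data for novel categories, is missing.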
Annual Review of Vision Science, Journal Year: 2023, Volume and Issue: 9(1), P. 313 - 335. Published: March 8, 2023
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and recent findings that suggest these representations are at once robust to perturbations, yet sensitive to different mental states. Beyond the physical world, decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, to reveal how they change across development and aging, and to uncover their presentation in various mental disorders.
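The core decoding recipe the review refers to is straightforward: train a classifier on patterns of brain activity and test its generalization with cross-validation. A minimal sketch follows, with data shapes and the linear-SVM choice as illustrative assumptions rather than any specific study's pipeline.

```python
# Minimal sketch of the basic neural decoding recipe: a cross-validated
# linear classifier on activity patterns. Shapes are assumptions.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def decode(patterns, labels, n_splits=5):
    """patterns: (n_trials, n_features) activity patterns (voxels or
    sensors); labels: (n_trials,) stimulus conditions. Returns mean
    held-out accuracy; above-chance accuracy implies the patterns carry
    information about the labels."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    return cross_val_score(clf, patterns, labels, cv=n_splits).mean()
```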
Scientific Data, Journal Year: 2024, Volume and Issue: 11(1). Published: May 29, 2024
Abstract
An Electroencephalography (EEG) dataset utilizing rich text stimuli can advance the understanding of how the brain encodes semantic information and contribute to semantic decoding in brain-computer interfaces (BCIs). Addressing the scarcity of EEG datasets featuring Chinese linguistic stimuli, we present the ChineseEEG dataset, a high-density EEG dataset complemented by simultaneous eye-tracking recordings. The dataset was compiled while 10 participants silently read approximately 13 hours of text from two well-known Chinese novels. It provides long-duration EEG recordings, along with pre-processed sensor-level data and semantic embeddings of the reading materials extracted with a pre-trained natural language processing (NLP) model. As a pilot dataset, it and the results derived from it can significantly support research across neuroscience, NLP, and linguistics. It establishes a benchmark for semantic decoding, aids the development of BCIs, and facilitates the exploration of the alignment between large language models and human cognitive processes. It can also aid research into the brain's processing mechanisms within the context of natural language.
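For readers unfamiliar with the embedding step, the sketch below shows one common way to derive sentence-level semantic embeddings from reading materials with a pre-trained NLP model. The specific checkpoint and mean-pooling choice are illustrative assumptions; the dataset's actual model and pooling are not specified in this summary.

```python
# Minimal sketch: sentence-level semantic embeddings for reading
# materials via a pre-trained NLP model. The Chinese BERT checkpoint
# and mean pooling are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")

def embed(sentences):
    """Return one mean-pooled hidden-state vector per sentence, suitable
    for aligning with EEG epochs that span the same text."""
    inputs = tokenizer(sentences, padding=True, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (batch, tokens, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)
```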