Remarkably, human brains have the ability to accurately perceive and process the real-world size of objects, despite vast differences in distance and perspective. While previous studies have delved into this phenomenon, distinguishing real-world size from other visual perceptions, like depth, has been challenging. Using the THINGS EEG2 dataset, with its high time-resolution human brain recordings and more ecologically valid naturalistic stimuli, our study applies an innovative approach to disentangle neural representations of object real-world size from retinal size and perceived depth in a way that was not previously possible. Leveraging this state-of-the-art dataset, our EEG representational similarity results reveal a pure representation of object real-world size in human brains. We report a representational timeline of visual object processing: perceived depth appeared first, then retinal size, and finally, real-world size. Additionally, we input both these naturalistic images and object-only images without the natural background into artificial neural networks. Consistent with the human EEG findings, we also successfully disentangled representations of object real-world size in all three types of artificial neural networks (visual-only ResNet, visual-language CLIP, and language-only Word2Vec). Moreover, our multi-modal comparison framework across human brains and artificial neural networks reveals real-world size as a stable and higher-level dimension in object space, incorporating semantic information. Our research provides a detailed and clear characterization of the object processing process, which offers further advances and insights into our understanding of visual object representations and the construction of brain-like models.
bioRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Issue: unknown. Published: Aug. 18, 2023
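The representational similarity analysis behind the "pure representation" claim can be made concrete with a partial-correlation comparison of representational dissimilarity matrices (RDMs). The sketch below is a minimal, hypothetical illustration, not the authors' actual pipeline: the RDMs are random toy matrices, and the partial Spearman correlation is approximated by residualizing on the covariate RDMs before correlating.

```python
# Minimal sketch: partial Spearman correlation between an EEG RDM and a
# real-world-size RDM, controlling for retinal-size and depth RDMs.
# All RDMs here are random toy matrices; replace them with real ones.
import numpy as np
from scipy.stats import spearmanr


def upper_tri(rdm):
    """Vectorize the upper triangle (excluding the diagonal) of a square RDM."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]


def partial_spearman(x, y, covariates):
    """Simplified partial correlation: residualize x and y on the covariates
    with ordinary least squares, then take the Spearman correlation."""
    def residualize(v):
        design = np.column_stack([np.ones(len(v))] + covariates)
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return spearmanr(residualize(x), residualize(y)).correlation


rng = np.random.default_rng(0)
n_conditions = 20
eeg_rdm, size_rdm, retinal_rdm, depth_rdm = (
    rng.random((n_conditions, n_conditions)) for _ in range(4)
)

rho = partial_spearman(
    upper_tri(eeg_rdm),
    upper_tri(size_rdm),
    covariates=[upper_tri(retinal_rdm), upper_tri(depth_rdm)],
)
print(f"partial rho (real-world size | retinal size, depth): {rho:.3f}")
```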
1.
Abstract
Scene recognition is a core sensory capacity that enables humans to adaptively interact with their environment. Despite substantial progress in the understanding of the neural representations underlying scene recognition, the relevance of these representations for behavior under varying task demands remains unknown. To address this, we aimed to identify behaviorally relevant scene representations, to characterize them in terms of their underlying visual features, and to reveal how they vary across different tasks. We recorded fMRI data while human participants viewed scenes and linked brain responses to behavior in three tasks acquired in separate sessions: manmade/natural categorization, basic-level categorization, and fixation color discrimination. We found correlations between categorization response times and scene-specific brain responses, quantified as the distance to a hyperplane derived from a multivariate classifier. Across tasks, these effects were found in largely distinct parts of the ventral visual stream. This suggests that different representations are behaviorally relevant depending on the task. Next, using deep neural networks as a proxy for visual feature representations, we found that features in early/intermediate layers mediated the relationship between brain responses and behavior for both categorization tasks, indicating a contribution of low-/mid-level visual features to these representations. Finally, we observed opposite patterns of brain-behavior correlations in the fixation color discrimination task, suggesting interference from scene representations that do not align with the content of the task. Together, our results reveal the spatial extent, content, and task-dependence of the representations that mediate behavior in complex scenes.
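The distance-to-hyperplane measure mentioned above can be sketched in a few lines. The code below is a toy illustration with simulated data and placeholder names (X, y, rts); it assumes a linear SVM and out-of-fold decision values, and is not the authors' exact analysis.

```python
# Simulated illustration of the distance-to-hyperplane analysis: score each
# scene by its signed distance from a linear classifier's decision boundary
# (estimated out-of-fold) and correlate |distance| with response times.
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_scenes, n_voxels = 120, 300
X = rng.standard_normal((n_scenes, n_voxels))    # voxel pattern per scene (toy data)
y = rng.integers(0, 2, n_scenes)                 # manmade (0) vs. natural (1)
rts = rng.uniform(0.4, 1.2, n_scenes)            # categorization response times (s)

clf = LinearSVC(max_iter=10000)
distances = cross_val_predict(clf, X, y, cv=5, method="decision_function")

# Prediction from the abstract's logic: scenes far from the hyperplane carry
# clearer category evidence and should be categorized faster (negative rho).
rho, p = spearmanr(np.abs(distances), rts)
print(f"|distance to hyperplane| vs. RT: rho = {rho:.3f}, p = {p:.3f}")
```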
Electrical stimulation of the visual nervous system could improve the quality of life of patients affected by acquired blindness by restoring some visual sensations, but it requires careful optimization of stimulation parameters to produce useful perceptions. Neural correlates of the elicited perceptions could be used for fast automatic optimization, with electroencephalography being a natural choice because it can be acquired non-invasively. Nonetheless, its low signal-to-noise ratio may hinder the discrimination of similar patterns, preventing its use in the optimization of electrical stimulation. Our work investigates for the first time the discriminability of electroencephalographic responses to visual stimuli compatible with electrical stimulation, employing a newly acquired dataset whose stimuli encompass the concurrent variation of several features, whereas neuroscience research tends to study neural responses to single features. We performed above-chance single-trial decoding of multiple features of our specially crafted stimuli using relatively simple machine learning algorithms. A scheme combining information from repeated stimulus presentations was also implemented, substantially improving decoding performance, suggesting that such methods should be used systematically in future applications. The significance of the present work relies on the determination of which features of stimulation-compatible stimuli can be decoded and at which granularity they can be discriminated. Our results pave the way to the use of EEG to optimize stimulation parameters, thus increasing the effectiveness of current visual neuroprostheses.
NeuroImage, Journal year: 2024, Issue: 293, pp. 120626 - 120626. Published: April 25, 2024
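One plausible reading of the multi-presentation combination scheme is to average the classifier's predicted probabilities over repeated presentations of the same stimulus before deciding. The abstract does not specify the actual scheme, so everything in the sketch below (data, feature, combination rule) is an assumption intended only to illustrate the idea.

```python
# Toy illustration: single-trial decoding of one binary stimulus feature,
# then combining evidence across repeated presentations of each test stimulus
# by averaging predicted probabilities. All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_stimuli, n_repeats, n_features = 40, 10, 64
stim_labels = rng.integers(0, 2, n_stimuli)            # one binary stimulus feature
stim_ids = np.repeat(np.arange(n_stimuli), n_repeats)  # which stimulus each trial shows
y = stim_labels[stim_ids]

# Simulated single-trial EEG feature vectors: weak signal plus noise.
X = rng.standard_normal((n_stimuli * n_repeats, n_features))
X[:, 0] += 0.5 * y

# Hold out whole stimuli so no presentation of a test stimulus is seen in training.
test_stims = rng.choice(n_stimuli, size=10, replace=False)
test_mask = np.isin(stim_ids, test_stims)
clf = LogisticRegression(max_iter=1000).fit(X[~test_mask], y[~test_mask])

single_trial_acc = clf.score(X[test_mask], y[test_mask])

# Combination scheme: average the class probability over all repeats of a
# stimulus, then make one decision per stimulus.
proba = clf.predict_proba(X[test_mask])[:, 1]
test_stim_ids = stim_ids[test_mask]
combined_hits = [
    int(proba[test_stim_ids == s].mean() > 0.5) == stim_labels[s]
    for s in test_stims
]
print(f"single-trial accuracy: {single_trial_acc:.2f}, "
      f"combined over {n_repeats} presentations: {np.mean(combined_hits):.2f}")
```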
Spatio-temporal patterns of evoked brain activity contain information that can be used to decode and categorize the semantic content of visual stimuli. However, this procedure can be biased by low-level image features independently present in the stimuli, prompting the need to understand the robustness of different decoding models with regard to these confounding factors. In this study, we trained machine learning models to distinguish between concepts included in the publicly available THINGS-EEG dataset, using electroencephalography (EEG) data acquired during a rapid serial visual presentation paradigm. We investigated the contribution to decoding accuracy of a multivariate model utilizing broadband data from all EEG channels. Additionally, we explored a univariate model obtained through data-driven feature selection applied to the spatial and frequency domains. While multivariate models exhibited better decoding accuracy, their predictions were less robust to the confounding effect of low-level image statistics. Notably, some models maintained their accuracy even after random replacement of the training data with semantically unrelated samples that presented similar low-level content. In conclusion, our findings suggest that model optimization impacts sensitivity to confounding factors, regardless of the resulting classification performance. Therefore, the choice of a model for decoding should ideally be informed by criteria beyond classifier performance, such as the neurobiological mechanisms under study.
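The multivariate-versus-univariate contrast can be sketched as follows. This is a simplified, hypothetical setup (simulated EEG, logistic-regression classifiers, a naive feature-selection rule) meant only to make the comparison concrete, not to reproduce the study's models.

```python
# Simplified contrast between a multivariate decoder (all channels and time
# points) and a univariate decoder built on one data-driven feature.
# Simulated data; the univariate selection is done globally here for brevity,
# whereas in a real analysis it must happen inside the training folds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 63, 50
y = rng.integers(0, 2, n_trials)                 # two object concepts
eeg = rng.standard_normal((n_trials, n_channels, n_times))
eeg[:, 10, 20:30] += 0.4 * y[:, None]            # inject signal into one channel/window

# Multivariate model: broadband data from all channels and time points.
X_multi = eeg.reshape(n_trials, -1)
acc_multi = cross_val_score(LogisticRegression(max_iter=2000), X_multi, y, cv=5).mean()

# Univariate model: the single channel-by-time feature with the largest
# class difference.
diff = np.abs(eeg[y == 1].mean(axis=0) - eeg[y == 0].mean(axis=0))
ch, t = np.unravel_index(diff.argmax(), diff.shape)
X_uni = eeg[:, ch, t].reshape(-1, 1)
acc_uni = cross_val_score(LogisticRegression(), X_uni, y, cv=5).mean()

print(f"multivariate accuracy: {acc_multi:.2f}, univariate accuracy: {acc_uni:.2f}")
```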
Abstract
This easy‐to‐follow handbook offers a straightforward guide to electroencephalogram (EEG) analysis using Python, aimed at all EEG researchers in cognitive neuroscience and related fields. It spans from single‐subject data preprocessing to advanced multisubject analyses. It contains four chapters: Preprocessing of Single‐Subject Data, Basic Python Data Operations, Multiple‐Subject Analysis, and Advanced Analysis. The first chapter provides a standardized procedure for single‐subject preprocessing, primarily using the MNE‐Python package. The second chapter introduces essential Python operations for data handling, including reading, storage, and statistical analysis. The third chapter guides readers on performing event‐related potential and time‐frequency analyses and visualizing their outcomes, through examples from a face perception task dataset. The fourth chapter explores three advanced methodologies, classification‐based decoding, representational similarity analysis, and the inverted encoding model, with practical examples from a visual working memory task dataset using NeuroRA and other powerful packages. We designed our handbook for easy comprehension, to be an essential tool for anyone delving into EEG analysis with Python (GitHub website: https://github.com/ZitongLu1996/Python‐EEG‐Handbook ; For the Chinese version: https://github.com/ZitongLu1996/Python‐EEG‐Handbook‐CN ).
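As a taste of the workflow covered in the preprocessing and ERP chapters, a condensed single-subject example with MNE-Python might look like the following; the file name and the "face" event label are placeholders for your own dataset, not files shipped with the handbook.

```python
# Condensed single-subject preprocessing and ERP example with MNE-Python.
# The file name and the "face" event label are placeholders.
import mne

raw = mne.io.read_raw_fif("sub-01_task-faces_eeg.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)          # band-pass filter
raw.set_eeg_reference("average")             # re-reference to the common average

# Build epochs around stimulus onsets and drop high-amplitude artifacts.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0),
                    reject=dict(eeg=150e-6), preload=True)

# Average the "face" epochs into an event-related potential and plot it.
evoked = epochs["face"].average()
evoked.plot()
```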