Dynamic face-related eye movement representations in the human ventral pathway
bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2025, Issue: unknown
Published: Jan. 13, 2025
Abstract
Multiple brain areas along the ventral pathway have been known to represent face images. Here, in a magnetoencephalography (MEG) experiment, we show dynamic representations of face-related eye movements in the absence of image perception. Participants followed a dot presented on a uniform background, the movement of which represented gaze tracks acquired previously during their free-viewing of face and house pictures. We found a dominant role of the ventral stream in representing the face-related gaze tracks, starting from the orbitofrontal cortex (OFC) and anterior temporal lobe (ATL) and extending to medial occipitotemporal cortex. Our findings suggest that the ventral pathway represents the eye movements used to explore faces and that, by top-down prediction of the image category, the OFC and ATL may guide, via these eye movements or directly, face perception.
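A note on method: the abstract does not detail the analysis, but dynamic MEG representations of this kind are commonly quantified with time-resolved decoding. Below is a minimal Python sketch of that general idea on simulated data; the epoch shapes, variable names, and classifier choice are assumptions for illustration, not the authors' pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated stand-ins (hypothetical shapes): epoched MEG data and labels coding
# whether each replayed gaze track was originally recorded on a face or a house.
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 272, 120
meg = rng.standard_normal((n_trials, n_sensors, n_times))
labels = rng.integers(0, 2, n_trials)  # 0 = face-derived track, 1 = house-derived track

# Cross-validated decoding of track category from sensor patterns at each time point.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, meg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max())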
Language: English
Origins of food selectivity in human visual cortex
Trends in Neurosciences,
Journal year: 2025, Issue: unknown
Published: Feb. 1, 2025
Language: English
Dynamic face-related eye movement representations in the human ventral pathway
Published: April 21, 2025
Language: English
Primary manipulation knowledge of objects is associated with the functional coupling of pMTG and aIPS
Neuropsychologia,
Journal year: 2024, Issue: 205, pp. 109034 - 109034
Published: Nov. 12, 2024
Correctly using hand-held tools and manipulable objects typically relies not only on sensory and motor-related processes, but also, centrally, on conceptual knowledge about how objects are used (e.g. grasping the handle of a kitchen knife rather than the blade avoids injury). A wealth of fMRI and connectivity-related evidence demonstrates that contributions from both ventral and dorsal stream areas are important for accurate tool use. Here, we investigate their combined role in representing "primary" manipulation knowledge - that is, knowledge hypothesized to be of central importance for day-to-day object use. We operationalize primary manipulation knowledge by extracting the first dimension of a multi-dimensional scaling solution over a behavioral judgement task in which subjects arranged a set of 80 objects based on their overall similarity. We then relate this representational structure to the time-course of correlations between these areas. Our results show that functional coupling between the posterior middle temporal gyrus (pMTG) and the anterior intraparietal sulcus (aIPS) is uniquely related to primary manipulation knowledge of objects, an effect more pronounced for objects that require precision grasping. A likely reason is that precision grasps require ventral/temporal information relating to shape, material and function to allow correct finger placement and controlled manipulation. These results demonstrate coupling across these streams in the service of grasp-related behavior.
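To make the operationalization concrete, here is a minimal Python sketch of extracting the first dimension of a multi-dimensional scaling (MDS) solution from a pairwise dissimilarity matrix of the kind an arrangement task produces; the random matrix and variable names are placeholders, not the authors' data or code.

import numpy as np
from sklearn.manifold import MDS

# Stand-in for a behavioral dissimilarity matrix over 80 objects
# (symmetric, zero diagonal), as produced by an arrangement task.
n_objects = 80
rng = np.random.default_rng(1)
d = rng.random((n_objects, n_objects))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Two-dimensional MDS on the precomputed dissimilarities; the first embedding
# column serves as the "primary" dimension, one score per object.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissim)
primary_dimension = embedding[:, 0]
print(primary_dimension.shape)  # (80,)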
Language: English
Object Representations Reflect Hierarchical Scene Structure and Depend on High-Level Visual, Semantic, and Action Information
Published: Jan. 1, 2024
Language: English
The organization of high-level visual cortex is aligned with visual rather than abstract linguistic information
bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2024, Issue: unknown
Published: Nov. 12, 2024
Recent studies show that linguistic representations predict the response of high-level visual cortex to images, suggesting an alignment between visual and linguistic information. Here, using iEEG, we tested the hypothesis that such alignment is limited to textual descriptions of the visual content of an image and would not appear for their abstract descriptions. We generated two types of textual descriptions for images of famous people and places: visual-text, describing the image, and abstract-text, based on Wikipedia definitions, and extracted their relational-structure from a large language model. We used these representations, along with the representation of a deep neural network, to predict iEEG responses to the images. Neural relational-structures in high-level visual cortex were similarly predicted by visual-images and visual-text, but not by abstract-text representations. These results demonstrate that visual-language alignment is limited to visually grounded language.
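As an illustration of the relational-structure comparison described above, the following minimal Python sketch correlates the pairwise structure of simulated neural responses with the pairwise structure of two sets of text embeddings; all data, dimensionalities, and names are hypothetical, not the authors' materials.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_images = 50
neural = rng.standard_normal((n_images, 120))         # stand-in: iEEG features per image
visual_text = rng.standard_normal((n_images, 768))    # stand-in: embeddings of visual-text
abstract_text = rng.standard_normal((n_images, 768))  # stand-in: embeddings of abstract-text

def relational_structure(features):
    # Vector of pairwise correlation distances between item representations.
    return pdist(features, metric="correlation")

neural_rdm = relational_structure(neural)
for name, emb in [("visual-text", visual_text), ("abstract-text", abstract_text)]:
    rho, _ = spearmanr(neural_rdm, relational_structure(emb))
    print(name, "Spearman rho with neural relational structure:", round(rho, 3))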
Language: English
Behaviorally-relevant features of observed actions dominate cortical representational geometry in natural vision
bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2024, Issue: unknown
Published: Nov. 26, 2024
Abstract
We effortlessly extract behaviorally relevant information from dynamic visual input in order to understand the actions of others. In the current study, we develop and test a number of models to better characterize the neural representational geometries supporting action understanding. Using fMRI, we measured brain activity as participants viewed a diverse set of 90 different video clips depicting social and nonsocial actions in real-world contexts. We developed five behavioral models using arrangement tasks: two reflecting judgments of the purpose (transitivity) and content (sociality) of the actions depicted in the stimuli, and three reflecting the visual content (people, objects, scene) of still frames of the stimuli. We evaluated how well these models predict the neural representational geometry and tested them against semantic models based on verb and nonverb embeddings as well as gaze and motion energy. Our results revealed that behavioral judgments of action similarity reflect the neural geometry better than the semantic or visual models throughout much of the cortex. The sociality and transitivity models in particular captured a large portion of unique variance in the action observation network, extending into regions not typically associated with action perception, like ventral temporal cortex. Overall, our findings expand the action observation network and indicate that behaviorally relevant features of observed actions are predominant in its cortical representation.
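The "unique variance" claim can be pictured with a simple variance-partitioning scheme: fit a regression predicting the neural representational dissimilarity matrix (RDM) from all model RDMs, refit without the model of interest, and take the drop in R². The Python sketch below uses synthetic RDMs and hypothetical model names; it is not the authors' analysis code.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_pairs = 90 * 89 // 2                 # lower triangle of a 90-video RDM
neural_rdm = rng.random(n_pairs)
model_rdms = {name: rng.random(n_pairs)
              for name in ["sociality", "transitivity", "verb_embedding", "motion_energy"]}

def r_squared(predictors, target):
    # R^2 of an ordinary least-squares fit of the target on the stacked predictors.
    X = np.column_stack(predictors)
    return LinearRegression().fit(X, target).score(X, target)

full = r_squared(list(model_rdms.values()), neural_rdm)
reduced = r_squared([v for k, v in model_rdms.items() if k != "sociality"], neural_rdm)
print("unique variance of sociality model:", full - reduced)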
Language: English
Dynamic representation of multidimensional object properties in the human brain
bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2023, Issue: unknown
Published: Sep. 9, 2023
Abstract
Our visual world consists of an immense number of unique objects and yet, we are easily able to identify, distinguish, interact with, and reason about the things we see within a few hundred milliseconds. This requires that we integrate and focus on a wide array of object properties to support diverse behavioral goals. In the current study, we used a large-scale, comprehensively sampled stimulus set and developed an analysis approach to determine if we could capture how rich, multidimensional object representations unfold over time in the human brain. We modelled time-resolved MEG signals evoked by viewing single presentations of tens of thousands of images based on millions of behavioral judgments. Extracting behavior-derived dimensions from the similarity judgments, we used this data-driven approach to guide our understanding of the neural representation space and found that every dimension is reflected in the MEG signal. Studying the temporal profiles for the different dimensions, we found that their time courses fell into two broad types, with either a distinct early peak (∼125 ms) or a slow rise to a late peak (∼300 ms). Further, early effects were stable across participants, in contrast to later effects, which showed more variability, suggesting that early peaks may carry stimulus-specific and later peaks participant-specific information. Dimensions with early peaks appeared to be primarily visual and those with later peaks conceptual, with conceptual dimensions being more variable across people. Together, these data provide a comprehensive account of how object representations in the brain form the basis of the rich nature of human vision.
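One way to picture the time-resolved analysis is a per-timepoint regression from MEG sensor patterns onto each behavior-derived dimension, followed by peak-latency extraction. The Python sketch below runs on simulated data; the array sizes, time axis, and regression model are assumptions for illustration, not the authors' code.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times, n_dims = 300, 272, 100, 5
meg = rng.standard_normal((n_trials, n_sensors, n_times))  # simulated epoched MEG
dim_values = rng.standard_normal((n_trials, n_dims))        # behavior-derived dimension scores
times_ms = np.linspace(-100, 900, n_times)

# For each dimension: cross-validated R^2 of predicting its score from sensor
# patterns at every time point, then the latency of the best prediction.
peak_latency_ms = {}
for d in range(n_dims):
    score_over_time = np.array([
        cross_val_score(RidgeCV(), meg[:, :, t], dim_values[:, d], cv=5).mean()
        for t in range(n_times)
    ])
    peak_latency_ms[d] = times_ms[np.argmax(score_over_time)]
print(peak_latency_ms)  # e.g. some dimensions peak early (~125 ms), others later (~300 ms)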
Language: English