Journal of Neuroscience, Journal Year: 2021, Volume and Issue: 41(50), P. 10316 - 10329, Published: Nov. 3, 2021
When listening to speech, our brain responses time lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response might reflect unaccounted acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to an audiobook in 29 participants (22 females). We examined whether these representations contribute unique information over and beyond each other. Indeed, not all representations were significantly tracked after controlling for acoustic properties; phoneme surprisal, cohort entropy, and word frequency were. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are tracked similarly across different stories spoken by different readers. These results suggest that these representations characterize the linguistic content of speech.
SIGNIFICANCE STATEMENT For clinical applications, it would be desirable to develop a marker of speech comprehension derived from neural responses to continuous speech. Such a measure would allow behavior-free evaluation of speech understanding; this would open doors toward better quantification of understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, and toward better targeted interventions and fitting of hearing devices.
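To make the cross-story generalization test concrete, here is a minimal sketch of a time-lagged encoding model, using plain ridge regression as a stand-in for TRF-style analyses, trained on one story and evaluated on another. The arrays (story1_feats, story1_eeg, etc.), sampling rate, and lag window are hypothetical placeholders filled with random data, not the authors' pipeline or stimuli.

```python
# Sketch: time-lagged ridge-regression encoding model with cross-story evaluation.
# All data below are random placeholders; real analyses use aligned stimulus
# features (e.g., envelope, phoneme surprisal, word frequency) and preprocessed EEG.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(features, n_lags):
    """Stack time-lagged copies of a (time x features) matrix into a design matrix."""
    n_times, n_feat = features.shape
    X = np.zeros((n_times, n_feat * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_feat:(lag + 1) * n_feat] = features[:n_times - lag]
    return X

fs = 64                        # assumed EEG sampling rate after downsampling
n_lags = int(0.4 * fs)         # model lags from 0 to 400 ms

rng = np.random.default_rng(0)
story1_feats = rng.standard_normal((5000, 3))   # hypothetical predictors, story 1
story1_eeg = rng.standard_normal((5000, 64))    # hypothetical 64-channel EEG, story 1
story2_feats = rng.standard_normal((5000, 3))   # hypothetical predictors, story 2
story2_eeg = rng.standard_normal((5000, 64))    # hypothetical EEG, story 2

model = Ridge(alpha=1.0).fit(lagged_design(story1_feats, n_lags), story1_eeg)
pred = model.predict(lagged_design(story2_feats, n_lags))

# Cross-story prediction accuracy: per-channel correlation on the held-out story
r = [np.corrcoef(pred[:, ch], story2_eeg[:, ch])[0, 1] for ch in range(64)]
print("mean cross-story prediction accuracy:", np.mean(r))
```

In practice this literature typically uses dedicated (m)TRF or boosting toolboxes with cross-validation; the sketch only illustrates the train-on-one-story, test-on-another logic.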
Proceedings of the National Academy of Sciences, Journal Year: 2022, Volume and Issue: 119(32), Published: Aug. 3, 2022
Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
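As an illustration of how a language model can quantify contextual predictions, the sketch below computes word-by-word surprisal from GPT-2 with the Hugging Face transformers library. The example sentence is arbitrary, and this is not the authors' exact pipeline (which aligned such predictions with brain recordings).

```python
# Sketch: token-by-token surprisal from GPT-2 as a measure of contextual prediction.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "the quick brown fox jumps over the lazy dog"   # arbitrary example text
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                       # (1, n_tokens, vocab)

log_probs = torch.log_softmax(logits, dim=-1)
ids = enc["input_ids"][0]
# Surprisal of token t given the preceding tokens (the first token has no context here)
surprisal = [-log_probs[0, t - 1, ids[t]].item() for t in range(1, len(ids))]
for tok, s in zip(tokenizer.convert_ids_to_tokens(ids.tolist())[1:], surprisal):
    print(f"{tok:>12s}  {s:6.2f} nats")
```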
Humans' engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the incoming music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic
PLoS Biology, Journal Year: 2020, Volume and Issue: 18(10), P. e3000883 - e3000883, Published: Oct. 22, 2020
Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the mixture, allowing the attended speech to be reconstructed via optimally weighted recombinations that discount regions where the sources heavily overlap. Using human magnetoencephalography (MEG) responses to a 2-talker mixture, we show evidence for an alternative possibility, in which early, active segregation occurs even in strongly spectrotemporally overlapping regions. Early (approximately 70-millisecond) responses to nonoverlapping features are seen for both talkers. When the competing talkers' features mask each other, the individual representations persist, but they occur with an approximately 20-millisecond delay. This suggests that auditory cortex recovers masked features, even if they occurred in ignored speech. The existence of such noise-robust representations, present for ignored as well as attended speech, suggests an active stream segregation process and could explain a range of behavioral effects of background speech.
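One simple way to illustrate the latency comparison described above is to estimate a response delay by cross-correlating a stimulus feature with a neural signal. The sketch below uses purely synthetic signals and a hypothetical sampling rate; it shows only the latency-estimation idea, not the MEG source-level analysis used in the study.

```python
# Sketch: estimating the response latency to a stimulus feature by cross-correlation.
# Comparing latencies for masked vs. non-masked feature onsets would expose a delay
# like the ~20 ms reported. All signals here are synthetic placeholders.
import numpy as np

fs = 1000                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
feature = rng.standard_normal(10_000)        # e.g., onset-feature time series of one talker
true_delay = int(0.07 * fs)                  # simulate a ~70 ms response latency
response = np.roll(feature, true_delay) + 0.5 * rng.standard_normal(10_000)

lags = np.arange(0, int(0.3 * fs))           # test lags from 0 to 300 ms
xcorr = np.array([np.corrcoef(feature[:-lag or None], response[lag:])[0, 1]
                  for lag in lags])
print("estimated latency: %.0f ms" % (1000 * lags[np.argmax(xcorr)] / fs))
```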
NeuroImage, Journal Year: 2021, Volume and Issue: 247, P. 118698 - 118698, Published: Nov. 16, 2021
The amplitude envelope of speech carries crucial low-frequency acoustic information that assists linguistic decoding at multiple time scales. Neurophysiological signals are known to track the amplitude envelope of adult-directed speech (ADS), particularly in the theta band. Acoustic analysis of infant-directed speech (IDS) has revealed significantly greater modulation energy than ADS in an amplitude-modulation (AM) band centred on ∼2 Hz. Accordingly, cortical tracking of IDS by delta-band neural signals may be key to language acquisition. Speech also contains information within its higher-frequency bands (beta, gamma). Adult EEG and MEG studies reveal an oscillatory hierarchy, whereby low-frequency (delta, theta) phase dynamics temporally organize high-frequency amplitudes (phase-amplitude coupling, PAC). Whilst consensus is growing around the role of PAC in the matured adult brain, its development in speech processing is unexplored. Here, we examined the presence and maturation of low-frequency (<12 Hz) cortical tracking in infants by recording EEG longitudinally from 60 participants when aged 4, 7 and 11 months as they listened to nursery rhymes. After establishing stimulus-related responses in delta and theta, cortical tracking at each age was assessed in the delta, theta and alpha [control] bands using a multivariate temporal response function (mTRF) method. Delta-beta, delta-gamma, theta-beta and theta-gamma phase-amplitude coupling (PAC) was also assessed. Significant delta and theta tracking, but not alpha tracking, was found. PAC was present at all ages, with both delta- and theta-driven coupling observed.
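For readers unfamiliar with PAC, here is a minimal sketch that computes a delta-phase/gamma-amplitude modulation index (mean vector length) on a synthetic signal using the Hilbert transform. The band edges, sampling rate, and coupling metric are illustrative assumptions; the study's actual PAC pipeline may differ.

```python
# Sketch: delta-gamma phase-amplitude coupling via Hilbert transform and
# a mean-vector-length modulation index. The signal is synthetic: gamma bursts
# riding on the peaks of a 2 Hz (delta) rhythm, plus noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500                                          # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
t = np.arange(0, 60, 1 / fs)
delta = np.sin(2 * np.pi * 2 * t)
gamma = (1 + delta) * np.sin(2 * np.pi * 40 * t)  # gamma amplitude follows delta phase
eeg = delta + 0.3 * gamma + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(eeg, 1, 3, fs)))    # delta phase
amp = np.abs(hilbert(bandpass(eeg, 30, 50, fs)))      # gamma amplitude envelope
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))       # mean vector length (PAC strength)
print("delta-gamma modulation index:", round(mvl, 4))
```

In real data the index is typically compared against surrogate distributions (e.g., phase-shuffled signals) to assess significance.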
Sensors, Journal Year: 2021, Volume and Issue: 21(5), P. 1867 - 1867, Published: March 7, 2021
Blood pressure (BP) monitoring has significant importance in the treatment of hypertension and different cardiovascular health diseases. As photoplethysmogram (PPG) signals can be recorded non-invasively, extensive research has recently been conducted to measure BP using PPG. In this paper, we propose a U-net deep learning architecture that uses the fingertip PPG signal as input to estimate the arterial blood pressure (ABP) waveform non-invasively. From this waveform, we have also measured the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). The proposed method was evaluated on a subset of 100 subjects from two publicly available databases: MIMIC and MIMIC-III. The predicted ABP waveforms correlated with the reference waveforms, with an average Pearson's correlation coefficient of 0.993. The mean absolute error is 3.68 ± 4.42 mmHg for SBP, 1.97 ± 2.92 mmHg for DBP, and 2.17 ± 3.06 mmHg for MAP, which satisfy the requirements of the Association for the Advancement of Medical Instrumentation (AAMI) standard and obtain grade A according to the British Hypertension Society (BHS) standard. The results show an efficient process for estimating the ABP waveform directly from PPG.
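To illustrate how SBP, DBP, and MAP can be read off a predicted ABP waveform and scored against a reference, here is a minimal sketch with synthetic waveforms standing in for MIMIC data; the peak-detection settings and the MAP approximation (DBP plus one third of the pulse pressure) are assumptions, not the paper's exact procedure.

```python
# Sketch: deriving SBP/DBP/MAP from an ABP waveform and scoring a prediction
# against a reference with Pearson's r and absolute error. Waveforms are synthetic.
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import pearsonr

fs = 125                                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ref_abp = 90 + 30 * (0.5 + 0.5 * np.sin(2 * np.pi * 1.2 * t))          # mock reference
pred_abp = ref_abp + np.random.default_rng(3).normal(0, 1.5, t.size)   # mock prediction

def bp_values(abp, fs):
    peaks, _ = find_peaks(abp, distance=int(0.4 * fs))     # systolic peaks
    troughs, _ = find_peaks(-abp, distance=int(0.4 * fs))  # diastolic troughs
    sbp, dbp = abp[peaks].mean(), abp[troughs].mean()
    return sbp, dbp, dbp + (sbp - dbp) / 3                 # MAP approximation

r, _ = pearsonr(pred_abp, ref_abp)
for name, p, q in zip(("SBP", "DBP", "MAP"), bp_values(pred_abp, fs), bp_values(ref_abp, fs)):
    print(f"{name} absolute error: {abs(p - q):.2f} mmHg")
print(f"waveform Pearson r: {r:.3f}")
```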