American Journal of Speech-Language Pathology,
Journal Year: 2024, Volume and Issue: unknown, P. 1 - 11
Published: July 11, 2024
Purpose: The Western Aphasia Battery is widely used to assess people with aphasia (PWA), and Sequential Commands (SC) is one of the most challenging subtests for PWA. However, its test items confound the linguistic factors that make sentences difficult. The current study systematically manipulated semantic plausibility and word order in sentences like those in SC to examine how these factors affect comprehension deficits in aphasia.
Method: Fifty Korean speakers (25 PWA, 25 controls) completed a sentence–picture matching task that crossed word order (canonical vs. noncanonical) with semantic plausibility (plausible vs. less plausible). Analyses focused on accuracy and aimed to identify the sentence types that best discriminate the two groups. Additionally, we explored which sentence type serves as a predictor of aphasia severity.
Results: PWA demonstrated greater difficulties processing less plausible sentences than plausible ones compared to controls. Across groups, noncanonical sentences elicited lower accuracy than canonical sentences. Notably, the PWA and control groups differed on these sentence types, and aphasia severity correlated significantly with comprehension accuracy.
Conclusion: Even in languages with flexible word order, PWA find it difficult to process noncanonical syntactic structures and to assign thematic roles.
Method:
Fifty
Korean
speakers
(25
PWA
25
controls)
completed
a
sentence–picture
matching
task
(canonical
vs.
noncanonical)
(plausible
less
plausible).
Analyses
focused
on
accuracy
aimed
identify
sentence
types
best
discriminate
groups.
Additionally,
we
explored
which
type
serves
as
predictor
severity.
Results:
demonstrated
greater
difficulties
processing
plausible
than
ones
compared
controls.
Across
groups,
noncanonical
elicited
lower
canonical
sentences.
Notably,
control
groups
differed
severity
significantly
correlated
Conclusion:
Even
languages
flexible
order,
find
it
process
syntactic
structures
roles.
Transactions of the Association for Computational Linguistics,
Journal Year: 2023, Volume and Issue: 11, P. 336 - 350
Published: Jan. 1, 2023
Abstract
This work presents a linguistic analysis into why larger Transformer-based pre-trained language models with more parameters and lower perplexity nonetheless yield surprisal estimates that are less predictive of human reading times. First, regression analyses show a strictly monotonic, positive log-linear relationship between perplexity and fit to reading times for the recently released five GPT-Neo variants and eight OPT variants on two separate datasets, replicating earlier results limited to just GPT-2 (Oh et al., 2022). Subsequently, analysis of residual errors reveals a systematic deviation of the larger variants, such as underpredicting reading times of named entities and making compensatory overpredictions for reading times of function words such as modals and conjunctions. These results suggest that the propensity of larger Transformer-based models to 'memorize' sequences during training makes their surprisal estimates diverge from humanlike expectations, which warrants caution in using them to study human language processing.
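
The surprisal estimates at issue are each word's negative log probability given its preceding context. A minimal sketch of such a computation with Hugging Face transformers follows; the GPT-2 checkpoint and the example sentence are illustrative choices, not the authors' exact pipeline.

```python
# Per-token surprisal (-log2 p of each token given its prefix) from an
# autoregressive LM; illustrative of the estimates discussed above.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits          # (1, seq_len, vocab)
    # Log-probability assigned to each actual token given its prefix.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    nll = -logprobs[torch.arange(next_ids.size(0)), next_ids]
    bits = nll / torch.log(torch.tensor(2.0))  # nats -> bits
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, bits.tolist()))

for tok, s in token_surprisals("The children went outside to play."):
    print(f"{tok!r}\t{s:.2f} bits")
```

In the regression analyses described above, per-word surprisals of this kind are the predictors whose fit to measured reading times is evaluated.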
Proceedings of the National Academy of Sciences,
Journal Year: 2024, Volume and Issue: 121(10)
Published: Feb. 29, 2024
During real-time language comprehension, our minds rapidly decode complex meanings from sequences of words. The difficulty of doing so is known to be related to words' contextual predictability, but what cognitive processes do these predictability effects reflect? In one view, predictability effects reflect facilitation due to anticipatory processing of words that are predictable from context. This view predicts a linear effect of predictability on processing demand. In another view, predictability effects reflect the costs of probabilistic inference over sentence interpretations. This view predicts either a logarithmic or a superlogarithmic effect of predictability on processing demand, depending on whether it assumes pressures toward a uniform distribution of information over time. The empirical record is currently mixed. Here, we revisit this question at scale: We analyze six reading datasets, estimate next-word probabilities with diverse statistical language models, and model reading times using recent advances in nonlinear regression. Results support a logarithmic effect of word predictability on processing difficulty, which favors probabilistic inference as a key component of human language processing.
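
Written out, the accounts differ in the functional form linking predictability to processing time. The power-law rendering of the superlogarithmic account below is one common formalization, assumed here for illustration rather than taken from the paper:

```latex
\begin{aligned}
\text{linear (anticipation):}\quad   & RT(w) = \alpha + \beta \, p(w \mid \text{context}) \\
\text{logarithmic (surprisal):}\quad & RT(w) = \alpha + \beta \, \bigl[-\log p(w \mid \text{context})\bigr] \\
\text{superlogarithmic:}\quad        & RT(w) = \alpha + \beta \, \bigl[-\log p(w \mid \text{context})\bigr]^{k},\ k > 1
\end{aligned}
```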
Trends in Cognitive Sciences,
Journal Year: 2023, Volume and Issue: 27(11), P. 1032 - 1052
Published: Sept. 11, 2023
Prediction is often regarded as an integral aspect of incremental language comprehension, but little is known about the cognitive architectures and mechanisms that support it. We review studies showing that listeners and readers use all manner of contextual information to generate multifaceted predictions about upcoming input. The nature of these predictions may vary between individuals owing to differences in experience, among other factors. We then turn to unresolved questions which can guide the search for the underlying mechanisms. (i) Is prediction essential to language processing or an optional strategy? (ii) Are predictions generated from within the language system or by domain-general processes? (iii) What is the relationship between prediction and memory? (iv) Does prediction in comprehension require simulation via the production system? We discuss promising directions for making progress in answering these questions and for developing a mechanistic understanding of prediction in language.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing,
Journal Year: 2023, Volume and Issue: unknown
Published: Jan. 1, 2023
Prompting is now a dominant method for evaluating the linguistic knowledge of large language models (LLMs). While other methods directly read out models' probability distributions over strings, prompting requires models to access this internal information by processing linguistic input, thereby implicitly testing a new type of emergent ability: metalinguistic judgment. In this study, we compare metalinguistic prompting and direct probability measurements as ways of measuring models' linguistic knowledge. Broadly, we find that LLMs' metalinguistic judgments are inferior to quantities directly derived from their representations. Furthermore, consistency gets worse as the prompt query diverges from direct measurements of next-word probabilities. Our findings suggest that negative results relying on metalinguistic prompts cannot be taken as conclusive evidence that an LLM lacks a particular linguistic generalization. They also highlight the value lost with the move to closed APIs where access to models' probability distributions is limited.
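
The direct measurements favored here amount to comparing string probabilities rather than parsing a generated answer. A minimal sketch, with an assumed model and minimal pair (not the paper's stimuli):

```python
# Direct probability readout: compare summed log probabilities of a
# grammatical/ungrammatical minimal pair, with no prompting involved.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    return logprobs[torch.arange(next_ids.size(0)), next_ids].sum().item()

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
print(sentence_logprob(good) > sentence_logprob(bad))  # read out directly
```

A metalinguistic prompt would instead ask the model something like "Which of these two sentences is more acceptable?" and parse the generated reply; that indirection is precisely what the study finds less reliable.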
Neurobiology of Language,
Journal Year: 2023, Volume and Issue: 5(1), P. 7 - 42
Published: July 18, 2023
Representations from artificial neural network (ANN) language models have been shown to predict human brain activity in the language network. To understand what aspects of linguistic stimuli contribute to ANN-to-brain similarity, we used an fMRI data set of responses
Computational Linguistics,
Journal Year: 2024, Volume and Issue: unknown, P. 1 - 36
Published: July 30, 2024
Abstract
How should we compare the capabilities of language models (LMs) and humans? In this article, I draw inspiration from comparative psychology to highlight challenges in these comparisons. I focus on a case study: processing of recursively nested grammatical structures. Prior work suggests that LMs cannot process these structures as reliably as humans can. However, the humans were provided with instructions and substantial training, while the LMs were evaluated zero-shot. I therefore match the evaluation conditions more closely. Providing large LMs with a simple prompt (with substantially less content than the human training) allows the LMs to consistently outperform the human results, even in more deeply nested conditions than were tested with humans. Furthermore, the effects of the prompt are robust to the particular vocabulary used in the prompt. Finally, reanalyzing the existing human data suggests that the humans may not perform above chance at the difficult structures initially. Thus, large LMs may indeed process recursively nested grammatical structures as reliably as humans, when evaluated comparably. This case study highlights how discrepancies in evaluation methods can confound such comparisons. I conclude by reflecting on the broader challenge of comparing human and model capabilities, and on an important difference between evaluating cognitive models and foundation models.
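
To make the zero-shot versus prompted contrast concrete, here is a minimal sketch that scores the same nested-structure completion with and without a short instructive prefix. The model, the center-embedded item, and the prompt wording are all illustrative assumptions, not the article's materials.

```python
# Score a doubly center-embedded item zero-shot and with a short prompt;
# a grammatical completion supplies the final verb, the bad one omits it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prefix: str, continuation: str) -> float:
    """Summed log probability of `continuation` given `prefix`."""
    full = tokenizer(prefix + continuation, return_tensors="pt").input_ids
    n_prefix = tokenizer(prefix, return_tensors="pt").input_ids.size(1)
    with torch.no_grad():
        logits = model(full).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = full[0, 1:]
    scores = logprobs[torch.arange(next_ids.size(0)), next_ids]
    return scores[n_prefix - 1:].sum().item()  # continuation tokens only

item = "The rat the cat the dog chased bit"
good, bad = " ran.", "."
prompt = ("Each noun in these sentences needs its own verb. "
          "Example: The boy the girls like runs.\n")

for prefix in (item, prompt + item):  # zero-shot, then prompted
    print(continuation_logprob(prefix, good) - continuation_logprob(prefix, bad))
```

A positive score difference means the model prefers the completion that supplies the missing verb; comparing the two printed values shows whether the prompt shifts that preference, which is the manipulation at the heart of the article.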