Negation is a fundamental component of natural language that reverses the semantic meaning of a sentence. It plays an extremely important role across a wide range of applications, yet it is underrepresented in pre-trained language models (LMs), which often results in wrong inferences. In this work, we try to improve the underlying understanding of negation in pre-trained LMs. To augment that understanding, we propose a language model objective with a weighted cross-entropy loss and elastic weight consolidation regularization. We reduce the mean top-1 error rate on the negated LAMA dataset to 1.1% for BERT-base, 0.78% for BERT-large, 3.74% for RoBERTa-base, and 0.01% for RoBERTa-large. Our approach reduces the BERT error rate by a margin of 8% and also outperforms existing negation models. We further provide empirical evidence that negation-augmented models outperform classical models on the original benchmarks as well as on negation benchmarks for natural language inference tasks.
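A minimal PyTorch sketch of the kind of objective this abstract describes: a per-token weighted cross-entropy loss combined with an elastic weight consolidation (EWC) penalty. The weighting scheme, the Fisher estimates, and the `ewc_lambda` value here are illustrative assumptions, not the paper's actual settings.

```python
import torch.nn.functional as F

def ewc_penalty(model, fisher, anchor_params):
    """Quadratic EWC term: sum_i F_i * (theta_i - theta_i*)^2."""
    total = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            total = total + (fisher[name] * (param - anchor_params[name]) ** 2).sum()
    return total

def negation_objective(model, logits, labels, token_weights,
                       fisher, anchor_params, ewc_lambda=0.1):
    # Per-token cross-entropy, re-weighted so that tokens deemed relevant
    # to negation (however the weighting scheme defines them) count more.
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
    )
    weighted_ce = (token_weights.view(-1) * ce).sum() / token_weights.sum()
    # EWC pulls parameters back toward the pre-trained weights, scaled by an
    # estimate of each parameter's Fisher information, so the model can learn
    # negation without forgetting its general knowledge.
    return weighted_ce + (ewc_lambda / 2.0) * ewc_penalty(model, fisher, anchor_params)
```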
As businesses, products, and services spring up around large language models, the trustworthiness of these models hinges on the verifiability of their outputs. However, methods for explaining model outputs largely fall across two distinct fields of study, which both use the term "attribution" to refer to entirely separate techniques: citation generation and training data attribution. In many modern applications, such as legal document generation and medical question answering, both types of attributions are important. In this work, we argue for and present a unified framework of large language model attributions. We show how existing methods for the different kinds of attribution fall under this framework. We also discuss real-world use cases where one or both types of attributions are required. We believe that this work will guide use-case-driven development of systems that leverage both types of attribution, as well as the standardization of their evaluation.
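One way to read the unification is that both attribution families map an input/output pair to ranked evidence, differing only in the corpus the evidence is drawn from (external documents versus the training set). The following sketch is our own illustration of such a shared interface, not the paper's API; all names in it are hypothetical.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Attribution:
    source_id: str   # a citable document, or a training example identifier
    evidence: str    # the attributed text span
    score: float     # strength of support, or of estimated influence

class Attributor(Protocol):
    """Shared interface: map a prompt/output pair to ranked sources."""
    def attribute(self, prompt: str, output: str) -> list[Attribution]: ...

class CitationGenerator:
    """Citation generation: support the output with external documents."""
    def attribute(self, prompt: str, output: str) -> list[Attribution]:
        raise NotImplementedError  # e.g. retrieve, then verify entailment

class TrainingDataAttributor:
    """Training data attribution: trace the output to training examples."""
    def attribute(self, prompt: str, output: str) -> list[Attribution]:
        raise NotImplementedError  # e.g. influence functions, gradient similarity
```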
Pretrained language models (PLMs) are key components in NLP, but they contain strong social biases. Quantifying these biases is challenging because current methods focusing on fill-the-mask objectives are sensitive to slight changes in the input. To address this, we propose a bias probing technique called LABDet for evaluating social bias in PLMs with a robust and language-agnostic method. For nationality as a case study, we show that LABDet “surfaces” nationality bias by training a classifier on top of a frozen PLM on non-nationality sentiment detection. We find consistent patterns of nationality bias across monolingual PLMs in six languages that align with historical and political context. We also show for English BERT that the bias surfaced by LABDet correlates well with bias in the pretraining data; thus, our work is one of the few studies that directly links pretraining data to PLM behavior. Finally, we verify LABDet’s reliability and applicability to different templates through an extensive set of robustness checks. We publicly share our code and dataset at https://github.com/akoksal/LABDet.
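A minimal sketch of the probing recipe the abstract describes, under our own assumptions (LABDet's actual templates and training setup live in the linked repository): freeze a PLM, train a small sentiment head on sentiment data that mentions no nationalities, then compare the head's scores on templates that differ only in the nationality mentioned.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
plm = AutoModel.from_pretrained("bert-base-uncased")
for p in plm.parameters():          # freeze the PLM; only the head trains
    p.requires_grad = False

head = torch.nn.Linear(plm.config.hidden_size, 2)  # trainable sentiment head

def sentiment_logits(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = plm(**batch).last_hidden_state[:, 0]  # [CLS] embeddings
    return head(hidden)

# (Training loop for `head` on non-nationality sentiment data omitted.)
# Probe with minimally different templates; the sentence pair is illustrative:
probe = ["This Turkish person is a neighbor.", "This German person is a neighbor."]
scores = torch.softmax(sentiment_logits(probe), dim=-1)[:, 1]
# A systematic gap in positive-sentiment probability across nationalities
# is read as surfaced bias.
```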
Christopher Akiki, Odunayo Ogundepo, Aleksandra Piktus, Xinyu Zhang, Akintunde Oladipo, Jimmy Lin, and Martin Potthast. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2023.
Transformers have a quadratic scaling of computational complexity with input size, which limits the context window size of large language models (LLMs) in both training and inference. Meanwhile, models based on retrieval-augmented generation (RAG) can better handle longer contexts by using a retrieval system to filter out unnecessary information. However, most RAG methods only perform retrieval based on the initial query, which may not work well for complex questions that require deeper reasoning. We introduce a novel approach, Inner Loop Memory Augmented Tree Retrieval (ILM-TR), involving inner-loop queries based not only on the query question itself but also on intermediate findings. At inference time, our model retrieves information from the RAG system, integrating data from lengthy documents at various levels of abstraction. Based on the retrieved information, the LLM generates text that is stored in an area named Short-Term Memory (STM), which is then used to formulate the next query. This process is repeated until the text in the STM converges. Our experiments demonstrate that ILM-TR offers improvements over traditional retrieval-augmented LLMs, particularly in long-context tests such as Multi-Needle In A Haystack (M-NIAH) and BABILong.
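A simplified sketch of the inner loop described above, under our own assumptions about the moving parts: `retrieve`, `generate_summary`, and `make_query` stand in for the retrieval system and the LLM, and the exact-match convergence test is ours (a real system would likely use a softer criterion).

```python
def ilm_tr_answer(question, retrieve, generate_summary, make_query,
                  max_iters=5):
    """Inner-loop retrieval: query, summarize into short-term memory (STM),
    derive the next query from the STM, and stop once the STM stops changing."""
    stm = ""                                   # short-term memory
    query = question
    for _ in range(max_iters):
        passages = retrieve(query)             # RAG over multi-level chunks
        new_stm = generate_summary(question, passages, stm)
        if new_stm == stm:                     # STM converged -> stop
            break
        stm = new_stm
        query = make_query(question, stm)      # next inner-loop query uses
                                               # intermediate findings, not
                                               # just the original question
    return stm                                 # final answer drawn from STM
```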
Applied Sciences, 2023, 13(8), 4946. Published April 14, 2023.
As Internet of Things devices are deployed on a large scale, location-based services are being increasingly utilized. Among these services, kNN (k-nearest neighbor) queries based on road network constraints have gained importance. This study focuses on CkNN (continuous k-nearest neighbor) queries for non-uniformly distributed moving objects under large-scale dynamic road network constraints, where the objects are continuously and periodically queried during their motion evolution. Present high-concurrency CkNN query processing under super-large-scale dynamic road networks faces problems such as high computational cost and low query efficiency. The aim of this study is to ensure the concurrency of nearest neighbor requests while shortening the response time and reducing the global computation costs. To address this issue, we propose the DVTG-Index (Dynamic V-Tree Double-Layer Grid Index), which intelligently adjusts the index granularity by merging and splitting subgraphs as the objects move, thereby filtering out unnecessary vertices. Based on the DVTG-Index, we further propose a DVTG-CkNN algorithm to calculate the initial kNN query and utilize the existing results to speed up the continuous query. Finally, extensive experiments on real road networks confirm the superior performance of our proposed method, which has significant practical applications in handling large-scale moving objects.
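To make the two-phase pattern concrete, here is a minimal sketch under our own simplifying assumptions: it uses plain Dijkstra distances over the road graph, whereas the actual DVTG-Index and DVTG-CkNN algorithm prune candidates far more aggressively. Only the reuse rule is shown: compute an initial kNN result, then on later evaluations re-rank the cached result and fall back to a full search only when some other object has moved inside the cached k-th distance.

```python
import heapq
from collections import namedtuple

Obj = namedtuple("Obj", ["oid", "vertex"])  # a moving object at a graph vertex

def dijkstra_dists(graph, src):
    """Shortest road-network distances from `src` to every vertex.
    `graph` maps each vertex to a list of (neighbor, edge_weight) pairs."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def knn(graph, objects, query_vertex, k):
    """Initial phase: rank all moving objects by network distance."""
    dist = dijkstra_dists(graph, query_vertex)
    return sorted(objects, key=lambda o: dist.get(o.vertex, float("inf")))[:k]

def continuous_knn(graph, objects, query_vertex, k, prev_result):
    """Continuous phase: reuse the previous result when it is still valid,
    i.e. no non-cached object has come closer than the cached k-th distance."""
    dist = dijkstra_dists(graph, query_vertex)
    cached = sorted(prev_result, key=lambda o: dist.get(o.vertex, float("inf")))
    kth = dist.get(cached[-1].vertex, float("inf"))
    others = (o for o in objects if o not in prev_result)
    if any(dist.get(o.vertex, float("inf")) < kth for o in others):
        return knn(graph, objects, query_vertex, k)  # cache invalidated
    return cached[:k]
```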