PLoS Computational Biology,
Journal Year: 2024,
Volume and Issue: 20(3), P. e1011978 - e1011978
Published: March 22, 2024
People often have to switch back and forth between different environments that come with problems of different volatilities. While volatile environments require fast learning (i.e., high learning rates), stable environments call for lower learning rates. Previous studies have shown that people adapt their learning rates, but it remains unclear whether they can also learn environment-specific learning rates and instantaneously retrieve them when revisiting an environment. Here, using optimality simulations and hierarchical Bayesian analyses across three experiments, we show that people use environment-specific learning rates when switching back and forth between two environments. We even observe a signature of these environment-specific learning rates when the volatility of both environments is suddenly the same. We conclude that humans can flexibly associate learning rates with environments, offering important insights for developing theories of meta-learning and context-specific control.
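
As a rough illustration of the computational idea in this abstract, the sketch below implements a delta-rule learner whose learning rate is looked up from the current environment, so a previously learned rate can be reused when that environment is revisited. The specific rates, reward probabilities, and switching schedule are invented for the example and are not the authors' model or data.

import numpy as np

rng = np.random.default_rng(0)

# Assumed environment-specific learning rates: high for the volatile
# context, low for the stable one (values are illustrative only).
learning_rates = {"volatile": 0.6, "stable": 0.1}

def delta_rule_run(outcomes, context):
    """Track a reward probability with a context-specific learning rate."""
    alpha = learning_rates[context]          # retrieved when the context is revisited
    estimate, trace = 0.5, []
    for r in outcomes:
        estimate += alpha * (r - estimate)   # standard delta-rule update
        trace.append(estimate)
    return trace

# Toy schedule: switch back and forth between the two environments.
for context, p_reward in [("volatile", 0.8), ("stable", 0.2), ("volatile", 0.3)]:
    outcomes = rng.binomial(1, p_reward, size=20)
    print(context, round(delta_rule_run(outcomes, context)[-1], 2))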
Nature Neuroscience,
Journal Year: 2023,
Volume and Issue: 26(8), P. 1438 - 1448
Published: July 20, 2023
Abstract
Memorization and generalization are complementary cognitive processes that jointly promote adaptive behavior. For example, animals should memorize safe routes to specific water sources and generalize from these memories to discover environmental features that predict new ones. These functions depend on systems consolidation mechanisms that construct neocortical memory traces from hippocampal precursors, but why consolidation only applies to a subset of memories is unclear. Here we introduce a neural network formalization of systems consolidation that reveals an overlooked tension: unregulated memory transfer can cause overfitting and harm generalization in an unpredictable world. We resolve this tension by postulating that memories only consolidate when doing so aids generalization. This framework accounts for partial hippocampal–cortical memory transfer and provides a normative principle for reconceptualizing numerous observations in the field. Generalization-optimized systems consolidation thus offers insight into how adaptive behavior benefits from complementary learning systems specialized for memorization and generalization.
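
The gating idea, consolidate a memory only when doing so aids generalization, can be sketched with a toy regression stand-in for the cortical network: a hippocampal example is transferred only if it reduces error on held-out data. The linear model, noise levels, and acceptance test below are assumptions made for illustration, not the paper's network formalization.

import numpy as np

rng = np.random.default_rng(1)

# Toy "world": y depends predictably on x, plus unpredictable noise.
def sample(n, noise=1.0):
    x = rng.uniform(-1, 1, n)
    return x, 2.0 * x + noise * rng.normal(size=n)

def fit_slope(xs, ys):
    # Least-squares slope through the origin: the "neocortical" regularity.
    return float(np.dot(xs, ys) / np.dot(xs, xs))

cx, cy = sample(5, noise=0.2)                 # memories already transferred to "cortex"
consolidated_x, consolidated_y = list(cx), list(cy)
val_x, val_y = sample(50)                     # held-out probe of generalization

def val_error(xs, ys):
    w = fit_slope(np.array(xs), np.array(ys))
    return float(np.mean((val_y - w * val_x) ** 2))

# Hippocampal candidates: transfer a memory only if it lowers held-out error.
n_consolidated = 0
for hx, hy in zip(*sample(10, noise=3.0)):
    before = val_error(consolidated_x, consolidated_y)
    after = val_error(consolidated_x + [hx], consolidated_y + [hy])
    if after < before:                        # generalization-optimized gate
        consolidated_x.append(hx)
        consolidated_y.append(hy)
        n_consolidated += 1

print(f"consolidated {n_consolidated} of 10 candidate memories")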
ACM Computing Surveys,
Journal Year: 2024,
Volume and Issue: 56(12), P. 1 - 41
Published: May 3, 2024
Despite its astounding success in learning from deep, multi-dimensional data, the performance of deep learning declines on new, unseen tasks, mainly due to its focus on same-distribution prediction. Moreover, deep learning is notorious for poor generalization from few samples. Meta-learning is a promising approach that addresses these issues by adapting to new tasks with few-shot datasets. This survey first briefly introduces meta-learning and then investigates state-of-the-art methods and recent advances in (i) metric-based, (ii) memory-based, and (iii) learning-based methods. Finally, current challenges and insights for future research are discussed.
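
As a concrete, heavily simplified illustration of the metric-based family mentioned above, the sketch below classifies few-shot queries by their distance to class prototypes, in the spirit of prototypical networks. The fixed 2-D "embedding" and the toy episode are assumptions made for the example, not a method from the survey.

import numpy as np

rng = np.random.default_rng(2)

def prototype_classify(support_x, support_y, query_x):
    """Metric-based few-shot classification: summarize each class by the mean of
    its (embedded) support examples and assign queries to the nearest prototype."""
    classes = np.unique(support_y)
    prototypes = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Toy 3-way, 5-shot episode with 2-D "embeddings" (an identity embedding is
# assumed here; a real method would learn the embedding network).
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
support_y = np.repeat([0, 1, 2], 5)
support_x = centers[support_y] + 0.3 * rng.normal(size=(15, 2))
query_y = np.repeat([0, 1, 2], 10)
query_x = centers[query_y] + 0.3 * rng.normal(size=(30, 2))

pred = prototype_classify(support_x, support_y, query_x)
print("episode accuracy:", (pred == query_y).mean())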
IEEE Access,
Journal Year: 2023,
Volume and Issue: 11, P. 11880 - 11902
Published: Jan. 1, 2023
Next-generation wireless communication networks will benefit from beamforming gain to utilize higher bandwidths at millimeter wave (mmWave) and terahertz (THz) bands. For high directional gain, a beam management (BM) framework acquires and tracks the optimal downlink and uplink beam pairs through an exhaustive beam scan. However, for narrower beams at higher carrier frequencies, this leads to a huge measurement overhead that negatively impacts beam acquisition and tracking. Moreover, the volatility of mmWave and THz channels, random user mobility patterns, and environmental changes further complicate the BM process. Consequently, machine learning (ML) algorithms, which can identify and learn complex patterns and track channel dynamics, have been identified as a remedy. In this article, we provide an overview of existing ML-based mmWave/THz beam management and beam tracking techniques and highlight the key characteristics of each framework. By surveying recent studies, we identify open research challenges and give recommendations that can serve as a future direction for researchers in this area.
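
The measurement-overhead argument can be made concrete with a toy count: an exhaustive scan over transmit and receive codebooks costs N_tx x N_rx measurements, while a (hypothetical) ML predictor that proposes only k candidate transmit beams cuts this to k x N_rx. The codebook sizes, the random RSS table, and the oracle-style candidate selection below are placeholders for illustration only.

import numpy as np

rng = np.random.default_rng(3)

N_TX, N_RX = 64, 16                      # codebook sizes at the two ends
rss = rng.normal(size=(N_TX, N_RX))      # stand-in for measured RSS per beam pair

# Exhaustive beam scan: measure every pair, keep the strongest.
exhaustive_best = np.unravel_index(np.argmax(rss), rss.shape)
exhaustive_cost = N_TX * N_RX            # 1024 measurements for this codebook

# Assumed ML-assisted variant: a (hypothetical) predictor proposes k candidate
# transmit beams from context (location, past beams, sub-6 GHz data, ...) and
# only those are swept, shrinking the measurement budget to k * N_RX.
k = 4
candidate_tx = np.argsort(rss.max(axis=1))[-k:]   # oracle proposals, for illustration
sub_rss = rss[candidate_tx]
i, j = np.unravel_index(np.argmax(sub_rss), sub_rss.shape)
ml_best = (int(candidate_tx[i]), int(j))
ml_cost = k * N_RX                       # 64 measurements

print("exhaustive:", exhaustive_best, "cost", exhaustive_cost)
print("ML-assisted:", ml_best, "cost", ml_cost)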
Proceedings of the IEEE,
Journal Year: 2023,
Volume and Issue: 111(6), P. 623 - 652
Published: June 1, 2023
While Moore's law has driven exponential computing power expectations, its nearing end calls for new avenues for improving the overall system performance. One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic engineering represents a paradigm shift in computing based on the implementation of spiking neural network architectures in which processing and memory are tightly co-located. In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity at which this co-location is realized and comparing design approaches that focus on replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down). First, we present the analog, mixed-signal, and digital circuit design styles, identifying the boundary between processing and memory achieved through time multiplexing, in-memory computation, and novel devices. Then, we highlight the key tradeoffs for each of the bottom-up and top-down approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify the necessary synergies and missing elements required to achieve a competitive advantage for neuromorphic systems over conventional machine-learning accelerators in edge computing applications, and we outline the key ingredients of a framework toward neuromorphic intelligence.
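
For readers unfamiliar with the spiking building block referenced here, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, whose membrane state stays local to the unit, loosely echoing the co-location of memory and processing. The parameters and input are illustrative and do not correspond to any specific circuit surveyed in the paper.

import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the basic building block of
# the spiking neural networks discussed above. Parameters are illustrative.
def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r_m=1.0):
    v, spikes = v_rest, []
    for i_t in input_current:
        # Leaky integration of the membrane potential.
        v += (dt / tau) * (-(v - v_rest) + r_m * i_t)
        if v >= v_thresh:           # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset             # reset; state remains local to the neuron
        else:
            spikes.append(0)
    return np.array(spikes)

# Step input: silence, a suprathreshold current, then silence again.
current = np.concatenate([np.zeros(50), 1.5 * np.ones(200), np.zeros(50)])
print("spike count:", lif_simulate(current).sum())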
Transfer learning, the re-application of previously learned higher-level regularities to novel input, is a key challenge in cognition. While previous empirical studies investigated human transfer learning in supervised or reinforcement learning of explicit knowledge, it remains unknown whether such transfer also occurs during the naturally more common implicit and unsupervised learning and, if so, how it is related to memory consolidation. We compared the transfer of newly acquired explicit and implicit abstract knowledge by extending a visual statistical learning paradigm to a transfer learning context. We found transfer, but with important differences depending on the explicitness/implicitness of the acquired knowledge. Observers acquiring explicit knowledge during initial learning could transfer the learned structures immediately. In contrast, observers acquiring the same amount of implicit knowledge showed the opposite effect, a structural interference during transfer. However, with sleep between the learning phases, implicit observers, while still remaining implicit, switched their behaviour pattern as explicit observers did. This effect was specific to sleep and was not found after non-sleep consolidation. Our results highlight similarities and differences between explicit and implicit learning of generalizable knowledge, with both relying on consolidation for the restructuring of internal representations.
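
As a minimal sketch of the kind of regularity a visual statistical learning paradigm embeds, the code below builds scenes from fixed shape pairs and counts co-occurrences, showing that within-pair statistics dominate cross-pair ones. The shape inventory, pair structure, and scene generation are invented for the example and are not the authors' stimuli.

import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(4)

# Assumed inventory: shapes A-F grouped into three fixed pairs, as in a typical
# visual statistical learning design (labels and layout are hypothetical).
pairs = [("A", "B"), ("C", "D"), ("E", "F")]

# Each scene shows two randomly chosen pairs; only within-pair co-occurrence is lawful.
scenes = []
for _ in range(200):
    chosen = rng.choice(len(pairs), size=2, replace=False)
    scenes.append([shape for idx in chosen for shape in pairs[idx]])

# The statistic observers are thought to pick up: how often two shapes co-occur.
cooc = Counter()
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        cooc[(a, b)] += 1

print("within-pair co-occurrence (A,B):", cooc[("A", "B")])
print("cross-pair co-occurrence (A,C):", cooc[("A", "C")])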