bioRxiv (Cold Spring Harbor Laboratory),
Journal year: 2024, Issue: unknown
Published: Oct. 27, 2024
Abstract
Prior knowledge accelerates subsequent learning of similarly structured problems - a phenomenon termed “learning to learn” - by forming and reusing generalizable neural representations, i.e., the schemas. However, the stability-plasticity dilemma, i.e., how to exploit stable schemas to facilitate learning while remaining flexible towards possible changes, is not well understood. We hypothesize that restricting schemas to a specific functional, e.g., decision-making, subspace and making it orthogonal to other subspaces allows the brain to balance stability and plasticity. To test it, we trained three macaques on visuomotor mapping tasks and recorded neural activity in the dorsolateral premotor cortex. By delineating the decision and stimulus subspaces, we identified a schema-like manifold within only the decision subspace. The reuse of this manifold significantly facilitated subsequent learning. In addition, the decision subspace exhibited a trend to be orthogonal to the stimulus subspace, minimizing interference between these two domains. Our results revealed that functional domains can preserve useful schemas by maintaining orthogonality with other domains while allowing for adaptation to new environments, thereby resolving the dilemma. This finding provides insights into the mechanisms underlying the brain’s capability to learn both fast and flexibly, which may also inspire more efficient algorithms for artificial intelligence systems working in open, dynamic environments.
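A central quantity in this study is the degree of orthogonality between the decision and stimulus subspaces. As a minimal illustrative sketch (not the authors' analysis pipeline), one standard way to quantify this is to extract a low-dimensional PCA basis for each set of condition-averaged responses and compute the principal angles between the two subspaces; angles close to 90 degrees indicate near-orthogonality. The function names and toy data below are hypothetical.

```python
import numpy as np

def subspace_basis(X, k):
    """Top-k PCA basis (orthonormal columns) of activity X (neurons x conditions)."""
    Xc = X - X.mean(axis=1, keepdims=True)    # center each neuron's response
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k]

def principal_angles(A, B):
    """Principal angles (radians) between subspaces spanned by bases A and B."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))   # all angles near 90 deg => orthogonal

# Toy usage: hypothetical 'decision' and 'stimulus' condition averages, 100 neurons.
rng = np.random.default_rng(0)
dec = subspace_basis(rng.normal(size=(100, 12)), k=3)
stim = subspace_basis(rng.normal(size=(100, 12)), k=3)
print(np.degrees(principal_angles(dec, stim)))  # typically large for random high-D subspaces
```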
Frontiers in Computational Neuroscience,
Journal year: 2024, Issue: 18
Published: March 22, 2024
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that allow the development of cyber-physical systems, similar if not superior to humans. The common wisdom is that such intelligence might be provided by AI (Artificial Intelligence), a claim supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a quite broad sense, encompassing LLMs and a lot of other things, without any unifying principle, but self-motivating for the success in various areas. The current view of AI mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied approach that avoids the trap of dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions between humans and artificial agents is necessary for building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in AI. ACo is a work in progress that can take advantage of a number of research threads, some of them antecedent to the early attempts to define AI concepts and methods. In the rest of the paper we will consider the building blocks that need to be re-visited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the role of social interaction.
Journal of Neuroscience,
Journal year: 2024, Issue: 44(24), pp. e0022242024 - e0022242024
Published: April 11, 2024
Memory reactivation during sleep is thought to facilitate memory consolidation. Most reactivation research has examined how reactivation of specific facts, objects, and associations benefits their overall retention. However, our memories are not unitary, and not all features of a memory persist in tandem over time. Instead, memories are transformed, with some features strengthened and others weakened. Does reactivation drive this transformation? We leveraged the Targeted Memory Reactivation technique in an object category learning paradigm to examine this question. Participants (20 female, 14 male) learned three categories of novel objects, where each object had unique, distinguishing features as well as features shared with other members of its category. We used a real-time EEG protocol to cue the reactivation of these objects at moments optimized to generate reactivation events. We found that reactivation improved memory for unique features while worsening memory for shared features, suggesting a differentiation process. The results indicate that reactivation does not act holistically on object memories, instead supporting a transformation in which some features are enhanced over others.
Transfer learning, the re-application of previously learned higher-level regularities to novel input, is a key challenge in cognition. While previous empirical studies investigated human transfer learning in supervised or reinforcement learning settings and for explicit knowledge, it is unknown whether such transfer occurs during the naturally more common implicit and unsupervised learning and, if so, how it is related to memory consolidation. We compared the transfer of newly acquired abstract knowledge by extending a visual statistical learning paradigm to a transfer learning context. We found transfer but with important differences depending on the explicitness/implicitness of the acquired knowledge. Observers acquiring explicit knowledge during initial learning could transfer the learned structures immediately. In contrast, observers with the same amount of implicit knowledge showed the opposite effect, a structural interference during transfer. However, with sleep between the learning phases, implicit observers, while still remaining implicit, switched their behaviour pattern and transferred the learned structures as explicit observers did. This effect was specific to sleep and did not appear after non-sleep consolidation. Our results highlight similarities between explicit and implicit learning of generalizable knowledge, with both relying on consolidation for restructuring internal representations.
One of the most fundamental and striking limitations of human cognition appears to be a constraint in the number of control-dependent processes that can be executed at one time. This motivates one of the most influential tenets of cognitive psychology: that control relies on a central, limited-capacity processing mechanism that imposes seriality on processing. Here we provide a formally explicit challenge to this view. We argue that the causality is reversed: constraints on control-dependent behavior reflect a rational bound that control mechanisms impose on processing, to prevent the interference that arises if two or more tasks engage the same representations required to perform those tasks. We use both mathematical and numerical analyses of shared representations in neural network architectures as a formal grounding for this argument–historically known as "multiple-resource theory"–and demonstrate its ability to explain a wide range of phenomena associated with control-dependent behavior. Furthermore, we argue that the need for control, arising from the sharing of representations by different tasks, reflects the optimization of a trade-off intrinsic to network architectures: the increase in learning efficacy afforded by shared representations, versus the efficiency of parallel processing (i.e., multitasking) afforded by task-dedicated representations. The theory helps frame a rigorous, normative approach to the trade-off between control and automaticity, to how control relates to other principles concerning brain function, and to computation more generally.
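The core mechanism of the argument is easy to demonstrate numerically: if two tasks route through the same internal representation, engaging either task produces crosstalk at the outputs, which control must suppress by serializing processing, whereas task-dedicated representations allow clean parallel execution. The toy linear network below is a deliberately simplified sketch of that idea, not the authors' model; all weights and scenarios are hypothetical.

```python
import numpy as np

# Task 1 maps input channel x1 to output y1; task 2 maps x2 to y2.
# Shared representation: both tasks route through one common hidden unit.
W_in_shared  = np.array([[1.0, 1.0]])    # hidden <- (x1, x2)
W_out_shared = np.array([[1.0], [1.0]])  # (y1, y2) <- hidden

# Dedicated representation: each task gets its own hidden unit.
W_in_sep, W_out_sep = np.eye(2), np.eye(2)

def run(x, W_in, W_out):
    return W_out @ (W_in @ x)

print(run(np.array([1.0, 0.0]), W_in_shared, W_out_shared))  # [1. 1.]: task 1 leaks into y2
print(run(np.array([1.0, 1.0]), W_in_shared, W_out_shared))  # [2. 2.]: multitasking crosstalk
print(run(np.array([1.0, 1.0]), W_in_sep, W_out_sep))        # [1. 1.]: clean parallel execution
```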
Communications Psychology,
Journal year: 2024, Issue: 2(1)
Published: April 9, 2024
We all possess a mental library of schemas that specify how different types of events unfold. How are these schemas acquired? A key challenge is that learning a new schema can catastrophically interfere with old knowledge. One solution to this dilemma is to use interleaved training to learn a single representation that accommodates all schemas. However, another class of models posits that catastrophic interference can be avoided by splitting off new representations when large prediction errors occur. A key differentiating prediction is that, according to these splitting models, interference is prevented even under blocked training curricula. We conducted a series of semi-naturalistic experiments and simulations with a Bayesian neural network to compare the predictions made by the "splitting" versus "non-splitting" hypotheses of schema learning. We found better performance in blocked compared to interleaved curricula, and explain these results using a model that incorporates representational splitting in response to large prediction errors. In a follow-up experiment, we validated the model's prediction that inserting blocked training early in learning leads to better performance than inserting it later. Our results suggest that learning environments (i.e., curricula) play an important role in shaping schema composition.
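The "splitting" hypothesis can be caricatured with a very small online learner: keep a pool of schema prototypes, refine the best-matching prototype when its prediction error is small, and split off a new prototype when the error is large. The sketch below is a hypothetical toy (running-mean predictors with a fixed error threshold), not the Bayesian neural network used in the paper; under a blocked curriculum the second block cleanly triggers a split instead of overwriting the first schema.

```python
import numpy as np

def split_learn(observations, threshold=2.0, lr=0.2):
    """Split-on-error learner: each schema is a running-mean predictor.
    A new schema is split off when the best schema's prediction error
    exceeds `threshold`; otherwise that schema is updated in place."""
    schemas = [observations[0].astype(float)]
    assignments = [0]
    for obs in observations[1:]:
        errors = [np.linalg.norm(obs - s) for s in schemas]
        best = int(np.argmin(errors))
        if errors[best] > threshold:              # large prediction error -> split
            schemas.append(obs.astype(float))
            best = len(schemas) - 1
        else:                                     # small error -> refine existing schema
            schemas[best] += lr * (obs - schemas[best])
        assignments.append(best)
    return schemas, assignments

# Blocked curriculum: two prototypes presented one block at a time.
rng = np.random.default_rng(1)
block_a = rng.normal(loc=0.0, scale=0.3, size=(20, 2))
block_b = rng.normal(loc=5.0, scale=0.3, size=(20, 2))
schemas, assign = split_learn(np.vstack([block_a, block_b]))
print(len(schemas), assign[18:23])  # 2 schemas; assignments flip at the block boundary
```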
Scientific Reports,
Journal year: 2024, Issue: 14(1)
Published: July 22, 2024
Abstract
It has been proposed that, when processing a stream of events, humans divide their experiences in terms of inferred latent causes (LCs) to support context-dependent learning. However, when shared structure is present across contexts, it is still unclear how the “splitting” of LCs and the learning of shared structure can be simultaneously achieved. Here, we present the Latent Cause Network (LCNet), a neural network model of LC inference. Through learning, LCNet naturally stores structure that is shared across tasks in its network weights. Additionally, it represents context-specific structure using a context module, controlled by a Bayesian nonparametric inference algorithm, which assigns a unique context vector for each inferred LC. Across three simulations, we found that LCNet could (1) extract shared structure across tasks in a function learning task while avoiding catastrophic interference, (2) capture human data on curriculum effects in schema learning, and (3) infer the underlying event structure when processing naturalistic videos of daily events. Overall, these results demonstrate a computationally feasible approach to reconciling shared structure and context-specific structure across contexts, one that is scalable from laboratory experiment settings to naturalistic settings.
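The Bayesian nonparametric inference that controls LCNet's context module is, in the paper's framing, an algorithm that decides when to reuse an existing LC and when to posit a new one. A standard ingredient of such algorithms is a Chinese Restaurant Process (CRP) prior, which allocates probability to existing causes in proportion to how often they have been used, plus a fixed mass for a brand-new cause. The snippet below is a rough, hypothetical sketch of that posterior computation, not the authors' implementation; `crp_posterior` and its arguments are invented for illustration.

```python
import numpy as np

def crp_posterior(counts, log_likes, alpha=1.0):
    """Posterior over latent causes: a CRP prior (usage counts of existing
    causes, plus `alpha` mass for a new cause) combined with per-cause
    log-likelihoods of the current observation. `log_likes` must have
    len(counts) + 1 entries; the last scores the observation under a new cause."""
    prior = np.append(counts, alpha)
    prior = prior / prior.sum()
    log_post = np.log(prior) + np.asarray(log_likes)
    post = np.exp(log_post - log_post.max())       # stable softmax-style normalization
    return post / post.sum()

# Toy usage: two existing causes used 8 and 2 times; the observation fits
# cause 0 poorly, cause 1 well, and a hypothetical new cause moderately.
post = crp_posterior(counts=np.array([8.0, 2.0]),
                     log_likes=[-10.0, -1.0, -3.0], alpha=0.5)
print(post.round(3))  # probability mass concentrates on cause 1
# Each inferred cause would then index a unique context vector fed to the network.
```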