Proceedings of the National Academy of Sciences, 2020, 117(51), pp. 32329-32339. Published: Dec. 7, 2020.
Significance
Visual short-term memory (VSTM) is the ability to actively maintain visual information for a short period of time. Classical models posit that VSTM is achieved via persistent firing of neurons in the prefrontal cortex. Leveraging the unique spatiotemporal resolution of intracranial EEG recordings and the analytical power of deep neural networks in uncovering the code of visual processing, our results suggest that visual information is first dynamically extracted into multiple representational formats, including a higher-order visual format and an abstract semantic format. Both formats are then stably maintained across an extended delay, coupled to the phases of hippocampal low-frequency activity. These results indicate that human VSTM is a highly dynamic process involving rich, multifaceted representations, and they contribute to a mechanistic understanding of VSTM.
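The statement above relates intracranial EEG activity to representational formats derived from a deep neural network. One common way to make such a comparison is representational similarity analysis; the sketch below is a generic illustration with random placeholder data, not the paper's actual pipeline, and the function names and array shapes are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    the response patterns of every pair of stimuli (rows of `patterns`)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(neural_rdm: np.ndarray, model_rdm: np.ndarray) -> float:
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    return spearmanr(neural_rdm[iu], model_rdm[iu]).correlation

# Toy data: 20 stimuli, 50 recording channels vs. 512 features from one DNN layer.
rng = np.random.default_rng(0)
neural_patterns = rng.normal(size=(20, 50))   # e.g., per-stimulus iEEG responses
dnn_features = rng.normal(size=(20, 512))     # e.g., per-stimulus layer activations
print(rsa_score(rdm(neural_patterns), rdm(dnn_features)))
```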
2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 2641-2651. Published: Oct. 1, 2021.
Spiking Neural Networks (SNNs) have attracted enormous research interest due to their temporal information processing capability, low power consumption, and high biological plausibility. However, the formulation of efficient, high-performance learning algorithms for SNNs is still challenging. Most existing methods learn weights only and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron. These parameters are typically chosen to be the same for all neurons, which limits the diversity of neurons and thus the expressiveness of the resulting SNNs. In this paper, we take inspiration from the observation that membrane-related parameters differ across brain regions, and propose a training algorithm capable of learning not only the synaptic weights but also the membrane time constants of SNNs. We show that incorporating learnable membrane time constants can make the network less sensitive to initial values and can speed up learning. In addition, we reevaluate the pooling methods in SNNs and find that max-pooling does not lead to significant information loss and has the advantages of low computation cost and binary compatibility. We evaluate the proposed method on image classification tasks, using both traditional static datasets (MNIST, Fashion-MNIST, CIFAR-10) and neuromorphic datasets (N-MNIST, CIFAR10-DVS, DVS128 Gesture). The experimental results show that the proposed method outperforms the state-of-the-art accuracy on nearly all datasets while using fewer time-steps. Our code is available at https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron.
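The core idea, a leaky integrate-and-fire neuron whose membrane time constant is itself a trainable parameter, can be sketched in a few lines of PyTorch. This is a minimal illustration written for this summary, not the implementation in the linked repository; the class name, the sigmoid parameterization of 1/tau, and the surrogate gradient used here are assumptions of the sketch.

```python
import math
import torch
import torch.nn as nn

class ATanSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, arctan-shaped surrogate gradient
    in the backward pass (so the network can be trained with backprop)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output / (1.0 + (math.pi * x) ** 2)

class PLIFNeuron(nn.Module):
    """Leaky integrate-and-fire neuron with a learnable membrane time constant.

    The decay 1/tau is parameterized as sigmoid(w), which keeps it in (0, 1);
    w is trained jointly with the synaptic weights of the surrounding layers.
    """
    def __init__(self, init_tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        # Choose w so that sigmoid(w) = 1 / init_tau at initialization.
        self.w = nn.Parameter(torch.tensor(-math.log(init_tau - 1.0)))
        self.v_threshold = v_threshold
        self.v = None  # membrane potential; call reset() between sequences

    def reset(self):
        self.v = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)
        decay = torch.sigmoid(self.w)                       # learnable 1 / tau
        self.v = self.v + decay * (x - self.v)              # leaky integration
        spike = ATanSpike.apply(self.v - self.v_threshold)  # fire at threshold
        self.v = self.v * (1.0 - spike)                     # hard reset on spike
        return spike

# Usage: drive one neuron layer with a constant input current for 4 time-steps.
neuron = PLIFNeuron(init_tau=2.0)
x = torch.rand(8, 10)            # batch of 8 samples, 10 channels
for _ in range(4):
    out = neuron(x)
print(out.shape, torch.sigmoid(neuron.w).item())
```

In the paper itself, such a neuron layer is inserted into convolutional SNNs and trained end to end with surrogate gradients over multiple time-steps.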
Current Biology, 2022, 32(17), pp. 3676-3689.e5. Published: July 20, 2022.
We tested humans, rats, and RL agents on a novel modular maze. Humans and rats were remarkably similar in their choice of trajectories. Both species' behavior was most consistent with agents utilizing the successor representation (SR), and both also displayed features of model-based planning in early trials.
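For context on the SR mentioned above: the successor representation caches, for each state, the expected discounted future occupancy of every other state under a fixed policy, and it has a closed form when the policy's transition matrix is known. A minimal NumPy sketch follows; the toy corridor maze and function name are illustrative, not taken from the paper.

```python
import numpy as np

def successor_representation(P: np.ndarray, gamma: float = 0.95) -> np.ndarray:
    """Closed-form successor representation for a fixed policy.

    P[s, s'] is the probability of moving from state s to s' under the policy;
    M[s, s'] is the expected discounted number of future visits to s' from s:
        M = I + gamma * P + gamma^2 * P^2 + ... = (I - gamma * P)^(-1)
    """
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

# Toy example: a 4-state corridor with a random-walk policy.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.5],
])
M = successor_representation(P, gamma=0.9)
# State values follow directly from the SR and a reward vector R: V = M @ R.
R = np.array([0.0, 0.0, 0.0, 1.0])
print(M @ R)
```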
APL Machine Learning, 2024, 2(2). Published: May 9, 2024.
Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs' operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to improve these networks' capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. In this review, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.
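As a concrete example of the kind of biologically plausible mechanism such reviews discuss, a local Hebbian plasticity update adjusts each weight using only pre- and postsynaptic activity, with no global error signal. The specific rule, names, and constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.01, decay=0.001):
    """One local Hebbian plasticity step: co-active units strengthen their connection.

    W    : (n_post, n_pre) weight matrix
    pre  : (n_pre,)  presynaptic activity
    post : (n_post,) postsynaptic activity
    The outer product strengthens co-active connections; the decay term keeps
    weights bounded (a simple stand-in for homeostatic normalization).
    """
    return W + lr * np.outer(post, pre) - decay * W

# Toy usage: one update for a 3-input, 2-output layer.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))
pre = np.array([1.0, 0.0, 1.0])
post = W @ pre
W = hebbian_update(W, pre, post)
print(W)
```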
Frontiers in Artificial Intelligence, 2025, 7. Published: Feb. 12, 2025.
Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how Large Language Models (LLMs) such as ChatGPT work (their vast text databases, statistics, vector representations, huge number of parameters, next-word training, etc.). However, none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some to conclude that ChatGPT actually understands. It is not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign “biases”: convergent constraints that emerge at the LLM scale and that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at scale, and they are closely linked to what ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These convergent biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the “mirroring” of language production and comprehension, (4) iconicity at scale, (5) computational counterparts of human “categorical perception” in category learning by neural nets, and perhaps (6) a conjecture by Chomsky about the laws of thought. The exposition takes the form of a dialogue with ChatGPT-4.
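For readers less familiar with the “next-word training” mentioned above, the objective itself is just cross-entropy on the next token. The toy PyTorch sketch below illustrates only that objective; the tiny embedding-plus-linear model stands in for a real transformer and is an assumption of the sketch.

```python
import torch
import torch.nn as nn

# Toy "next-word training" step: an embedding + linear model predicts the next
# token id from the current one, trained with cross-entropy. Real LLMs use deep
# transformers over long contexts; this only illustrates the training objective.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (64,))   # a toy token sequence
inputs, targets = tokens[:-1], tokens[1:]      # predict token t+1 from token t

logits = model(inputs)                         # shape: (63, vocab_size)
loss = loss_fn(logits, targets)
loss.backward()
optimizer.step()
print(float(loss))
```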