The ability to control a behavioral task or stimulate neural activity based on animal behavior in real-time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train networks to accurately quantify a wide variety of behaviors. Here, we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), an additional forward-prediction module for zero-latency feedback, and a dynamic-cropping mode for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai and (3) AutoPilot. Lastly, we benchmarked performance on a range of systems so that experimentalists can easily decide what hardware is required for their needs.
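To make the closed-loop workflow concrete, below is a minimal sketch of real-time inference with the dlclive Python package, roughly following its documented DLCLive/Processor pattern. The model path, the blank stand-in frame, and the TriggerProcessor class are illustrative placeholders; exact signatures and hardware hooks should be checked against the DLC-Live! documentation.

    import numpy as np
    from dlclive import DLCLive, Processor  # pip install deeplabcut-live

    class TriggerProcessor(Processor):
        """Illustrative processor: inspect each pose and decide whether to
        trigger external hardware (the hardware call itself is omitted)."""
        def process(self, pose, **kwargs):
            # pose is an array of (x, y, confidence) rows, one per keypoint
            snout_x, snout_y, conf = pose[0]
            if conf > 0.9 and snout_x > 320:
                pass  # e.g., send a TTL pulse here
            return pose

    # path to an exported DeepLabCut model directory (placeholder)
    dlc_live = DLCLive("/path/to/exported_model", processor=TriggerProcessor())

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
    dlc_live.init_inference(frame)   # warm up the network on the first frame
    pose = dlc_live.get_pose(frame)  # low-latency pose for each subsequent frame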
Videos of animal behavior are used to quantify researcher-defined behaviors of interest to study neural function, gene mutations, and pharmacological therapies. Behaviors are often scored manually, which is time-consuming, limited to a few behaviors, and variable across researchers. We created DeepEthogram: software that uses supervised machine learning to convert raw video pixels into an ethogram, the behaviors of interest present in each frame. DeepEthogram is designed to be general-purpose and applicable across species, behaviors, and video-recording hardware. It uses convolutional neural networks to compute motion, extract features from motion and images, and classify features into behaviors. Behaviors are classified with above 90% accuracy on single frames in videos of mice and flies, matching expert-level human performance. DeepEthogram accurately predicts rare behaviors, requires little training data, and generalizes across subjects. A graphical interface allows beginning-to-end analysis without end-user programming. DeepEthogram's rapid, automatic, and reproducible labeling of researcher-defined behaviors may accelerate and enhance behavioral analysis. Code is available at: https://github.com/jbohnslav/deepethogram.
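As a conceptual illustration (not the DeepEthogram API itself), an ethogram can be thought of as a frames-by-behaviors binary matrix obtained by thresholding per-frame classifier probabilities. The behavior labels, probabilities, and threshold below are all placeholders.

    import numpy as np

    behaviors = ["groom", "rear", "walk"]           # illustrative labels
    rng = np.random.default_rng(0)
    probs = rng.random((1000, len(behaviors)))      # stand-in for per-frame classifier outputs

    threshold = 0.5                                 # per-behavior thresholds are also possible
    ethogram = (probs >= threshold).astype(int)     # 1 = behavior present in that frame

    # e.g., fraction of frames containing each behavior
    print(dict(zip(behaviors, ethogram.mean(axis=0).round(2))))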
Communications Biology, Journal year: 2022, Issue: 5(1), Published: Nov. 18, 2022
Abstract
Quantification and detection of the hierarchical organization of behavior is a major challenge in neuroscience. Recent advances in markerless pose estimation enable the visualization of high-dimensional spatiotemporal behavioral dynamics of animal motion. However, robust and reliable technical approaches are needed to uncover the underlying structure of these data and to segment behavior into discrete, hierarchically organized motifs. Here, we present an unsupervised probabilistic deep learning framework that identifies behavioral structure from deep variational embeddings of animal motion (VAME). Using a mouse model of beta amyloidosis as a use case, we show that VAME not only identifies discrete behavioral motifs, but also captures a hierarchical representation of each motif's usage. The approach allows for the grouping of motifs into communities and reveals differences in community-specific motif usage of individual mouse cohorts that were undetectable by human visual observation. Thus, the framework provides a robust segmentation of animal motion that is applicable to a wide range of experimental setups, models and conditions without requiring supervised or a-priori human interference.
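As a rough illustration of the idea behind this kind of unsupervised segmentation, and only as a stand-in (VAME itself learns the embedding with a recurrent variational autoencoder and segments it probabilistically), the sketch below windows an aligned keypoint time series, embeds the windows, and clusters the embeddings into candidate motifs. The array shapes, window length, and number of motifs are all illustrative.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # stand-in for egocentrically aligned keypoint trajectories: frames x (keypoints * 2)
    rng = np.random.default_rng(1)
    traj = rng.standard_normal((5000, 16))

    win = 30                                        # ~0.5 s at 60 fps (illustrative)
    windows = np.stack([traj[i:i + win].ravel()     # sliding spatiotemporal windows
                        for i in range(len(traj) - win)])

    embedding = PCA(n_components=10).fit_transform(windows)   # low-dimensional embedding
    motifs = KMeans(n_clusters=15, n_init=10).fit_predict(embedding)

    # 'motifs' assigns each time window to a candidate behavioral motif
    print(np.bincount(motifs))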
Molecular Psychiatry, Journal year: 2023, Issue: 28(3), pp. 993-1003, Published: Jan. 12, 2023
Abstract
Mental disorders are a significant cause of disability worldwide. They profoundly affect individuals' well-being and impose a substantial financial burden on societies and governments. However, despite decades of extensive research, the effectiveness of current therapeutics for mental disorders is often not satisfactory or not well tolerated by the patient. Moreover, most novel therapeutic candidates fail in clinical testing during the most expensive phases (II and III), which results in the withdrawal of pharma companies from investing in the field. It also brings into question the use of animal models in preclinical studies to discover new therapeutic agents and to predict their potential for treating mental illnesses in humans. Here, we focus on rodents as animal models and propose that they remain essential for investigating candidate agents' mechanisms of action, safety, and efficiency. Nevertheless, we argue that there is a need for a paradigm shift in the methodologies used to measure rodent behavior in laboratory settings. Specifically, behavioral readouts obtained from short, highly controlled tests in impoverished environments and social contexts, used as proxies for complex human conditions, might have limited face validity. Conversely, animals monitored in more naturalistic environments over long periods display ethologically relevant behaviors that reflect evolutionarily conserved endophenotypes of translational value. We present how semi-natural setups, in which groups of mice are individually tagged and video recorded continuously, can be made attainable and affordable. Open-source machine-learning techniques for pose estimation enable continuous and automatic tracking of individual body parts over long periods. The trajectories of each animal can further be subjected to supervised machine learning algorithms for the detection of specific behaviors (e.g., chasing, biting, fleeing) or to unsupervised detection of behavioral motifs (e.g., stereotypical movements that are harder to name and label manually), as sketched below. Compared with observing animals in the wild, such setups are compatible with neural and genetic manipulation techniques. As such, they can be used to study the neurobiological mechanisms underlying naturalistic behavior. Hence, we suggest that such an approach combines the best of classical ethology and the reductive behaviorist approach, and may provide a breakthrough in discovering new, efficient therapies for mental illnesses.
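To illustrate the supervised step described above, here is a minimal, hypothetical sketch: hand-crafted features (inter-animal distance, focal-animal speed, closing speed) computed from two animals' trajectories are fed to an off-the-shelf classifier to label frames as "chasing" or not. The feature set, placeholder labels, and classifier choice are illustrative assumptions, not a prescription from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    # stand-in trajectories for two tagged mice: frames x (x, y)
    mouse_a = np.cumsum(rng.standard_normal((3000, 2)), axis=0)
    mouse_b = np.cumsum(rng.standard_normal((3000, 2)), axis=0)

    dist = np.linalg.norm(mouse_a - mouse_b, axis=1)            # inter-animal distance
    speed_a = np.linalg.norm(np.diff(mouse_a, axis=0), axis=1)  # speed of the focal animal
    closing = -np.diff(dist)                                    # positive when A approaches B
    features = np.column_stack([dist[1:], speed_a, closing])

    # placeholder frame labels (1 = "chasing"); in practice these come from human annotation
    labels = (features[:, 0] < 5).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[:2000], labels[:2000])       # train on annotated frames
    predicted = clf.predict(features[2000:])      # label the remaining frames automatically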
bioRxiv (Cold Spring Harbor Laboratory), Journal year: 2023, Issue: unknown, Published: March 17, 2023
Abstract
Keypoint tracking algorithms have revolutionized the analysis of animal behavior, enabling investigators to flexibly quantify behavioral dynamics from conventional video recordings obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into the modules out of which behavior is organized. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering methods can mistake for transitions between modules. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ("syllables") from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to effectively identify syllables whose boundaries correspond to natural sub-second discontinuities inherent in mouse behavior. Keypoint-MoSeq outperforms commonly used alternative methods at identifying these transitions, at capturing correlations between neural activity and behavior, and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq therefore renders behavioral syllables and grammar accessible to the many researchers who use standard methods to capture animal behavior.
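The jitter problem referred to above can be demonstrated in a few lines: a synthetic keypoint signal that switches between two true states is corrupted with high-frequency noise, and naive frame-wise clustering reports far more state transitions than actually occur. This is only an illustration of the failure mode that a temporal generative model is designed to avoid; it does not use the keypoint-MoSeq code itself, and all values are made up.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    true_states = np.repeat([0, 1, 0, 1], 250)                 # two behaviors, 3 true transitions
    signal = np.where(true_states == 0, 0.0, 5.0)              # idealized 1-D keypoint coordinate
    noisy = signal + rng.normal(scale=2.0, size=signal.size)   # high-frequency tracking jitter

    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(noisy.reshape(-1, 1))
    spurious = np.sum(np.diff(clusters) != 0)                  # transitions found by naive clustering
    actual = np.sum(np.diff(true_states) != 0)                 # transitions that really happened
    print(actual, spurious)   # naive clustering reports many spurious switches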
Nature Methods, Journal year: 2024, Issue: 21(7), pp. 1329-1339, Published: July 1, 2024
Abstract
Keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering methods can mistake for transitions between actions. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ('syllables') from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative methods at identifying these transitions, at capturing correlations between neural activity and behavior, and at classifying either solitary or social behaviors in accordance with human annotations. It also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq therefore renders the modular structure of behavior accessible through standard video recordings.
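As a final, heavily simplified sketch of the "generative model" idea: keypoint-MoSeq itself fits a switching autoregressive model in a learned pose space, but the code below substitutes a plain Gaussian HMM from hmmlearn on PCA-projected keypoints. Because the HMM assumes that states persist over time rather than re-assigning each frame independently, it suppresses jitter-driven transitions; the data, state count, and shapes are all stand-ins.

    import numpy as np
    from sklearn.decomposition import PCA
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(4)
    keypoints = rng.standard_normal((4000, 8, 2))   # frames x keypoints x (x, y), stand-in data
    flat = keypoints.reshape(len(keypoints), -1)

    pcs = PCA(n_components=4).fit_transform(flat)   # low-dimensional pose space

    # temporal model: discrete states with persistence, a rough analogue of syllables
    hmm = GaussianHMM(n_components=10, covariance_type="diag", n_iter=25, random_state=0)
    hmm.fit(pcs)
    syllables = hmm.predict(pcs)                    # one discrete label per frame

    print(np.bincount(syllables, minlength=10))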