ACM Computing Surveys, Journal Year: 2022, Volume and Issue: 55(9), P. 1 - 52. Published: Aug. 24, 2022
This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Holographic Reduced Representations is an influential HDC/VSA model that is well known in the machine learning domain, and its name is often used to refer to the whole family; however, for the sake of consistency, we use HDC/VSA to refer to the field. Part I of this survey covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, the key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain; however, we also cover other application areas to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.
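To make the operations referred to above concrete, the following is a minimal sketch of binding, bundling, and similarity for bipolar hypervectors, one common HDC/VSA flavor; the dimensionality and the role-filler example are illustrative assumptions, not drawn from the survey.

```python
import numpy as np

D = 10_000                      # hypervector dimensionality (illustrative choice)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding via element-wise multiplication; self-inverse for bipolar vectors."""
    return a * b

def bundle(*vs):
    """Bundling (superposition) via element-wise majority; the result stays similar to every input."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot-product similarity."""
    return float(a @ b) / D

# Encode a toy record {color: red, shape: round} as a single hypervector.
color, shape, red, round_ = (random_hv() for _ in range(4))
record = bundle(bind(color, red), bind(shape, round_))

# Unbinding the 'color' role recovers something close to 'red'.
print(sim(bind(record, color), red))     # high (around 0.5)
print(sim(bind(record, color), round_))  # near zero
```

The point of the sketch is the algebra: binding is invertible and similarity-destroying, while bundling preserves similarity, which is what lets structured role-filler information live inside one fixed-width vector.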
2021 IEEE/CVF International Conference on Computer Vision (ICCV), Journal Year: 2021, Volume and Issue: unknown, P. 3458 - 3468. Published: Oct. 1, 2021
Self-supervised pretraining followed by supervised fine-tuning has seen success in image recognition, especially when labeled examples are scarce, but has received limited attention in medical image analysis. This paper studies the effectiveness of self-supervised learning as a pre-training strategy for medical image classification. We conduct experiments on two distinct tasks: dermatology condition classification from digital camera images and multi-label chest X-ray classification, and demonstrate that self-supervised pretraining on ImageNet, followed by additional pretraining on unlabeled domain-specific medical images, significantly improves the accuracy of medical image classifiers. We introduce a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning. Combining our contributions, we achieve an improvement of 6.7% in top-1 accuracy and 1.1% in mean AUC on the two tasks, respectively, outperforming strong baselines pretrained on ImageNet. In addition, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images.
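As a rough illustration of the multi-instance idea, the sketch below forms a positive pair from two different images of the same patient case and scores a batch of such pairs with a standard NT-Xent contrastive loss; the dataset interface, augmentation hook, and loss wiring are assumptions for illustration, not the authors' implementation.

```python
import random
import torch
import torch.nn.functional as F

def sample_micle_pair(case_images, augment):
    """Multi-instance positive pair: draw two (possibly distinct) images of the
    same patient case and augment each, instead of augmenting one image twice."""
    img_a, img_b = random.choice(case_images), random.choice(case_images)
    return augment(img_a), augment(img_b)

def nt_xent(z1, z2, temperature=0.1):
    """Normalized-temperature cross-entropy over a batch of embedded positive pairs
    (z1[i], z2[i]); all other items in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2N, d)
    sim = z @ z.t() / temperature                      # pairwise cosine similarities
    sim.masked_fill_(torch.eye(sim.size(0), dtype=torch.bool), float("-inf"))
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```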
Clinical Microbiology Reviews, Journal Year: 2021, Volume and Issue: 34(3). Published: May 11, 2021
The coronavirus disease 2019 (COVID-19) pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has led to millions of confirmed cases and deaths worldwide. Efficient diagnostic tools are in high demand, as rapid and large-scale testing plays a pivotal role in patient management and in decelerating disease spread.
IEEE Access, Journal Year: 2021, Volume and Issue: 9, P. 30551 - 30572. Published: Jan. 1, 2021
The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence rate of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to give an overview of recently developed diagnosis systems based on deep learning that use different medical imaging modalities such as Computed Tomography (CT) and X-ray. The review specifically discusses these systems and provides insights into the well-known data sets used to train the networks. It also highlights the data partitioning techniques and the various performance measures used by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and the probable future trends in this research area. The aim is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can potentially be further utilized to combat the COVID-19 outbreak.
Informatics in Medicine Unlocked, Journal Year: 2020, Volume and Issue: 20, P. 100427 - 100427. Published: Jan. 1, 2020
Early detection and diagnosis are critical factors to control the spread of COVID-19. A number of deep learning-based methodologies have recently been proposed for COVID-19 screening in CT scans as a tool to automate and help with the diagnosis. These approaches, however, suffer from at least one of the following problems: (i) they treat each CT scan slice independently and (ii) the methods are trained and tested on sets of images from the same dataset. Treating the slices independently means that the same patient may appear in the training and test sets at the same time, which may produce misleading results. It also raises the question of whether the slices of a patient should be evaluated as a group or not. Moreover, using a single dataset raises concerns about the generalization of the methods. Different datasets tend to present images of varying quality and come from different types of machines, reflecting the conditions of the countries and cities where they come from. In order to address these two problems, in this work we propose an Efficient Deep Learning Technique for COVID-19 screening with a voting-based approach, in which the slices of a given patient are classified as a group through a voting system. The approach is evaluated on the biggest datasets of COVID-19 CT analysis, using a patient-based split. A cross-dataset study is also presented to assess the robustness of the models in a more realistic scenario in which data comes from different distributions. The cross-dataset analysis has shown that the generalization power of deep learning models is far from acceptable for this task, since accuracy drops from 87.68% to 56.16% in the best evaluation scenario. These results highlight that methods aiming at COVID-19 detection in CT images still have to improve significantly to be considered a clinical option, and that larger and more diverse datasets are needed to evaluate them in a realistic scenario.
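A minimal sketch of the voting step described above: per-slice model outputs for one patient are thresholded and aggregated by majority vote into a single patient-level decision (the threshold and interface are illustrative assumptions).

```python
import numpy as np

def classify_patient(slice_probs, threshold=0.5):
    """Patient-level COVID-19 decision by majority vote over slice-level predictions.

    slice_probs: per-slice probabilities of COVID-19 produced by the model for one patient.
    Returns 1 (COVID-19) if more than half of the slices vote positive, else 0.
    """
    votes = (np.asarray(slice_probs) >= threshold).astype(int)
    return int(votes.sum() > len(votes) / 2)

# Example: three of five slices vote positive, so the patient is classified positive.
print(classify_patient([0.9, 0.7, 0.2, 0.6, 0.4]))  # -> 1
```

Grouping the decision at the patient level is also what makes a patient-based split meaningful: all slices of one patient land on the same side of the train/test boundary.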
IEEE Journal of Biomedical and Health Informatics, Journal Year: 2020, Volume and Issue: 24(10), P. 2806 - 2813. Published: Sept. 10, 2020
The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading to hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification from CT images is highly desired to assist clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets for developing machine learning methods, it is essentially helpful to aggregate cases from different medical systems and learn robust and generalizable models. This paper proposes a novel joint learning framework that performs accurate COVID-19 identification by effectively learning from heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in the latent space. Moreover, we propose to use a contrastive training objective to enhance the domain invariance of semantic embeddings, boosting the classification performance on each dataset. We develop and evaluate our method on two large-scale COVID-19 diagnosis datasets made up of CT images. Extensive experiments show that our approach consistently improves performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, and also exceeding existing state-of-the-art multi-site learning methods.
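A minimal sketch of the separate-feature-normalization idea, under simplifying assumptions: convolutional weights are shared across sites while each dataset gets its own BatchNorm statistics, so the two sites are normalized in their own latent spaces. Module names and sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SiteSpecificBN(nn.Module):
    """Separate BatchNorm statistics per site; everything else in the network is shared."""
    def __init__(self, num_features, num_sites=2):
        super().__init__()
        self.bns = nn.ModuleList([nn.BatchNorm2d(num_features) for _ in range(num_sites)])

    def forward(self, x, site):
        return self.bns[site](x)

class SharedConvBlock(nn.Module):
    """One shared conv block whose normalization is selected by the dataset index."""
    def __init__(self, in_ch, out_ch, num_sites=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = SiteSpecificBN(out_ch, num_sites)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, site):
        return self.act(self.bn(self.conv(x), site))

# Usage: pass the dataset (site) index alongside each batch of CT slices.
block = SharedConvBlock(1, 16)
features = block(torch.randn(4, 1, 64, 64), site=0)
```

The contrastive objective mentioned above would then operate on embeddings produced this way, encouraging them to be domain-invariant across the two sites.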
Sensors, Journal Year: 2021, Volume and Issue: 21(2), P. 455 - 455. Published: Jan. 11, 2021
This paper explores how well deep learning models trained on chest CT images can diagnose COVID-19-infected people in a fast and automated process. To this end, we adopted advanced deep network architectures and proposed a transfer learning strategy using custom-sized input tailored for each architecture to achieve the best performance. We conducted extensive sets of experiments on two CT image datasets, namely, SARS-CoV-2 CT-scan and COVID19-CT. The results show superior performances for our models compared with previous studies. Our models achieved average accuracy, precision, sensitivity, specificity, and F1-score values of 99.4%, 99.6%, 99.8%, and 99.4% on the SARS-CoV-2 CT-scan dataset, and 92.9%, 91.3%, 93.7%, 92.2%, and 92.5% on the COVID19-CT dataset, respectively. For better interpretability of the results, we applied visualization techniques to provide visual explanations for the models' predictions. The feature visualizations show that the learned features form well-separated clusters representing COVID-19 and non-COVID-19 cases. Moreover, the visualizations indicate that our models are not only capable of identifying COVID-19 cases but also provide accurate localization of the COVID-19-associated regions, as confirmed by well-trained radiologists.
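As a rough sketch of this kind of transfer learning setup, the snippet below fine-tunes an ImageNet-pretrained ResNet-50 on CT images with a non-default input size; the architecture choice, 448x448 resize, and binary head are illustrative assumptions rather than the paper's exact configuration (the weights API requires torchvision >= 0.13).

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative backbone: ImageNet-pretrained ResNet-50 with a binary COVID / non-COVID head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# Custom-sized input: ResNet's global average pooling makes the network size-agnostic,
# so the resize value can be tailored per architecture.
preprocess = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Fine-tune end-to-end with a standard cross-entropy objective.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```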
Pretrained language models such as BERT and GPT have shown great effectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens and thus may not capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive self-supervised Encoder Representations from Transformers, which pretrains language representation models using contrastive learning at the sentence level. CERT creates augmentations of original sentences using back-translation. Then it finetunes a pretrained language encoder (e.g., BERT) by predicting whether two augmented sentences originate from the same sentence. CERT is simple to use and can be flexibly plugged into any pretraining-finetuning NLP pipeline. We evaluate CERT on three language understanding tasks: CoLA, RTE, and QNLI. CERT outperforms BERT significantly.
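A minimal sketch of the sentence-level contrastive step described above: two back-translated views of the same source sentence are embedded with a pretrained encoder and trained to be more similar to each other than to views of other sentences. The SimCLR-style loss and the use of the [CLS] embedding are illustrative assumptions, not the CERT training code.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Sentence embeddings taken from the [CLS] token of a pretrained BERT encoder."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]          # (N, hidden)

def sentence_contrastive_loss(views_a, views_b, temperature=0.1):
    """views_a[i] and views_b[i] are two back-translations of the same sentence;
    they form the positive pair, and all other sentences in the batch are negatives."""
    za = F.normalize(embed(views_a), dim=1)
    zb = F.normalize(embed(views_b), dim=1)
    logits = za @ zb.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(za.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# views_a / views_b could be, e.g., English->German->English and
# English->Chinese->English round-trip translations of the same sentences.
```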