IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Journal Year: 2024, Volume and Issue: 43(11), P. 3888 - 3899
Published: Nov. 1, 2024
Recent advances in machine learning (ML) have spotlighted the pressing need for computing architectures that bridge the gap between memory bandwidth and processing power. The advent of deep neural networks has pushed traditional Von Neumann architectures to their limits due to the high latency and energy consumption costs associated with data movement between the processor and memory for these workloads. One of the solutions to overcome this bottleneck is to perform computation within the main memory through processing-in-memory (PIM), thereby limiting data movement and the costs associated with it. However, dynamic random-access memory-based PIM struggles to achieve high throughput and energy efficiency due to internal bottlenecks and frequent refresh operations. In this work, we introduce OPIMA, a PIM-based ML accelerator architected within an optical main memory. OPIMA has been designed to leverage the inherent massive parallelism of memory while performing high-speed, low-energy optical computation to accelerate ML models based on convolutional neural networks. We present a comprehensive analysis of OPIMA to guide design choices and operational mechanisms. In addition, we evaluate its performance by comparing it with conventional electronic computing systems and emerging photonic PIM architectures. The experimental results show that OPIMA can