OPIMA: Optical Processing-in-Memory for Convolutional Neural Network Acceleration DOI
Febin Sunny, Amin Shafiee, Abhishek Balasubramaniam

et al.

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Journal Year: 2024, Volume and Issue: 43(11), P. 3888 - 3899

Published: Nov. 1, 2024

Recent advances in machine learning (ML) have spotlighted the pressing need for computing architectures that bridge the gap between memory bandwidth and processing power. The advent of deep neural networks has pushed traditional von Neumann architectures to their limits due to the high latency and energy consumption costs associated with moving data between the processor and memory for these workloads. One of the solutions to overcome this bottleneck is to perform computation within the main memory through processing-in-memory (PIM), thereby limiting data movement and the costs associated with it. However, dynamic random-access memory (DRAM)-based PIM struggles to achieve high throughput and energy efficiency due to internal data movement bottlenecks and the need for frequent refresh operations. In this work, we introduce OPIMA, a PIM-based ML accelerator architected within an optical main memory. OPIMA has been designed to leverage the inherent massive parallelism within main memory while performing high-speed, low-energy optical computation to accelerate ML models based on convolutional neural networks. We present a comprehensive analysis of OPIMA to guide design choices and operational mechanisms. In addition, we evaluate the performance and energy consumption of OPIMA, comparing it with conventional electronic computing systems and emerging photonic architectures. The experimental results show that OPIMA can achieve 2.98× higher throughput and 137× better energy efficiency than the best-known prior work.
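
One way to picture the workload OPIMA targets: a convolution layer reduces to repeated multiply-accumulate (MAC) operations over unrolled input patches, which is the dense linear algebra that processing-in-memory and photonic MAC arrays execute close to the data. The sketch below is a minimal NumPy illustration of that lowering, not OPIMA's actual optical dataflow; the function names and sizes are assumptions made for clarity.

```python
# Minimal NumPy sketch of how a convolution lowers to matrix-vector MACs,
# the operation class that PIM / photonic accelerators execute in memory.
# This is an illustration only, not OPIMA's optical dataflow.
import numpy as np

def im2col(x, k):
    """Unroll every k x k patch of a single-channel image into one row."""
    h, w = x.shape
    rows = [x[i:i + k, j:j + k].ravel()
            for i in range(h - k + 1)
            for j in range(w - k + 1)]
    return np.array(rows)                      # shape: (num_patches, k*k)

def conv_as_mvm(x, kernel):
    """Express the convolution as one matrix-vector multiply over all patches."""
    k = kernel.shape[0]
    patches = im2col(x, k)                     # each row = one receptive field
    weights = kernel.ravel()                   # weights held stationary "in memory"
    out = patches @ weights                    # the MAC workload offloaded to the array
    side = x.shape[0] - k + 1
    return out.reshape(side, side)

x = np.random.rand(8, 8).astype(np.float32)
kernel = np.random.rand(3, 3).astype(np.float32)
print(conv_as_mvm(x, kernel).shape)            # (6, 6)
```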

Language: English

Citations: 2

CPDM-PCNN: A compact and power efficient photonic Convolutional Neural Network accelerator based on Dual-function Microring Resonators DOI
Xiangyu He, Pengxing Guo, Wei Sun

et al.

Optics & Laser Technology, Journal Year: 2025, Volume and Issue: 188, P. 112889 - 112889

Published: April 15, 2025

Language: English

Citations: 0

Programmable phase change materials and silicon photonics co-integration for photonic memory applications: a systematic study DOI Creative Commons
Amin Shafiee, B. Charbonnier, Jie Yao

et al.

Journal of Optical Microsystems, Journal Year: 2024, Volume and Issue: 4(03)

Published: Aug. 14, 2024

The integration of phase change materials (PCMs) with photonic devices creates a unique opportunity for realizing application-specific, reconfigurable, and energy-efficient components with zero static power consumption and low thermal crosstalk. In particular, waveguides based on silicon or silicon nitride can be integrated with PCMs to realize nonvolatile memory cells, which are able to store data in the state of the PCMs. We delve into a performance comparison of PCM-based programmable memory cells on these platforms using well-known PCMs (GST and GSST) for photonic memory applications, while showcasing the fundamental limitations of each design in terms of the maximum number of bits it can store as well as its optical insertion loss. Moreover, we present a comprehensive design-space exploration analyzing energy efficiency and cooling time depending on the structure of the heat source. The results show that a silicon-based strip waveguide with GST is the best option for a memory cell with the highest bit density (up to 4 bits per cell, given a 6% spacing between transmission levels). In addition, considering a microheater on top of the deposited PCM, multi-physics simulations show that when the heat source is placed above the PCM with a 200 nm gap, the cell tends to become more energy-efficient, and the cooling time (for both set and reset) becomes significantly shorter than in the case where the heat source is placed further from the PCM.
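
As a quick sanity check on the bit-density figure quoted above (a back-of-the-envelope reading of the 6% spacing constraint, not the paper's exact model): if adjacent transmission levels must be separated by at least 6% of the usable transmission range, roughly 17 evenly spaced levels fit, and floor(log2(17)) = 4 bits per cell, which matches the reported result for the GST strip-waveguide cell.

```python
# Back-of-the-envelope estimate of bits per PCM cell from a minimum spacing
# between transmission levels. Assumes evenly spaced levels over a normalized
# [0, 1] transmission range; an illustration, not the paper's exact model.
import math

def max_bits(level_spacing: float) -> int:
    levels = math.floor(1.0 / level_spacing) + 1   # e.g. 0.06 -> 17 levels
    return math.floor(math.log2(levels))           # distinguishable bits

print(max_bits(0.06))   # -> 4, consistent with the reported 4-bit GST cell
```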

Language: English

Citations: 2
