Title: Architecture and Circuit Design Optimization for Compute-in-Memory
Committee:
Dr. Shimeng Yu, ECE, Chair, Advisor
Dr. Shaolan Li, ECE
Dr. Asif Khan, ECE
Dr. Suman Datta, ECE
Dr. Hyesoon Kim, CoC
Abstract: The objective of the proposed research is to optimize compute-in-memory (CIM) designs for accelerating deep neural network (DNN) algorithms. Because compute peripherals such as analog-to-digital converters (ADCs) introduce significant overhead in CIM inference designs, the research first focuses on circuit optimization for inference acceleration and proposes a resistive random access memory (RRAM)-based, ADC-free in-memory compute scheme. We prototype a CIM inference chip in which data are processed in a fully analog manner between sub-arrays, which significantly improves hardware performance over conventional CIM designs while achieving near-software classification accuracy on the ImageNet and CIFAR-10/-100 datasets. Second, the research focuses on hardware support for CIM on-chip training. To maximize hardware reuse under the CIM weight-stationary dataflow, we propose CIM training architectures with a transpose weight mapping strategy; the cell design and peripheral circuitry are modified to efficiently support bidirectional computation. Moreover, a novel signed-number multiplication scheme is proposed to handle the negative inputs that arise in backpropagation. Based on silicon measurement data from an SRAM-based prototype chip, we comprehensively explore the hardware performance of the CIM accelerator for DNN on-chip training.
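As background for the transpose weight mapping idea, the sketch below models a weight-stationary array as a NumPy matrix: the forward pass drives the stored array from one side and the backward pass drives the same array from the other, so both the forward and backward matrix-vector products reuse a single stored copy of the weights rather than a second, explicitly transposed copy. This is a minimal illustration of the general concept only, not the proposed cell or peripheral design; the array size, weight values, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical array size; a real CIM macro would tile larger layers.
ROWS, COLS = 128, 64
W = rng.standard_normal((ROWS, COLS)) * 0.1  # weights programmed once (weight-stationary)

def forward_mac(W, x):
    """Forward pass: inputs drive one side of the stored array,
    modeled here as y = W^T x."""
    return W.T @ x

def backward_mac(W, delta):
    """Backward pass with transpose mapping: the error vector drives the
    other side of the SAME stored array, computing dx = W delta without
    writing a separate transposed copy of the weights."""
    return W @ delta

x = rng.standard_normal(ROWS)       # layer input activations
delta = rng.standard_normal(COLS)   # upstream error gradient

y = forward_mac(W, x)        # shape (COLS,)
dx = backward_mac(W, delta)  # shape (ROWS,)

# Baseline that transpose mapping avoids: maintaining a duplicated,
# explicitly transposed weight array for the backward pass.
W_copy_T = W.T.copy()
assert np.allclose(dx, W_copy_T.T @ delta)
```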