Title: Architecture and Circuit Design Optimization for Compute-In-Memory
Committee:
Dr. Yu, Advisor
Dr. Shaolan Li, Chair
Dr. Raychowdhury
Abstract: The objective of the proposed research is to optimize compute-in-memory (CIM) designs for accelerating deep neural network (DNN) algorithms. Because the analog-to-digital converter (ADC) introduces significant overhead in CIM inference designs, we first comprehensively explore the trade-offs among different types of ADCs and investigate a new ADC design especially suited to CIM, which performs an analog shift-add across multiple weight significance bits. The analog shift-add ADC achieves large improvements in throughput and energy efficiency over conventional ADC designs under similar area constraints. Second, the research focuses on hardware support for CIM on-chip training. To maximize hardware reuse in the CIM weight-stationary dataflow, we propose an RRAM-based CIM training architecture with a transpose weight-mapping strategy. The cell design and peripheral circuitry are modified to efficiently support bidirectional computation, and a novel signed-number multiplication scheme is proposed to handle the negative inputs that arise in backpropagation. To further alleviate the remaining limitations of CIM designs, future research will focus on two topics: 1) an ADC-free CIM inference chip design with fully analog data processing between sub-arrays, and 2) optimization and evaluation of an SRAM-based CIM training architecture based on a fabricated CIM macro.
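
The shift-add arithmetic behind the bit-sliced readout can be made concrete with a short numerical sketch. The NumPy snippet below is not from the proposal: it emulates ideal, noise-free column sums, and the array height, 4-bit weight width, and variable names are illustrative assumptions. It checks that combining per-bit-column dot products with binary weights 2^b recovers the full-precision dot product, whether the combination happens after per-column digitization (conventional) or before a single conversion (analog shift-add).

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, w_bits = 64, 4                      # illustrative array height and weight precision
x = rng.integers(0, 2, n_rows)              # one binary input bit per row (single input cycle)
w = rng.integers(0, 2 ** w_bits, n_rows)    # unsigned multi-bit weights, bit-sliced across columns

# Column b stores bit b of every weight.
w_cols = [(w >> b) & 1 for b in range(w_bits)]

# Conventional readout: digitize each bit column separately, then
# shift-add the per-column codes in the digital domain.
conventional = sum(int(x @ col) << b for b, col in enumerate(w_cols))

# Analog shift-add: weight each column's contribution by 2**b before a
# single conversion, so only one ADC operation is needed per output.
analog_shift_add = sum(int(x @ col) * (1 << b) for b, col in enumerate(w_cols))

# Both recover the full-precision dot product.
assert conventional == analog_shift_add == int(x @ w)
```

Performing the weighted combination before conversion means one quantization per output instead of one per bit column, which is where the claimed throughput and energy gains come from.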
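The transpose weight-mapping idea admits a similar sketch: storing one copy of the weights suffices for both the forward pass (y = Wx) and the backward pass (dx = Wᵀdy) if the array can be driven from the orthogonal direction. Splitting the signed error into two non-negative components is one common way to realize signed multiplication on unsigned conductances; the proposal's actual circuit-level scheme is not detailed here, so treat this purely as an illustration under those assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.integers(0, 8, (16, 8))    # one copy of the weights, stored as unsigned conductances
x = rng.integers(0, 2, 8)          # forward activations (non-negative)
dy = rng.integers(-4, 5, 16)       # backpropagated errors, which can be negative

# Forward pass drives the array from the input side.
y = W @ x

# Backward pass reuses the same stored W, driven from the orthogonal
# side of the array (the transpose mapping), rather than writing a
# second, transposed copy of the weights.
dy_pos = np.maximum(dy, 0)         # split the signed error into two
dy_neg = np.maximum(-dy, 0)        # non-negative components
dx = W.T @ dy_pos - W.T @ dy_neg   # two unsigned passes, subtracted digitally

assert np.array_equal(dx, W.T @ dy)
```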