Title: Computing-in-Memory for Accelerating Deep Neural Networks
Committee:
Dr. Yu, Advisor
Dr. Lim, Chair
Dr. Raychowdhury
Abstract:
The objective of the proposed research is to accelerate deep neural networks (DNNs) with a computing-in-memory (CIM) architecture based on emerging non-volatile memories (eNVMs). The research first focuses on DNN inference and proposes a resistive random access memory (RRAM) based architecture with a customized synaptic weight cell design for implementing binary neural networks (BNNs), showing great potential in terms of area and energy efficiency. A prototype chip that monolithically integrates an RRAM array with CMOS peripheral circuits was then fabricated in a commercial 90 nm process; it not only demonstrates the feasibility of the proposed CIM operation but also validates the advantages projected in the benchmarks. Moreover, to overcome the challenges posed by the nonlinearity and asymmetry of eNVM conductance tuning, this research proposes a novel 2-transistor-1-FeFET (ferroelectric field-effect transistor) synaptic weight cell that exploits hybrid precision for in-situ training and inference, achieving software-comparable classification accuracy on the MNIST and CIFAR-10 datasets. The remaining part of this research will focus on two topics: 1) a thorough investigation of the impact of eNVM non-ideal characteristics on DNN training and inference; and 2) the design of a second-generation RRAM-based inference chip with configurable 1/2/4/8-bit activation/weight precision.
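
For context, the nonlinearity and asymmetry mentioned above refer to the fact that the conductance change per programming pulse depends on the device's current state and differs between potentiation and depression. The sketch below is an illustrative behavioral model of that effect, of the kind commonly used to study how such non-idealities distort weight updates during in-situ training; it is not the proposal's actual device model, and all parameter names and values are assumptions for illustration.

```python
# Minimal sketch (assumed model, not the author's): nonlinear, asymmetric
# conductance updates of an eNVM synaptic device under identical pulses.
import numpy as np

G_MIN, G_MAX = 0.1, 1.0      # assumed conductance range (arbitrary units)
ALPHA_P, BETA_P = 0.05, 3.0  # potentiation step size / nonlinearity (assumed)
ALPHA_D, BETA_D = 0.08, 5.0  # depression step size / nonlinearity (assumed)

def update_conductance(g, n_pulses):
    """Apply n_pulses potentiation (>0) or depression (<0) pulses to g.

    The step size shrinks as g approaches the range boundary (nonlinearity),
    and potentiation and depression follow different curves (asymmetry).
    """
    for _ in range(abs(int(n_pulses))):
        x = (g - G_MIN) / (G_MAX - G_MIN)          # normalized state in [0, 1]
        if n_pulses > 0:   # potentiation: smaller steps near G_MAX
            g = g + ALPHA_P * np.exp(-BETA_P * x)
        else:              # depression: smaller steps near G_MIN
            g = g - ALPHA_D * np.exp(-BETA_D * (1.0 - x))
        g = float(np.clip(g, G_MIN, G_MAX))
    return g

# Compare the update a training algorithm requests (ideal, linear) with what
# the device actually delivers when that update is mapped to a pulse count.
g = 0.5
target_delta = 0.2                                 # ideal weight increase
n = round(target_delta / ALPHA_P)                  # naive pulse count
g_actual = update_conductance(g, n)
print(f"ideal new G: {g + target_delta:.3f}, device-limited G: {g_actual:.3f}")
```

Under these assumed parameters the delivered conductance change falls well short of the requested one, which is the kind of update error whose effect on training and inference accuracy the remaining work proposes to quantify.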