Title: Algorithm-hardware co-design for deep learning and beyond with emerging non-volatile memories
Committee:
Dr. Yu, Advisor
Dr. Hao, Chair
Dr. Mukhopadhyay
Abstract: The objective of the proposed research is to design energy-efficient compute-in-memory (CIM) architectures for deep learning applications and beyond through co-optimization of algorithms and hardware. First, NeuroSim is an integrated benchmark framework supporting flexible design options for CIM accelerators from the device level through the circuit level and up to the algorithm level. We validated NeuroSim against actual silicon data and calibrated it with adjustment factors that account for transistor sizing and wiring area in the layout, gate switching activity, and post-layout performance drop, reducing the chip-level error to less than 2% after calibration. Based on this simulator, we then explored CIM architectures for deep neural networks (DNNs) with limited on-chip resources, i.e., when the chip area is insufficient to hold all the weights of large-scale models, and investigated how to realize runtime reconfiguration on a custom CIM chip instance with fixed hardware resources. We designed efficient weight reloading schemes and flexible hardware peripherals to enable this reconfigurability. Finally, we extended the CIM scheme to probabilistic computing by utilizing memory stochasticity, and presented a software-hardware co-design for Bayesian neural networks that exploits the inherent random noise of memory devices, incurring trivial hardware overhead while achieving much better uncertainty calibration than conventional DNNs.
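To make the calibration idea concrete, below is a minimal sketch of how post-layout adjustment factors could be applied to pre-layout chip-level estimates. It is not NeuroSim's actual API; the class, function names, and all numeric factors are illustrative assumptions.

```python
# Sketch only: scaling pre-layout estimates by fitted adjustment
# factors, in the spirit of the calibration described in the abstract.
from dataclasses import dataclass

@dataclass
class ChipEstimate:
    area_mm2: float      # summed macro area before layout overheads
    energy_pj: float     # energy assuming 100% gate switching activity
    latency_ns: float    # pre-layout critical-path latency

def calibrate(est: ChipEstimate,
              layout_area_factor: float = 1.4,   # transistor sizing + wiring area (assumed)
              activity_factor: float = 0.5,      # realistic gate switching activity (assumed)
              postlayout_slowdown: float = 1.2   # post-layout performance drop (assumed)
              ) -> ChipEstimate:
    """Scale raw simulator outputs by empirically fitted factors."""
    return ChipEstimate(
        area_mm2=est.area_mm2 * layout_area_factor,
        energy_pj=est.energy_pj * activity_factor,
        latency_ns=est.latency_ns * postlayout_slowdown,
    )

def relative_error(predicted: float, measured: float) -> float:
    return abs(predicted - measured) / measured

# Example: compare a calibrated area estimate against a (made-up) silicon value.
raw = ChipEstimate(area_mm2=10.0, energy_pj=250.0, latency_ns=80.0)
cal = calibrate(raw)
print(f"area error vs. 14.1 mm2 silicon: {relative_error(cal.area_mm2, 14.1):.1%}")
```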
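The weight reloading problem can likewise be sketched as a scheduling exercise: when the on-chip arrays cannot hold every layer, consecutive layers are grouped to fit the array capacity and weights are reprogrammed once per group. The greedy packing and the cost model below are illustrative assumptions, not the proposal's actual scheme.

```python
# Sketch only: a weight-reloading schedule for a capacity-limited CIM chip.
def pack_layers(layer_sizes, capacity):
    """Greedily group consecutive layers so each group fits on-chip."""
    groups, current, used = [], [], 0
    for idx, size in enumerate(layer_sizes):
        if size > capacity:
            raise ValueError(f"layer {idx} alone exceeds chip capacity")
        if used + size > capacity:
            groups.append(current)
            current, used = [], 0
        current.append(idx)
        used += size
    if current:
        groups.append(current)
    return groups

def inference_cost(layer_sizes, capacity, reload_cost, compute_cost):
    """Total cost estimate: weights are reprogrammed once per group."""
    groups = pack_layers(layer_sizes, capacity)
    total = 0.0
    for group in groups:
        total += reload_cost * sum(layer_sizes[i] for i in group)  # program the arrays
        total += compute_cost * len(group)                         # run the grouped layers
    return groups, total

# Example: six layers (weight counts) on a 1.5M-weight on-chip capacity.
groups, cost = inference_cost(
    layer_sizes=[0.4e6, 0.6e6, 0.9e6, 1.2e6, 0.8e6, 0.3e6],
    capacity=1.5e6, reload_cost=1e-6, compute_cost=1.0)
print(groups, cost)
```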
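Finally, the Bayesian neural network idea can be illustrated as Monte Carlo inference over noisy weight reads: if each CIM read perturbs the stored weights, repeated forward passes sample an implicit weight posterior, and the spread of the predictions gives an uncertainty estimate. The Gaussian noise model, noise level, and network shape below are assumptions for illustration.

```python
# Sketch only: treating device read noise as posterior sampling.
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, weights, noise_std=0.02):
    """One forward pass; each read perturbs weights like device noise."""
    h = x
    for i, w in enumerate(weights):
        w_noisy = w + rng.normal(0.0, noise_std * np.abs(w).max(), w.shape)
        h = h @ w_noisy
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h

def predict_with_uncertainty(x, weights, n_samples=32):
    """Monte Carlo over device noise: mean prediction plus spread."""
    samples = np.stack([noisy_forward(x, weights) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# Example: a two-layer MLP on a random input.
weights = [rng.normal(size=(16, 32)), rng.normal(size=(32, 4))]
mean, std = predict_with_uncertainty(rng.normal(size=(1, 16)), weights)
print("prediction:", mean.round(2), "uncertainty:", std.round(2))
```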