Ph.D. Dissertation Defense - Hongwu Jiang

Event Details
  • Date/Time: Wednesday, September 28, 2022, 10:00 am - 12:00 pm
  • Location: https://teams.microsoft.com/l/meetup-join/19%3ameeting_N2Y0NDMyY2EtODNkMS00Mjc0LThiNmQtNGM5NTQ0MDViYzgw%40thread.v2/0?context=%7b%22Tid%22%3a%22482198bb-ae7b-4b25-8b7a-6d7f32faa083%22%2c%22Oid%22%3a%22dec249b4-8b59-4610-84ef-e1ded6b8e893%22%7d
  • Fee(s): N/A
Summaries

Summary Sentence: Architecture and Circuit Design Optimization for Compute-in-Memory

Title: Architecture and Circuit Design Optimization for Compute-in-Memory

Committee:

Dr. Shimeng Yu, ECE, Chair, Advisor

Dr. Shaolan Li, ECE

Dr. Asif Khan, ECE

Dr. Suman Datta, ECE

Dr. Hyesoon Kim, CoC

Abstract: The objective of the proposed research is to optimize compute-in-memory (CIM) design for accelerating deep neural network (DNN) algorithms. Because compute peripheries such as the analog-to-digital converter (ADC) introduce significant overhead in CIM inference designs, the research first focuses on circuit optimization for inference acceleration and proposes a resistive random access memory (RRAM) based, ADC-free in-memory compute scheme. We prototype a CIM inference chip that processes data between sub-arrays in a fully analog manner, which significantly improves hardware performance over conventional CIM designs and achieves near-software classification accuracy on the ImageNet and CIFAR-10/-100 datasets. Second, the research focuses on hardware support for CIM on-chip training. To maximize hardware reuse under the CIM weight-stationary dataflow, we propose CIM training architectures with a transpose weight mapping strategy. The cell design and peripheral circuitry are modified to efficiently support bidirectional computation. Moreover, a novel solution for signed-number multiplication is proposed to handle negative inputs in backpropagation. Based on silicon measurement data from an SRAM-based prototype chip, we comprehensively explore the hardware performance of the CIM accelerator for DNN on-chip training.
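Two ideas in the abstract lend themselves to a small functional model: reading a weight-stationary array in both directions so that backpropagation obtains the transposed product without storing a second copy of the weights, and handling negative inputs in a bit-serial multiply-accumulate. The sketch below is a minimal NumPy model under stated assumptions: CIMArray, signed_bitserial_mac, the 8-bit input width, and the two's-complement MSB-negative trick are illustrative choices for exposition, not the specific circuits or the signed-multiplication solution described in the dissertation.

import numpy as np

# Functional sketch of a weight-stationary CIM sub-array. The array
# stores a weight matrix once; the forward pass drives the row lines
# and sums along columns, while the backward pass drives the column
# lines and sums along rows, realizing W.T @ delta on the same stored
# weights ("transpose weight mapping").

class CIMArray:
    def __init__(self, weights):
        # Weights are programmed once (weight-stationary dataflow).
        self.W = np.asarray(weights, dtype=np.int32)

    def forward(self, x):
        # Inference / forward pass: y = W @ x.
        return self.W @ x

    def backward(self, delta):
        # Backpropagation reads the same cells along the orthogonal
        # direction, so no transposed copy of W is needed.
        return self.W.T @ delta

def signed_bitserial_mac(W, x, n_bits=8):
    # Bit-serial MAC with two's-complement inputs: the MSB slice
    # carries weight -2**(n_bits-1), so negative inputs in backprop
    # need no separate sign path. This is one common trick, assumed
    # here for illustration; the dissertation's scheme may differ.
    x = np.asarray(x, dtype=np.int32)
    xu = np.where(x < 0, x + (1 << n_bits), x)  # two's-complement encode
    acc = np.zeros(W.shape[0], dtype=np.int64)
    for b in range(n_bits):
        bit_slice = (xu >> b) & 1          # one input bit per cycle
        partial = W @ bit_slice            # analog MAC of one bit plane
        scale = -(1 << b) if b == n_bits - 1 else (1 << b)
        acc += scale * partial             # shift-and-add; MSB negative
    return acc

# Tiny self-check of both ideas against plain matrix algebra.
rng = np.random.default_rng(0)
W = rng.integers(-3, 4, size=(4, 6))
x = rng.integers(-100, 100, size=6)
delta = rng.integers(-5, 6, size=4)

arr = CIMArray(W)
assert np.array_equal(arr.forward(x), W @ x)
assert np.array_equal(arr.backward(delta), W.T @ delta)
assert np.array_equal(signed_bitserial_mac(W, x), W @ x)
print("transpose mapping and signed bit-serial MAC match reference math")

The assertions only confirm the functional property the hardware exploits: driving the same stored weights along orthogonal directions reproduces both W @ x (forward) and W.T @ delta (backward), which is why transpose mapping maximizes reuse of the weight-stationary array during on-chip training.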

Additional Information

In Campus Calendar
No
Groups

ECE Ph.D. Dissertation Defenses

Invited Audience
Public
Categories
Other/Miscellaneous
Keywords
Ph.D. Defense, graduate students
Status
  • Created By: Daniela Staiculescu
  • Workflow Status: Published
  • Created On: Sep 21, 2022 - 2:00pm
  • Last Updated: Sep 21, 2022 - 2:00pm