Kwon Receives Honorable Mention for Dissertation at ISCA 2021

Contact

Jackie Nemeth

School of Electrical and Computer Engineering

404-894-2906

Summaries

Summary Sentence:

Hyoukjun Kwon, a recent Georgia Tech Ph.D. graduate, received an Honorable Mention at the 2021 ACM-SIGARCH / IEEE-CS TCCA Outstanding Dissertation Award ceremony, presented virtually at the International Symposium on Computer Architecture (ISCA) 2021.

Media
  • Hyoukjun Kwon (image/png)

Hyoukjun Kwon, a recent Georgia Tech Ph.D. graduate, received an Honorable Mention at the 2021 ACM-SIGARCH / IEEE-CS TCCA Outstanding Dissertation Award ceremony, presented virtually at the International Symposium on Computer Architecture (ISCA) 2021. ISCA 2021 was held June 14-19 and is the flagship venue for showcasing new ideas and research results in computer architecture.

Kwon’s award citation reads: “For developing mechanisms to quantify the relationship between deep neural network mappings, data reuse, and communication flows for system design of flexible deep learning accelerators.” He is currently a research scientist at Facebook Reality Labs, where he has worked since October 2020. Kwon completed his Ph.D. under the guidance of Tushar Krishna, who is the ON Semiconductor Junior Professor in the School of Electrical and Computer Engineering.

Machine learning (ML), especially deep learning (DL), has demonstrated great results in computer vision, speech recognition, natural language processing, and recommendation systems. This has energized the computer architecture community to develop customized hardware accelerators that enable deployment of DL solutions at the edge and in the cloud. Traditionally, the efficiency of domain-specific accelerators has come from specialization: the control path and datapath in the accelerator are tailored to the deep neural network (DNN) being run. A key challenge facing the architecture community is that the field of ML is evolving so rapidly that silicon accelerator chips risk being obsolete by the time they reach the market.

To solve this conundrum, Kwon’s dissertation proposes the key idea of flexible DNN accelerators. In contrast with a fully programmable processor, such as a CPU or GPU, or a fully reconfigurable circuit, such as an FPGA, a flexible accelerator adds small-overhead but high-impact reconfigurability for future-proofing. Kwon’s thesis further shows that this kind of future-proofing can also improve performance on existing neural networks, because it allows the hardware to tailor itself to the diverse set of layer parameters instead of targeting the average case.

Kwon’s dissertation also develops a foundational and formal understanding of the complex interplay between the DNN model, its mapping, dataflow, memory accesses, communication flows, and microarchitectural choices. It also presents a suite of open-source software and hardware codebases demonstrating these ideas.
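As a rough illustration of this interplay, the minimal sketch below (an illustrative toy example, not code from the dissertation or from the MAESTRO tool) counts operand fetches for a one-dimensional convolution under two mappings: one whose dataflow keeps the sliding input window and the weights on chip, and one with no on-chip reuse.

# Toy sketch (illustrative only): how a mapping's data reuse translates into
# memory traffic for out[p] = sum_r in[p + r] * w[r].
def conv1d_traffic(input_len: int, kernel_len: int,
                   reuse_inputs: bool, reuse_weights: bool) -> dict:
    """Count operand fetches from backing memory for a 1-D convolution."""
    out_len = input_len - kernel_len + 1
    macs = out_len * kernel_len                   # total multiply-accumulates
    input_reads = input_len if reuse_inputs else macs
    weight_reads = kernel_len if reuse_weights else macs
    return {"MACs": macs, "input_reads": input_reads,
            "weight_reads": weight_reads, "output_writes": out_len}

# A mapping that captures sliding-window and weight reuse fetches each operand
# once; a mapping with no on-chip reuse fetches one operand per MAC.
print(conv1d_traffic(1024, 7, reuse_inputs=True, reuse_weights=True))
print(conv1d_traffic(1024, 7, reuse_inputs=False, reuse_weights=False))

The gap between the two cases grows with layer size, which is why the choice of mapping and dataflow matters so much for accelerator design.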

Kwon’s thesis has already shown measurable impact. Many of its foundational ideas appear in a Synthesis Lectures book on DNN accelerator design co-authored by Kwon; Michael Pellauer (NVIDIA), who mentored Kwon closely throughout his Ph.D. as a co-advisor; Angshuman Parashar (NVIDIA); Ananda Samajdar, a fellow Georgia Tech Ph.D. student; and Krishna. In addition, some of the open-source artifacts developed in the thesis, such as MAERI and MAESTRO, are already being used by several research groups in industry, national labs, and other universities.

###

* IEEE-CS TCCA stands for the IEEE Computer Society Technical Committee on Computer Architecture.

Groups

School of Electrical and Computer Engineering

Categories
Institute and Campus, Alumni, Student and Faculty, Student Research, Research, Computer Science/Information Technology and Security, Engineering
Related Core Research Areas
Data Engineering and Science, Electronics and Nanotechnology
Keywords
Hyoukjun Kwon, Tushar Krishna, School of Electrical and Computer Engineering, Synergy Lab, Georgia Tech, ACM-SIGARCH / IEEE-CS TCCA Outstanding Dissertation Award, International Symposium on Computer Architecture, IEEE Computer Society, Association for Computing Machinery, Deep Neural Networks, deep learning accelerators, machine learning, deep learning, computer vision, speech recognition, natural language processing, recommendation systems, computer architecture, hardware accelerators, edge, cloud, domain-specific accelerators, nvidia, MAERI, MAESTRO
Status
  • Created By: Jackie Nemeth
  • Workflow Status: Published
  • Created On: Aug 16, 2021 - 1:25pm
  • Last Updated: Aug 16, 2021 - 1:31pm