PhD Proposal by Ramyad Hadidi


Event Details
  • Date/Time:
    • Thursday, December 19, 2019
      11:30 am - 1:30 pm
  • Location: Klaus 2100
Summaries

Summary Sentence: Deploying Deep Neural Networks in Edge with Distribution


Title: Deploying Deep Neural Networks in Edge with Distribution

------------

 

Ramyad Hadidi

Ph.D. Student

School of Computer Science

College of Computing

Georgia Institute of Technology

 

Date: Thursday, December 19, 2019

Time: 11:30 AM - 1:30 PM (EST)

Location: Klaus 2100

 

Committee:

------------

Dr. Hyesoon Kim (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Saibal Mukhopadhyay (School of Electrical and Computer Engineering, Georgia Institute of Technology)

Dr. Tushar Krishna (School of Electrical and Computer Engineering, Georgia Institute of Technology)

Dr. Alexey Tumanov (School of Computer Science, Georgia Institute of Technology)

 

 

Abstract:

------------

The widespread applicability of deep neural networks (DNNs) has led edge computing to emerge as a trend that extends our capabilities to domains such as robotics, autonomous technologies, and Internet-of-Things devices. Because of the tight resource constraints of individual edge devices, computing accurate predictions while providing fast execution is a key challenge. Moreover, modern DNNs increasingly demand more computation power than their predecessors. As a result, the current approach is to rely on compute resources in the cloud by offloading DNN inference computations. This approach not only raises privacy concerns but also relies on network infrastructure and data centers that are neither scalable nor able to guarantee fast execution.

 

Our key insight is that edge devices can break their individual resource constraints by distributing the computation of DNNs across collaborating peer edge devices. In our approach, edge devices cooperate to perform single-batch inference in real time while exploiting several model-parallelism methods. Nonetheless, since communication is costly and current DNN models exhibit a single chain of dependencies, distributing and parallelizing the computations of current DNNs may not be an effective solution for edge domains. Therefore, to benefit efficiently from distributed computing resources with low communication overhead, we propose new handcrafted edge-tailored models that consist of several independent and narrow DNNs. Additionally, we explore an automated neural architecture search methodology and propose ParallelNets, custom DNN architectures with low communication overhead and high parallelization opportunities.
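
To make the idea of independent, narrow DNNs concrete, the following is a minimal PyTorch sketch. It is an illustration under our own assumptions, not the proposal's actual models or code, and the names (ParallelBranches, num_branches, the layer sizes) are placeholders. It replaces one wide fully connected block with several narrow branches that share no intermediate activations, so each branch could in principle run on a separate edge device and only the small branch outputs would need to be communicated.

import torch
import torch.nn as nn

class ParallelBranches(nn.Module):
    # Illustrative only: splits one wide fully connected block into
    # `num_branches` independent narrow blocks whose outputs are
    # concatenated at the end (no cross-branch communication in between).
    def __init__(self, in_features: int, out_features: int, num_branches: int = 4):
        super().__init__()
        assert out_features % num_branches == 0, "out_features must split evenly"
        width = out_features // num_branches
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_features, width),
                nn.ReLU(),
                nn.Linear(width, width),
            )
            for _ in range(num_branches)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # In a distributed deployment each branch would run on its own device;
        # here they run one after another on a single device for clarity.
        return torch.cat([branch(x) for branch in self.branches], dim=-1)

# Single-batch (batch size 1) inference, as targeted by real-time edge use cases.
model = ParallelBranches(in_features=128, out_features=64, num_branches=4)
with torch.no_grad():
    y = model(torch.randn(1, 128))
print(y.shape)  # torch.Size([1, 64])

In this sketch, once the input is broadcast, the only data exchanged are the four 16-element branch outputs, which is the kind of low communication overhead the proposed edge-tailored models and ParallelNets aim for.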

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal