Title: Methods to Improve Model and Data Privacy in Deep Learning
Committee:
Dr. Qureshi, Advisor
Dr. AlRegib, Chair
Dr. Krishna
Dr. Prakash
Abstract: The objective of the proposed research is to develop algorithmic and hardware techniques that improve model and data privacy in deep learning during inference and training. Rapid progress in machine learning (ML) has produced models with state-of-the-art performance in domains such as computer vision, natural language processing, and product recommendation. Training and deploying these models, however, raises the challenge of preserving the privacy of the parties involved. For instance, during remote ML inference, an end-user can steal the service provider's private model using only black-box queries to the target model. Conversely, commercial ML inference systems typically have no measures in place to protect the privacy of user data, which is available to the service provider in unobfuscated form. Data privacy concerns also arise during training, when data is distributed across multiple mutually distrusting parties that want to jointly train a model without revealing their private data to one another. This thesis tackles these privacy challenges by proposing novel attacks and defenses that push the state of the art for privacy in ML along two dimensions: (1) model privacy during inference, and (2) data privacy during inference and training.
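To make the model-stealing threat concrete, the sketch below shows a minimal form of black-box model extraction: the attacker treats the deployed model as an oracle and distills a surrogate ("clone") purely from query responses. The toy architectures, the random query distribution, and all names here are illustrative assumptions, not the attacks or defenses proposed in this thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical victim model: the attacker can query it remotely
# but cannot inspect its weights (black-box access only).
victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

def query_victim(x: torch.Tensor) -> torch.Tensor:
    """Black-box oracle: returns only output probabilities, as a remote API would."""
    with torch.no_grad():
        return F.softmax(victim(x), dim=-1)

# Attacker's surrogate, trained purely on (query, response) pairs.
clone = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(clone.parameters(), lr=1e-3)

for step in range(1_000):
    x = torch.randn(128, 20)      # attacker-chosen queries (random here)
    target = query_victim(x)      # labels come only from the oracle
    # Distill the victim's behavior into the clone by matching its outputs.
    loss = F.kl_div(F.log_softmax(clone(x), dim=-1), target,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A defense against this class of attack would aim to make such query-based cloning expensive or inaccurate, for example by limiting what the oracle reveals per query.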
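The distributed-training setting can likewise be pictured with a federated-averaging-style sketch, in which each party shares only model weights and never raw data. This is an assumed baseline setup for illustration, with placeholder data and hyperparameters; weight sharing by itself is not a formal privacy guarantee, which is part of what motivates the proposed research.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

# Each party's private dataset stays local (toy synthetic data here).
party_data = [(torch.randn(256, 20), torch.randint(0, 2, (256,)))
              for _ in range(3)]

global_model = make_model()
for rnd in range(10):                        # communication rounds
    local_states = []
    for x, y in party_data:
        local = copy.deepcopy(global_model)  # start from current global weights
        opt = torch.optim.SGD(local.parameters(), lr=0.1)
        for _ in range(5):                   # local steps on private data
            loss = F.cross_entropy(local(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        local_states.append(local.state_dict())
    # The server aggregates weights only; raw data is never transmitted.
    avg = {k: torch.stack([sd[k] for sd in local_states]).mean(dim=0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
```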