Title: Interactive Scalable Discovery of Concepts, Evolutions, and Vulnerabilities in Deep Learning
Haekyu Park
School of Computational Science and Engineering
College of Computing
Georgia Tech
Date: Tuesday, November 15, 2022
Time: 9am - 11am ET
Location (virtual): https://gatech.zoom.us/j/98788522114?pwd=bUJuYk9hNEZCbnVPeVdMNlZVZWU1UT09
Committee
Dr. Duen Horng (Polo) Chau - Advisor, Georgia Tech, School of Computational Science and Engineering
Dr. Judy Hoffman - Georgia Tech, School of Interactive Computing
Dr. Callie Hao - Georgia Tech, School of Electrical and Computer Engineering
Abstract
Deep Neural Networks (DNNs) are now widely used in society, yet understanding how they work and what they have learned remains a fundamental challenge. How do we scalably discover and summarize the concepts a model has learned? How do such concepts evolve as the model is trained? And how do we identify and explain the vulnerabilities to which a model is susceptible?
My dissertation addresses these fundamental challenges through a human-centered approach, bridging Information Visualization and Deep Learning to create novel tools that enable interactive, scalable discovery of concepts, evolutions, and vulnerabilities in deep learning. This thesis focuses on three complementary thrusts.
(1) Scalable Visual Discovery to Interpret DNN Mechanisms. We develop scalable algorithms and visual interfaces to discover and summarize the concepts learned by DNNs. NeuroCartography scalably summarizes the concepts learned by a large-scale DNN (e.g., InceptionV1 trained on 1.2M images). It discovers what concepts a DNN learns and how those concepts interact to make predictions by clustering groups of neurons that detect the same concept and by investigating associations between related concepts based on how often they co-occur.
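To make the co-occurrence idea concrete, here is a minimal Python sketch of that style of neuron grouping. It is an illustrative simplification, not NeuroCartography's actual algorithm: the function name, the top-k image sets, the Jaccard threshold, and the greedy single-representative grouping are all assumptions made for brevity.

# A minimal sketch of co-occurrence-based neuron grouping (illustrative
# simplification, NOT NeuroCartography's algorithm): neurons whose
# top-activating image sets overlap strongly (high Jaccard similarity)
# are grouped as detecting the same concept.
# `activations` is assumed to be a (num_images, num_neurons) array,
# e.g., spatially max-pooled conv-layer outputs over a probe image set.
import numpy as np

def group_neurons_by_cooccurrence(activations, top_k=50, min_jaccard=0.5):
    num_images, num_neurons = activations.shape
    # For each neuron, the indices of its top-k most activating images.
    top_images = [set(np.argsort(activations[:, n])[-top_k:])
                  for n in range(num_neurons)]
    groups = []  # each group: a set of neuron indices sharing a concept
    for n in range(num_neurons):
        for group in groups:
            rep = next(iter(group))  # greedy: compare to one representative
            inter = len(top_images[n] & top_images[rep])
            union = len(top_images[n] | top_images[rep])
            if union and inter / union >= min_jaccard:
                group.add(n)
                break
        else:
            groups.append({n})
    return groups

# Usage with random placeholder data:
# acts = np.random.rand(1000, 256)
# print(group_neurons_by_cooccurrence(acts)[:3])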
(2) Insights to Protect and Troubleshoot Models. We develop scalable interpretation techniques to visualize and pinpoint malfunctioning components in DNNs and to understand how those defects induce incorrect predictions. We build first-of-their-kind interactive systems such as Bluff, which visualizes and compares the activation pathways of benign and attacked images in vision-based neural networks, and SkeletonVis, which visualizes and explains how attacks manipulate human joints in human action recognition models.
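The following PyTorch sketch illustrates the kind of benign-versus-attacked pathway comparison that Bluff visualizes; it is a simplified stand-in, not Bluff's implementation. The hook-based channel ranking, the random stand-in perturbation, and the symmetric-difference comparison are assumptions for illustration; only the choice of GoogLeNet (InceptionV1) comes from the abstract.

# A minimal sketch (NOT Bluff's implementation): record the most strongly
# activated channels per conv layer for a benign input and an attacked
# counterpart, then diff the two "pathways" to see where they diverge.
import torch
import torchvision.models as models

def top_channels_per_layer(model, x, k=5):
    pathways, hooks = {}, []
    def make_hook(name):
        def hook(module, inp, out):
            # Mean over batch and spatial dims -> per-channel strength.
            strength = out.detach().mean(dim=(0, 2, 3))
            pathways[name] = set(torch.topk(strength, k).indices.tolist())
        return hook
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return pathways

model = models.googlenet(weights="DEFAULT").eval()  # InceptionV1
benign = torch.rand(1, 3, 224, 224)            # placeholder; use a real image
attacked = benign + 0.03 * torch.randn_like(benign)  # stand-in for a real attack
benign_path = top_channels_per_layer(model, benign)
attacked_path = top_channels_per_layer(model, attacked)
# Layers where the top-activated channels differ between the two inputs:
diverged = {name: benign_path[name] ^ attacked_path[name]
            for name in benign_path}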
(3) Scalable Interpretation of Concept Evolution in Deep Learning Training. Because interpreting model evolution is crucial for monitoring network training and can enable proactive intervention, we develop scalable techniques to interpret how models evolve as they are trained. We propose ConceptEvo, a general, unified interpretation framework for DNNs that reveals the inception and evolution of concepts during training. We also propose a complementary interactive interface that visualizes these concept evolutions in real time, helping users monitor and steer DNN training.
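As a rough illustration of tracking concept evolution across training checkpoints, the sketch below measures how much each neuron's activation profile over a fixed probe set drifts between consecutive checkpoints. This is an assumption-laden simplification for intuition only, not ConceptEvo's actual method; the function name and drift measure are hypothetical.

# A minimal sketch (NOT ConceptEvo's method): represent each neuron at
# each checkpoint by its activation profile over a fixed probe image set,
# then measure per-neuron drift between consecutive checkpoints via
# cosine similarity. Low similarity = the neuron's concept is changing.
import numpy as np

def concept_drift(profiles):
    """profiles: list of (num_neurons, num_probe_images) arrays,
    one per training checkpoint."""
    drifts = []
    for prev, curr in zip(profiles, profiles[1:]):
        # L2-normalize rows so the dot product is a cosine similarity.
        p = prev / (np.linalg.norm(prev, axis=1, keepdims=True) + 1e-9)
        c = curr / (np.linalg.norm(curr, axis=1, keepdims=True) + 1e-9)
        drifts.append((p * c).sum(axis=1))
    return np.stack(drifts)  # shape: (num_checkpoints - 1, num_neurons)

# Usage with random placeholder checkpoints:
# ckpts = [np.random.rand(256, 1000) for _ in range(5)]
# print(concept_drift(ckpts).shape)  # (4, 256)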
This thesis contributes to information visualization, deep learning, and, most importantly, their intersection, including open-source interactive interfaces, scalable algorithms, and a framework that unifies DNN interpretation across models. Our work is making an impact on academia, industry, and government: it has contributed to the DARPA GARD program's understanding of AI robustness against deception, has been recognized with a J.P. Morgan AI PhD Fellowship and Rising Stars in IEEE EECS, and NeuroCartography has been highlighted as a top visualization publication (top 1%) invited to SIGGRAPH.