Title: Reasoning about Programs in Statistically Modeled Environments
Ravi Mangal
School of Computer Science
Georgia Institute of Technology
https://www.cc.gatech.edu/~rmangal3/
Date: Thursday, July 30th, 2020
Time: 11:00 AM - 1:00 PM (EDT)
Location: https://bluejeans.com/350947608
**Note: this proposal is remote-only**
Committee:
Dr. Alessandro Orso (Advisor), School of Computer Science, Georgia Institute of Technology
Dr. Vivek Sarkar, School of Computer Science, Georgia Institute of Technology
Dr. Qirun Zhang, School of Computer Science, Georgia Institute of Technology
Dr. Bill Harris, Principal Scientist, Galois, Inc.
Dr. Aditya Nori, Sr. Principal Research Manager, Microsoft Research
Abstract:
The objects of study in this proposal are programs and algorithms that use the syntactic structure of programs to reason about them. Such algorithms, referred to as program verification algorithms in the literature, are designed to find proofs (in some background logic) of propositions about program behavior.
This proposal adopts the perspective that programs operate in a world/environment that can be modeled statistically. In other words, the inputs of a program are drawn from some distribution. This statistical perspective is relatively unexplored, though not new, in the program verification literature, and it enables us to formulate probabilistic propositions about program behavior. Probabilistic propositions are particularly useful for reasoning about programs learned from data (such as neural networks) that are not expected to exhibit the desired behavior on all program inputs (or in all environments). In this proposal, I define a notion of probabilistic robustness/Lipschitzness for neural networks. A trained neural network f is probabilistically robust if, for a randomly generated pair of inputs, f is likely to demonstrate k-Lipschitzness, i.e., the distance between the outputs computed by f is upper-bounded by k times the distance between the pair of inputs. I also present verification algorithms for finding proofs of probabilistic robustness of neural networks.
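The probabilistic k-Lipschitz condition described above can be illustrated with a small Monte Carlo sketch (this is an illustrative estimator, not one of the verification algorithms from the proposal; the function, input sampler, and metric below are assumptions for the example):

```python
import random
import math

def estimate_prob_lipschitz(f, sample_input, k, n_trials=1000):
    """Estimate P[ d(f(x1), f(x2)) <= k * d(x1, x2) ] over random input pairs.

    f: function from a list of floats to a list of floats
    sample_input: draws one input from the (assumed) environment distribution
    k: candidate Lipschitz constant
    """
    def dist(a, b):
        # Euclidean distance; the choice of metric is an assumption here.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    hits = 0
    for _ in range(n_trials):
        x1, x2 = sample_input(), sample_input()
        # Small slack absorbs floating-point rounding.
        if dist(f(x1), f(x2)) <= k * dist(x1, x2) + 1e-12:
            hits += 1
    return hits / n_trials

# Example: a linear map that scales coordinates by at most 2
# satisfies 2-Lipschitzness on every pair, so the estimate is 1.0.
random.seed(0)
f = lambda v: [2.0 * v[0], 0.5 * v[1]]
sample = lambda: [random.uniform(-1, 1), random.uniform(-1, 1)]
p = estimate_prob_lipschitz(f, sample, k=2.0)
```

A verifier, in contrast, would produce a proof of such a probabilistic bound rather than a sample-based estimate.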
Verifying probabilistic robustness of neural networks is just one application of the statistical perspective on program environments. To enable generic program verification algorithms (agnostic to the proposition to be proven and to the programming language) to benefit from this statistical perspective, this proposal presents observational abstract interpreters, a generalization of the standard abstract interpreters used for computing semantic program invariants. The invariants computed by observational abstract interpreters are permitted to be hypothetical (i.e., conditioned on hypotheses), with the hypotheses based on observed behaviors of the program. The validity of proofs of program correctness computed using hypothetical semantic invariants is ensured by embedding the hypotheses as run-time (dynamic) checks in the program. We state the metatheoretic guarantees, i.e., theorems about the behavior of observational abstract interpreters, that are expected to hold. These metatheoretic propositions remain to be proven; proving these theorems and mechanizing them in a proof assistant constitutes the pending work.
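The idea of embedding a hypothesis as a run-time check can be sketched as follows (a minimal illustration, not the proposal's formal construction; the wrapper, the example program, and the observed input range are assumptions for the example):

```python
def embed_hypothesis(f, lo, hi):
    """Guard f with a dynamic check of the hypothesis lo <= x <= hi.

    A hypothetical invariant about f's output is only claimed on runs
    where the hypothesis holds; on other inputs the check fails loudly
    instead of returning a result the proof does not cover.
    """
    def guarded(x):
        if not (lo <= x <= hi):
            raise ValueError("hypothesis violated: input outside observed range")
        return f(x)
    return guarded

# Suppose observed runs suggest the hypothesis 0 <= x <= 100.
# Under that hypothesis, interval reasoning gives the hypothetical
# invariant 1 <= f(x) <= 201 for f(x) = 2*x + 1.
f = lambda x: 2 * x + 1
g = embed_hypothesis(f, 0, 100)
ok = g(50)  # hypothesis holds, so the invariant covers this result
```

Calling `g` on an input outside [0, 100] raises instead of silently producing a value outside the proven invariant; this is how the dynamic check preserves the validity of the hypothetical proof.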