Speaker
Aditya Nori, Microsoft Research
Title
Invariant Generation: A Machine Learning Perspective
Abstract
Computing good invariants is key to effective and efficient program verification. In this talk, I will describe our experiences in using machine learning techniques (Bayesian inference, SVMs) to compute invariants useful for program verification. The first project, ANEK, uses Bayesian inference to compute invariants useful for modular typestate verification of programs. These invariants consist of pre- and postconditions along with aliasing invariants known as access permissions. A novel feature of ANEK is that it can generate program invariants even when the code under analysis gives rise to conflicting constraints, a situation that typically occurs when there are bugs. The design of ANEK also makes it easy to add heuristic constraints that encode intuitions gleaned from several years of experience writing specifications, which allows it to infer invariants that are better in a subjective sense. The ANEK algorithm is based on a modular analysis that makes it fast and scalable while producing reliable and sound invariants. Our implementation of ANEK infers the access-permission invariants used by the PLURAL modular typestate checker for Java programs, and we present empirical results that demonstrate the effectiveness of ANEK.
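To make the "conflicting constraints" idea concrete, here is a toy sketch (not ANEK's actual Bayesian inference algorithm, and the annotation names are hypothetical): candidate annotations are scored against weighted soft constraints gathered from call sites, and the highest-scoring assignment is chosen even when a buggy call site contributes a contradicting constraint.

```python
from itertools import product

# Hypothetical permission annotations for one method under analysis.
candidates = ["requires_unique", "ensures_full"]

# (constraint, weight): soft constraints gathered from the code.
# The first two conflict, as happens when one call site is buggy.
constraints = [
    (lambda a: a["requires_unique"], 3.0),       # most call sites pass a unique reference
    (lambda a: not a["requires_unique"], 1.0),   # one buggy call site disagrees
    (lambda a: a["requires_unique"] <= a["ensures_full"], 2.0),  # heuristic: unique in -> full out
]

# Brute-force MAP-style search: pick the assignment maximizing the
# total weight of satisfied constraints.
best, best_score = None, float("-inf")
for values in product([False, True], repeat=len(candidates)):
    a = dict(zip(candidates, values))
    score = sum(w for f, w in constraints if f(a))
    if score > best_score:
        best, best_score = a, score
```

Despite the contradiction, a best assignment exists: the heavily weighted constraints win, so the buggy call site is outvoted rather than blocking inference.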
In the second project, INTERPOL, we show how interpolants can be viewed as classifiers in supervised machine learning. This view has several advantages. First, we are able to use off-the-shelf classification techniques, in particular support vector machines (SVMs), for interpolation. Second, we show that SVMs can find relevant predicates for a number of benchmarks; since classification algorithms are predictive, the interpolants computed via classification are likely to be relevant predicates or invariants. Finally, the machine learning view also enables us to handle superficial non-linearities: even if the underlying problem structure is linear, the symbolic constraints can give the impression that we are solving a non-linear problem. Since learning algorithms try to mine the underlying structure directly, we can discover the linear structure of such problems. We demonstrate the feasibility of INTERPOL via experiments on benchmarks from various papers on program verification.
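The "interpolants as classifiers" view can be sketched as follows (a minimal illustration, not the INTERPOL tool itself; the talk uses SVMs, but any linear separator shows the idea, so this sketch uses a stdlib-only perceptron). Reachable "good" states are positive samples, error states are negative samples, and the learned separating hyperplane w·s + b = 0 is read back as a candidate predicate.

```python
# Sample program states over variables (x, y); the data is illustrative.
pos = [(0, 1), (1, 3), (2, 2), (3, 5)]   # states that satisfy the assertion
neg = [(2, 0), (4, 1), (5, 3), (6, 2)]   # error states that violate it

# Perceptron training: converges on linearly separable data.
w, b = [0.0, 0.0], 0.0
samples = [(p, 1) for p in pos] + [(n, -1) for n in neg]
for _ in range(100):
    updated = False
    for (x, y), label in samples:
        if label * (w[0] * x + w[1] * y + b) <= 0:   # misclassified: update
            w[0] += label * x
            w[1] += label * y
            b += label
            updated = True
    if not updated:
        break

# The hyperplane is a candidate predicate separating good from bad states:
# w[0]*x + w[1]*y + b > 0 holds on all positive samples.
```

Because the learner mines the structure directly from samples, it recovers a simple linear separator (here, roughly y - x bounded below) even when the symbolic constraints that generated the states look more complicated.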
Speaker Bio
Aditya Nori is a member of the programming languages and machine learning groups at Microsoft Research India. He is also an adjunct professor at IIT Hyderabad. His research interests include algorithms for program analysis and machine learning, with a special focus on tools for improving software reliability and programmer productivity. He received his PhD in Computer Science from the Indian Institute of Science, Bangalore.