Title: Improving In-House Testing Using Field Execution Data
Qianqian Wang
Ph.D. student in Computer Science
School of Computer Science
College of Computing
Georgia Institute of Technology
Date: Wednesday, December 5, 2018
Time: 3:15pm-5pm (EST)
Location: Klaus 2100
Committee:
Prof. Alessandro Orso (Advisor), School of Computer Science, Georgia Institute of Technology
Prof. Vivek Sarkar, School of Computer Science, Georgia Institute of Technology
Dr. Spencer Rugaber, School of Computer Science, Georgia Institute of Technology
Prof. Yuriy Brun, College of Information & Computer Science, University of Massachusetts
Abstract:
Software testing is today the most widely used approach for assessing and improving software quality. Despite its popularity, however, software testing has a number of inherent limitations. First, due to resource limitations, in-house tests necessarily exercise only a tiny fraction of all the possible behaviors of a software system. Second, testers typically select this fraction of behaviors to be tested based either on some (more or less rigorous) selection criteria or on their assumptions, intuition, and experience. As a result, in-house tests are typically not representative of the software behavior exercised by real users, which ultimately results in the software behaving incorrectly and failing in the field, after it has been released.
To address this problem and improve the effectiveness of in-house testing, I propose a set of techniques for measuring and bridging the gap between in-house tests and field executions. My first technique quantifies and analyzes the differences between the behaviors exercised in-house and those exercised in the field. My second technique leverages the differences identified by the first to generate, using guided symbolic analysis, test inputs that mimic field behaviors and can be added to existing in-house test suites. Finally, my third technique leverages the program state observed in the field to improve symbolic input generation and thus make test generation more effective.
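To make the idea of "measuring the gap" concrete, the sketch below compares hypothetical branch-coverage profiles collected from in-house tests and from field executions. The profile format, function name, and metric here are illustrative assumptions for exposition only, not the instrumentation or measures used in the dissertation.

```python
# Illustrative sketch (assumed profile format, not the dissertation's actual technique):
# quantify the gap between behaviors covered by in-house tests and behaviors
# observed in the field, using sets of covered branches such as "parse.c:10:T".

def coverage_gap(in_house: set, field: set) -> dict:
    """Compare in-house and field branch-coverage profiles."""
    only_field = field - in_house        # field behaviors never exercised in-house
    only_in_house = in_house - field     # tested behaviors users never exercise
    shared = field & in_house
    return {
        "field_behaviors_untested": sorted(only_field),
        "fraction_of_field_covered_in_house":
            len(shared) / len(field) if field else 1.0,
        "in_house_only": sorted(only_in_house),
    }

if __name__ == "__main__":
    # Hypothetical example profiles.
    in_house = {"parse.c:10:T", "parse.c:10:F", "eval.c:55:T"}
    field = {"parse.c:10:T", "eval.c:55:F", "io.c:7:T"}
    report = coverage_gap(in_house, field)
    print(report["fraction_of_field_covered_in_house"])  # ~0.33: most field behavior is untested
    print(report["field_behaviors_untested"])            # candidate targets for new tests
```

In this toy view, the untested field branches would be the targets handed to the input-generation step, which (in the dissertation's approach) uses guided symbolic analysis to synthesize tests that reach them.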