[PAST EVENT] Qi Luo, Computer Science - Ph.D. Dissertation Defense

December 12, 2017
2pm - 3:30pm
Location
McGlothlin-Street Hall, Room 002
251 Jamestown Rd
Williamsburg, VA 23185

Full Description:
Testing is commonly classified into two categories: nonfunctional testing and functional testing. The goal of nonfunctional testing is to test nonfunctional requirements, such as performance and reliability. Performance testing is one of the most important types of nonfunctional testing; it aims to detect cases where an Application Under Test (AUT) exhibits unexpectedly poor performance (e.g., lower throughput) for certain input data. A central challenge in performance testing is understanding a nontrivial AUT's behavior across large numbers of combinations of input parameter values and finding the particular subsets of inputs that lead to performance bottlenecks. However, finding those inputs and identifying those bottlenecks remain mostly manual, intellectually intensive, and laborious procedures. This is especially true for an evolving software system, where some code changes may accidentally degrade performance between two released versions, and the problematic changes (out of a large number of committed changes) responsible for performance regressions under certain test inputs must be identified. This dissertation presents a set of approaches that automatically find specific combinations of input data for exposing performance bottlenecks and further analyze execution traces to identify those bottlenecks. In addition, this dissertation provides an approach that automatically estimates the impact of code changes on performance degradation between two released software versions, in order to identify the changes likely responsible for performance regressions.
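To make the idea concrete, the following is a minimal sketch of search-based performance test input generation, one common strategy for automatically hunting for bottleneck-exposing inputs. The parameter names, ranges, and the run_aut stub are hypothetical placeholders for illustration, not the dissertation's actual technique.

import random
import time

# Hypothetical input-parameter space for the AUT; names and ranges
# are illustrative placeholders, not taken from the dissertation.
PARAM_RANGES = {
    "batch_size": (1, 1000),
    "num_threads": (1, 64),
    "payload_kb": (1, 4096),
}

def run_aut(params):
    """Placeholder for invoking the AUT with the given parameters and
    returning its wall-clock execution time in seconds."""
    start = time.perf_counter()
    # ... invoke the real AUT with `params` here ...
    return time.perf_counter() - start

def random_input():
    """Sample one random combination of input parameter values."""
    return {k: random.randint(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(params):
    """Perturb one randomly chosen parameter to explore neighboring inputs."""
    child = dict(params)
    key = random.choice(list(child))
    lo, hi = PARAM_RANGES[key]
    child[key] = min(hi, max(lo, child[key] + random.randint(-10, 10)))
    return child

def search_for_bottleneck(budget=200):
    """Simple hill climbing: keep the input combination that yields the
    worst (longest) execution time observed within the test budget."""
    best_params = random_input()
    best_time = run_aut(best_params)
    for _ in range(budget):
        # Mostly exploit the current worst-case input, occasionally restart.
        candidate = mutate(best_params) if random.random() < 0.7 else random_input()
        t = run_aut(candidate)
        if t > best_time:  # slower run = better bottleneck candidate
            best_params, best_time = candidate, t
    return best_params, best_time

The search treats execution time as a fitness function to maximize, so the surviving inputs are exactly those most likely to expose a bottleneck; any search strategy (genetic algorithms, simulated annealing) could replace the hill climber in this sketch.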

Functional testing is used to test the functional correctness of AUTs, e.g., checking whether the output is as expected. Developers commonly write test suites for AUTs to exercise different functions and locate faults. During functional testing, developers rely on strategies that order test cases to achieve a certain objective, such as exposing faults faster; this is known as Test Case Prioritization (TCP). TCP techniques are commonly classified into two categories: dynamic and static techniques. A set of empirical studies has examined different TCP techniques, but a clear gap remains: no existing study has compared static techniques against dynamic techniques or comprehensively examined the impact of test granularity, efficiency, and the similarity of detected faults on TCP techniques. Thus, this dissertation presents an empirical study that thoroughly compares static and dynamic TCP techniques in terms of effectiveness, efficiency, and similarity of uncovered faults at different granularities on a large set of real-world programs. Moreover, in the literature, TCP techniques have typically been evaluated against synthetic software defects, called mutants, so it is currently unclear to what extent TCP performance on mutants is representative of the performance achieved on real faults. To answer this fundamental question, this dissertation conducts the first empirical study investigating TCP performance when applied to both real-world faults and mutation faults, in order to understand the representativeness of mutants.
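For illustration, the sketch below implements the classic greedy "additional" coverage strategy, a well-known dynamic TCP heuristic of the kind such studies compare. The test names and coverage data are hypothetical, and the dissertation's own techniques may differ.

def prioritize_additional(tests, coverage):
    """Greedy 'additional' strategy: repeatedly pick the test that covers
    the most statements not yet covered by already-selected tests.

    `coverage` maps each test name to the set of statements it covers.
    Ties (no new coverage left) fall back to total coverage.
    """
    remaining = set(tests)
    covered = set()
    order = []
    while remaining:
        best = max(remaining,
                   key=lambda t: (len(coverage[t] - covered), len(coverage[t])))
        order.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return order

# Toy usage with hypothetical statement coverage per test.
coverage = {
    "test_a": {1, 2, 3},
    "test_b": {3, 4},
    "test_c": {5},
    "test_d": {1, 2, 3, 4},
}
print(prioritize_additional(coverage.keys(), coverage))
# ['test_d', 'test_c', 'test_a', 'test_b']

Note how test_c, despite covering only one statement, is scheduled second because it is the only test adding new coverage once test_d has run; this "diminishing returns" behavior is what distinguishes the additional strategy from the simpler total-coverage ordering.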

Biography:
Qi Luo is a Ph.D. candidate at William & Mary in the Department of Computer Science. She is a member of the SEMERU Research Group and is advised by Dr. Denys Poshyvanyk. Her research interests are in performance testing and regression testing. Qi received a Bachelor's degree in Automation from Beihang University in 2008 and a Master's degree in Software Engineering from Tsinghua University in 2011. She is a student member of IEEE and ACM.