[PAST EVENT] Michele Tufano, Computer Science - Ph.D. Dissertation Defense
Abstract:
Source code evolves, inevitably, to remain useful, secure, correct, readable, and efficient. Developers perform software evolution and maintenance activities by transforming existing source code via corrective, adaptive, perfective, and preventive changes. These code changes are usually managed and stored by a variety of tools and infrastructures such as version control, issue trackers, and code review systems. Software Evolution and Maintenance researchers have been mining these code archives in order to distill useful insights on the nature of such developer activities. One of the long-standing goals of Software Engineering research is to better support and automate the different types of code changes performed by developers. In this thesis we depart from classic, manually crafted rule- or heuristic-based approaches and propose a novel technique to learn code transformations by leveraging the vast amount of publicly available code changes performed by developers. We rely on Deep Learning, and in particular on Neural Machine Translation (NMT), to train models able to learn code change patterns and apply them to novel, unseen source code. We present three case studies where we instantiate this basic intuition in the context of Mutation Testing, Automated Program Repair (or Bug Fixing), and, more generally, learning code changes.
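As a minimal illustration of this framing (the snippet and its token granularity are hypothetical, not the exact representation used in the thesis), a code change can be cast as a translation pair of token sequences that a sequence-to-sequence model learns to map between:

    # Hypothetical sketch: one code change framed as a source -> target
    # training pair for a sequence-to-sequence (NMT) model.
    before = "if ( index > list . size ( ) ) return null ;"   # code before the change
    after  = "if ( index >= list . size ( ) ) return null ;"  # code after the change

    # Each mined change contributes one training example: the model is
    # trained to "translate" the pre-change tokens into the post-change tokens.
    training_pair = (before.split(), after.split())
    print(training_pair)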
In our first project, we tackle the problem of generating source code mutants for Mutation Testing, a strategy used to evaluate the quality of a test suite which involves the creation of modified versions of the tested code (i.e., mutants). Classic approaches rely on well-defined mutation operators to generate the code variants, applying these operators at all possible locations in the code, or at randomly chosen ones. These operators are intended to mimic common programming mistakes and are defined by human experts, who usually derive them from observation of real bugs, their own past experience, and knowledge of programming constructs and errors. We propose a novel approach to automatically learn mutants from faults in real programs, based on the observation that buggy code arguably represents the perfect mutant for the corresponding fixed code. First, we mine bug-fixing commits from thousands of publicly available GitHub repositories; we then process these bug-fix changes with a fine-grained Abstract Syntax Tree (AST) differencing tool and abstract the source code. Next, we perform unsupervised clustering to group together bug fixes that performed similar AST operations. Finally, we train and evaluate Encoder-Decoder models to translate fixed code into buggy code (i.e., the mutated code). Starting from code fixed by developers in the context of a bug fix, our empirical evaluation showed that our models are able to predict mutants that resemble the original bugs in between 9% and 45% of the cases (depending on the model). Moreover, over 98% of the automatically generated mutants are lexically and syntactically correct.
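A minimal sketch of the abstraction step, assuming a simple regex-based tokenizer over Java-like code (the actual pipeline relies on a proper lexer and an AST differencing tool, which this example does not reproduce; all names here are illustrative):

    import re

    # Hypothetical sketch: replace concrete identifiers and numeric literals with
    # indexed placeholders so the models operate on a small, shared vocabulary.
    JAVA_KEYWORDS = {"if", "else", "return", "for", "while", "int", "void",
                     "public", "private", "new", "null", "true", "false"}

    def abstract_code(code):
        ids, lits = {}, {}
        tokens = re.findall(r"[A-Za-z_][A-Za-z_0-9]*|\d+|\S", code)
        out = []
        for tok in tokens:
            if tok in JAVA_KEYWORDS:
                out.append(tok)                  # keywords are kept as-is
            elif tok.isdigit():
                out.append(lits.setdefault(tok, f"INT_{len(lits) + 1}"))
            elif re.fullmatch(r"[A-Za-z_][A-Za-z_0-9]*", tok):
                out.append(ids.setdefault(tok, f"ID_{len(ids) + 1}"))
            else:
                out.append(tok)                  # operators and punctuation kept verbatim
        return " ".join(out)

    print(abstract_code("int total = price * quantity + 10 ;"))
    # -> int ID_1 = ID_2 * ID_3 + INT_1 ;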
In our second project, we aim to learn code transformations for the purpose of Automated Program Repair (APR). APR is one of the most challenging research problems in Software Engineering; its goal is to automatically fix bugs without developer intervention. Similarly to Mutation Testing, existing APR techniques are mostly based on hard-coded rules and operators, defined by human experts, that are used to generate potential patches for buggy code. Conversely, we aim to automatically learn how to fix bugs by observing thousands of real bug fixes performed by developers in the wild. In particular, we train an NMT model to translate buggy code into fixed code. A key difference with the previous project is that we may need to generate many different translations in order to obtain a correct patch for the buggy code. To do so, we employ Beam Search during inference, which allows the model to generate 50 or more different translations of a given buggy code fragment. In our empirical investigation we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting the patches written by developers in 9-50% of the cases, depending on the number of candidate patches we allow it to generate. The model is also able to emulate a variety of different AST operations and generates candidate patches in a split second.
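A minimal, library-free sketch of beam search at inference time, assuming an arbitrary next_token_scores function standing in for the trained decoder (the function name and the toy scorer below are illustrative, not part of the actual model):

    import math
    from heapq import nlargest

    # Hypothetical sketch: keep the `beam_width` most probable partial translations
    # at each step and return up to `beam_width` complete candidate patches.
    def beam_search(next_token_scores, start, end, beam_width=50, max_len=100):
        beams = [(0.0, [start])]                     # (log-probability, token sequence)
        finished = []
        for _ in range(max_len):
            candidates = []
            for logp, seq in beams:
                if seq[-1] == end:
                    finished.append((logp, seq))     # completed translation
                    continue
                for tok, p in next_token_scores(seq).items():
                    candidates.append((logp + math.log(p), seq + [tok]))
            if not candidates:
                break
            beams = nlargest(beam_width, candidates, key=lambda c: c[0])
        finished.extend(b for b in beams if b[1][-1] == end)
        return [seq for _, seq in nlargest(beam_width, finished, key=lambda c: c[0])]

    # Toy next-token distribution, just to show the interface.
    def toy_scores(seq):
        return {"x": 0.6, "</s>": 0.4}

    print(beam_search(toy_scores, "<s>", "</s>", beam_width=3, max_len=4))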
Finally, in our third project we push our technique to its limits and enlarge its scope to consider not only bug-fixing activities, but any type of meaningful code change performed by developers. Rather than learning from arbitrary code modifications applied in commits, we focus on accepted and merged code changes that underwent a Pull Request (PR) process. This allows us to focus on complete, meaningful code changes that have been reviewed by other developers and accepted into the master branch. Our goal is to quantitatively and qualitatively investigate the ability of an NMT model to learn how to automatically apply code changes implemented by developers. We train and experiment with the NMT model on a set of 236k pairs of code components before and after the implementation of the changes submitted in PRs. The quantitative results show that NMT can automatically replicate the changes implemented by developers during pull requests in up to 36% of the cases. We extensively and qualitatively investigate these cases in order to build a taxonomy of the code transformations that the model was able to learn automatically. The taxonomy shows that NMT can replicate a wide variety of meaningful code changes, especially refactorings and bug-fixing activities. Our results pave the way for novel research in the area of Deep Learning on code, such as the automatic learning and application of refactorings.
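As a rough sketch of how such before/after training pairs could be assembled, assuming the pre-merge and post-merge versions of each changed file are already available locally (the actual mining pipeline extracts individual code components from PRs and applies stricter filtering, which is not reproduced here):

    import difflib

    # Hypothetical sketch: pair the pre- and post-PR versions of each changed file,
    # skipping untouched or deleted files and near-total rewrites, as a crude proxy
    # for "meaningful" edits.
    def make_training_pairs(files_before, files_after, min_similarity=0.5):
        pairs = []
        for path, before in files_before.items():
            after = files_after.get(path)
            if after is None or after == before:
                continue                              # deleted or unchanged file
            similarity = difflib.SequenceMatcher(None, before, after).ratio()
            if similarity >= min_similarity:          # discard rewrites from scratch
                pairs.append((path, before, after))
        return pairs

    before = {"Foo.java": "int f() { return a + b; }"}
    after  = {"Foo.java": "int f() { return a + b + 1; }"}
    print(make_training_pairs(before, after))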
Bio:
Michele Tufano is a Ph.D. candidate at William & Mary. He is a member of the SEMERU Research Group and is advised by Dr. Denys Poshyvanyk. He received a Bachelor's degree in Computer Science (cum laude) from the University of Salerno in 2012 and a Master's degree in Computer Science (cum laude) from the University of Salerno in 2014. His research interests include the application of Deep Learning techniques to Software Engineering tasks such as Automated Program Repair, Software Testing, and Software Maintenance and Evolution. He has also worked on Android Testing, Mining Software Repositories, Code Quality, and Software Building.