Arts & Sciences Events
[PAST EVENT] Kevin Moran, Computer Science - Ph.D. Dissertation Proposal
Location
Chancellors Hall (formerly Tyler Hall), Room 114
300 James Blair Dr
Williamsburg, VA 23185
Abstract:
Mobile devices such as smartphones and tablets have become ubiquitous in today's computing landscape. These devices have ushered in entirely new populations of users, and mobile operating systems are now outpacing more traditional "desktop" systems in terms of market share. The applications that run on these mobile devices (often referred to as "apps") have become a primary means of computing for millions of users and, as such, have garnered immense developer interest. These apps allow for unique, personal software experiences through touch-based UIs and a complex assortment of sensors. However, designing and implementing high-quality mobile apps can be a difficult process. In this dissertation proposal, we present three novel approaches for automating and improving current software design and testing practices for mobile apps.
Our first project aims to help improve the quality of graphical user interfaces (GUIs) for mobile apps by automatically detecting instances where a GUI was not implemented to its intended specifications. The inception of a mobile app typically takes the form of a mock-up of the GUI, represented as a static image (i.e., a screenshot) delineating the proper layout and style of GUI widgets that satisfy requirements. The process of creating these mock-ups is typically carried out by an independent team of designers using professional photo editing software. Following this initial mock-up process, the design artifacts are then handed off to developers whose goal is to accurately implement the GUI in code. Given the sizable abstraction gap between mock-ups and code, developers often introduce mistakes related to the GUI that can negatively impact an app's success in highly competitive marketplaces. To help improve this process we introduce an approach that resolves GUI-related information from both implemented apps and mock-ups and uses computer vision techniques to identify common errors in the implementations of mobile GUIs. We instantiated this approach for Android in a tool called GVT and carried out both a controlled empirical evaluation with open-source applications and an industrial evaluation with designers and developers from Huawei, a major software and telecommunications company. The results show that GVT solves an important, difficult, and highly practical problem with remarkable efficiency and accuracy and is both useful and scalable from the point of view of industrial practitioners and academic researchers. GVT is currently in active use by over one thousand designers and developers at Huawei to improve the quality of mobile apps.
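To give a flavor of the kind of mock-up-versus-implementation comparison described above, the sketch below flags screen regions where an implemented GUI's screenshot deviates from the design mock-up using simple tile-based pixel differencing. This is only an illustrative toy, not GVT's actual pipeline; the function name, tile size, and tolerance are invented for this example, and real mock-up validation must also resolve widget boundaries and styles rather than raw pixels.

```python
import numpy as np

def diff_regions(mockup, screenshot, tile=4, tol=10):
    """Flag tiles where the implemented GUI deviates from the mock-up.

    mockup, screenshot: HxW uint8 grayscale arrays of equal size.
    Returns (row, col) indices of tiles whose mean absolute pixel
    difference exceeds `tol`.
    """
    assert mockup.shape == screenshot.shape
    h, w = mockup.shape
    flagged = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            a = mockup[r:r + tile, c:c + tile].astype(int)
            b = screenshot[r:r + tile, c:c + tile].astype(int)
            if np.abs(a - b).mean() > tol:
                flagged.append((r // tile, c // tile))
    return flagged

# A widget rendered in the wrong place shows up as differing tiles.
base = np.zeros((8, 8), dtype=np.uint8)
impl = base.copy()
impl[0:4, 0:4] = 255  # misplaced widget
print(diff_regions(base, base))  # []
print(diff_regions(base, impl))  # [(0, 0)]
```

In practice a tool like GVT reasons over GUI widgets and their metadata, not raw tiles, which is what lets it report *which* widget violates the design specification.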
In our second project, we aim to completely automate the process of translating a mock-up of a mobile app's GUI to code through novel applications of machine learning techniques. Thus, we present an approach that enables machine-driven, accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, large-scale software repository mining, automated dynamic analysis, and deep convolutional neural networks (CNNs) are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called ReDraw. Our evaluation illustrates that ReDraw achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners from Google, Facebook, and Huawei illustrate ReDraw's potential to improve real design and development workflows.
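The K-nearest-neighbors step mentioned above can be illustrated with a minimal sketch: given feature vectors for known, labeled GUI components, a query component is assigned the majority label among its k closest neighbors. The feature choice (width, height, aspect ratio) and all names here are hypothetical simplifications; ReDraw's actual classification uses CNNs over component images, with KNN applied to hierarchy assembly.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training vectors under Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy feature vectors: (width, height, aspect ratio) per widget.
X = np.array([[200, 40, 5.0], [180, 36, 5.0],   # button-like
              [48, 48, 1.0],  [44, 44, 1.0]])   # toggle-like
y = np.array(["button", "button", "toggle", "toggle"])
print(knn_predict(X, y, np.array([190, 38, 5.0])))  # button
```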
Finally, our third project aims to support developers in testing tasks for mobile apps through an automated crash discovery and reporting approach called CrashScope. Our approach explores a given Android app using systematic input generation with the intrinsic goal of triggering crashes. The GUI-based input generation engine is driven by a combination of static and dynamic analyses that target common, empirically derived root causes of crashes in Android apps. When a crash is detected, CrashScope generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device. We evaluated CrashScope's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CrashScope performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CrashScope's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human-written reports.
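One ingredient of an augmented crash report is the captured exception stack trace. As a rough illustration only (not CrashScope's implementation; the function name and regexes are invented here), the sketch below pulls the exception class and stack frames out of logcat-style Android output:

```python
import re

def extract_crash(logcat_text):
    """Return (exception_name, stack_frames) for the first fatal
    exception block in logcat-style output, or None if absent."""
    lines = logcat_text.splitlines()
    for i, line in enumerate(lines):
        if "FATAL EXCEPTION" in line:
            exc, frames = None, []
            for follow in lines[i + 1:]:
                em = re.match(r"\s*(\S+Exception): ", follow)
                if em:
                    exc = em.group(1)          # e.g. the exception class
                fm = re.match(r"\s*at (\S+)\(", follow)
                if fm:
                    frames.append(fm.group(1))  # fully qualified method
            return exc, frames
    return None

log = """FATAL EXCEPTION: main
java.lang.NullPointerException: Attempt to invoke virtual method
    at com.example.MainActivity.onCreate(MainActivity.java:42)
    at android.app.Activity.performCreate(Activity.java:7009)
"""
print(extract_crash(log))
```

A full reporting pipeline would pair such a trace with the recorded GUI event sequence so the crash can be replayed on a device.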
Bio:
Kevin Moran is a Ph.D. candidate at William & Mary. He is a member of the SEMERU Research Group and is advised by Dr. Denys Poshyvanyk. He received a B.A. in Physics from the College of the Holy Cross in 2013 and received his M.S. in Computer Science from William & Mary in 2015. His main research interests include software engineering, maintenance, and evolution with a focus on mobile platforms. Additionally, he explores applications of data mining and machine learning to software engineering problems.