CS Distinguished Talk: Hridesh Rajan, Tulane University

October 2, 2024
12pm - 1pm
Location
McGlothlin-Street Hall, McGlothlin 020
251 Jamestown Rd
Williamsburg, VA 23185
Access & Features
  • Open to the public

Title: More Modular Deep Learning

Deep learning is a class of machine learning algorithms that has received much attention in academia and industry. It has a large number of important societal applications, from self-driving cars to question-answering systems such as Siri and Alexa. A deep learning algorithm uses multiple layers of transformation functions to convert inputs to outputs, with each successive layer learning higher-level abstractions of the data. The availability of large datasets has made it feasible to train deep learning models. Because the layers are organized in the form of a network, such models are also referred to as deep neural networks (DNNs). While the jury is still out on the impact of deep learning on the overall understanding of software's behavior, the significant uptick in its use across wide-ranging areas and safety-critical systems, e.g., autonomous driving, aviation systems, and medical analysis, warrants research on software engineering practices in the presence of deep learning.

One challenge is enabling the reuse and replacement of parts of a DNN, which has the potential to make DNN development more reliable. This talk will describe a comprehensive approach to systematically investigate the decomposition of deep neural networks into modules to enable reuse, replacement, and independent evolution of those modules. A module is an independent part of a software system that can be tested, validated, or utilized without a major change to the rest of the system. Allowing the reuse of DNN modules is expected to reduce the energy- and data-intensive training efforts needed to construct DNN models. Allowing replacement is expected to help replace faulty functionality in DNN models without costly retraining steps. Our preliminary work has shown that it is possible to decompose fully connected neural networks and CNN models into modules, and it has helped conceptualize the notion of a DNN module.

A serious problem facing the current software development workforce is that deep learning is widely used in our software systems, yet scientists and practitioners do not have a clear handle on critical problems such as the explainability of DNN models, DNN reuse, replacement, independent testing, and independent development. Before the deep learning era, there was no apparent need to investigate notions of modularity, as neural network models were mostly small, trained on small datasets, and used largely as experimental features. The notion of DNN modules developed by our work is helping make significant advances on a number of open challenges in this area. DNN modules enable the reuse of already trained DNN modules in another context, and viewing a DNN as a composition of modules instead of a black box enhances the explainability of its behavior. More modular deep learning will thus have a large positive impact on the productivity of these programmers, the understandability and maintainability of the DNN models they deploy, and the scalability and correctness of the software systems they produce.
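To make the idea of a DNN module more concrete, the following is a minimal, hypothetical PyTorch sketch, not the speaker's actual decomposition technique: it extracts the functionality associated with a single output class of a small fully connected classifier into a standalone piece that could be reused or replaced independently of the rest of the network. All class and variable names here are illustrative assumptions.

```python
# Illustrative sketch only: treating one output concern of a classifier
# as a reusable "module". Names and structure are hypothetical.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """A tiny fully connected classifier standing in for a trained DNN."""
    def __init__(self, in_dim=784, hidden=128, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

class ClassModule(nn.Module):
    """Wraps the shared layers plus a single output unit so that one
    class's functionality can be reused or replaced independently."""
    def __init__(self, trained: SmallClassifier, class_index: int):
        super().__init__()
        self.backbone = trained.backbone  # reuse the learned feature layers
        self.unit = nn.Linear(trained.head.in_features, 1)
        with torch.no_grad():  # copy only this class's output weights
            self.unit.weight.copy_(trained.head.weight[class_index:class_index + 1])
            self.unit.bias.copy_(trained.head.bias[class_index:class_index + 1])

    def forward(self, x):
        return self.unit(self.backbone(x))  # score for one class only

# Usage: extract a "module" for class 3 and run it standalone.
model = SmallClassifier()
module_for_3 = ClassModule(model, class_index=3)
score = module_for_3(torch.randn(1, 784))
```

The research described in the talk decomposes already trained networks, including CNN models; the sketch above only conveys the reuse-and-replace intuition behind a DNN module.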

Bio

Hridesh Rajan is the Dean of the School of Science and Engineering at Tulane, overseeing a wide range of departments and programs. Before joining Tulane, he was the Kingland Professor at Iowa State University, where he served as the Department Chair of Computer Science. He also held the role of founding Professor-in-Charge of Data Science Programs from 2017 to 2019, during which he established the annual Midwest Big Data Summer School and led several key data science educational initiatives.

As an academic, Rajan is well-regarded for his contributions to Software Engineering and Programming Languages. He is the creator of the Ptolemy programming language, which improved modular reasoning about crosscutting concerns, and the Boa programming language, which simplifies data-driven software engineering. His research has been recognized with numerous awards, including the NSF CAREER award, the LAS Early Achievement in Research Award, and the Facebook Probability and Programming Award. Rajan’s academic influence extends through his editorial work with IEEE Transactions on Software Engineering and ACM SIGSOFT Software Engineering Notes, and his advisory role with the Proceedings of the ACM on Programming Languages.



Sponsored by: Computer Science