[PAST EVENT] Zeyi Tao, Computer Science - Dissertation Defense

May 17, 2022
1pm - 3pm
Location
McGlothlin-Street Hall and Zoom
251 Jamestown Rd
Williamsburg, VA 23185

Abstract:
Recent advances in Artificial Intelligence (AI) are characterized by ever-increasing datasets and rapid growth of model complexity. Many modern machine learning models, especially deep neural networks (DNNs), cannot be handled efficiently by a single machine. Hence, distributed optimization and inference have been widely adopted to tackle large-scale machine learning problems. Meanwhile, quantum computers, which can carry out certain computational tasks exponentially faster than classical machines, offer an alternative solution for resource-intensive deep learning.
 
However, two obstacles hinder us from building large-scale DNNs on distributed systems and quantum computers. First, when distributed systems scale to many nodes, the training process is slowed down by high communication costs, including frequent training data transmission and gradient exchange. Second, high computation costs prevent such applications from being widely used in academia and industry. These costs include training and inference for DNNs deployed on resource-constrained devices, as well as optimization for quantum neural networks (QNNs). To circumvent these obstacles, this dissertation focuses on streamlining the training and inference of classical DNNs and QNNs.
To reduce the communication cost of distributed training, we explore the theoretical foundations of two mainstream distributed schemes: classical distributed learning and federated learning (FL). Based on these explorations, we propose two novel optimization algorithms that effectively reduce the communication cost without sacrificing model performance. For classical distributed learning, we propose communication-efficient stochastic gradient descent (CE-SGD) to downsize the stochastic gradient used for synchronization. For federated learning, we propose a preconditioned federated optimization algorithm (PreFed) that utilizes the objective function's geometric information to accelerate the federated training process.
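To make the gradient-downsizing idea concrete, the following minimal sketch shows one common way to shrink the synchronization message: top-k gradient sparsification, in which each worker transmits only the largest-magnitude gradient entries. It is an illustrative example under assumed settings (the 1% ratio and the helper names are not from the dissertation), not the actual CE-SGD or PreFed algorithms.

```python
# Illustrative sketch only: top-k gradient sparsification before synchronization.
# NOT the dissertation's CE-SGD/PreFed method; names and the 1% ratio are assumptions.
import numpy as np

def sparsify_topk(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the largest-magnitude entries of a gradient.

    Returns the indices and values a worker would actually transmit,
    shrinking the per-round communication volume.
    """
    flat = grad.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # positions of the top-k entries
    return idx, flat[idx]

def desparsify(idx, vals, shape):
    """Rebuild a dense gradient on the server from the sparse message."""
    dense = np.zeros(int(np.prod(shape)))
    dense[idx] = vals
    return dense.reshape(shape)

# Example: a worker compresses its local gradient, the server reconstructs it.
g = np.random.randn(10_000)
idx, vals = sparsify_topk(g, ratio=0.01)           # transmit roughly 1% of the entries
g_hat = desparsify(idx, vals, g.shape)
print(f"sent {idx.size} of {g.size} values")
```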
To reduce the computation cost of DNN inference on portable devices, we use the knowledge distillation technique to propose an efficient and robust Cloud-based deep learning framework. It enables the Cloud server to generate high-quality, lightweight models, allowing small devices to execute learning tasks locally. In addition, we propose a computation-and-communication-efficient federated neural architecture search (E-FedNAS) algorithm that automatically finds a model structure suited to unseen local data. E-FedNAS progressively fine-tunes the model structure and creates the final model in one path, which makes it suitable for small devices.
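For readers unfamiliar with knowledge distillation, the short sketch below shows the standard soft-target distillation loss a Cloud server could use to train a lightweight student model from a large teacher. The temperature, loss weighting, and toy model sizes are assumptions chosen for the example, not the dissertation's exact recipe.

```python
# Illustrative sketch only: classic soft-target knowledge distillation
# (Hinton et al.), not the dissertation's specific Cloud-based framework.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    """Blend a soft-target KL term (teacher knowledge) with the usual
    hard-label cross-entropy on the true labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example: a large teacher guides a much smaller student on one batch.
teacher = torch.nn.Sequential(torch.nn.Linear(784, 1024), torch.nn.ReLU(),
                              torch.nn.Linear(1024, 10))
student = torch.nn.Sequential(torch.nn.Linear(784, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 10))
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
with torch.no_grad():
    teacher_logits = teacher(x)                    # teacher runs in inference mode
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()                                    # only the student is updated
```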

Bio:
Zeyi Tao is a Ph.D. candidate in the Department of Computer Science at William & Mary, advised by Prof. Qun Li. His research interests include machine learning, deep learning, edge computing, and online convex and non-convex optimization, with a particular focus on computation and communication efficiency in distributed learning systems such as classical distributed machine learning and federated learning. He received his Master of Science degree from William & Mary in 2017.