[PAST EVENT] Recent Advancements in Stochastic, Noisy, & Derivative-Free Optimization Methods for Machine Learning

February 4, 2021
12pm - 8pm
Location
Zoom

Please join us on Zoom.

With the advent of big data and machine learning, there has been a surge of research in nonlinear optimization methods, particularly in relation to the training of machine learning systems. The scale of these problems has necessitated the use of inaccurate gradient estimates arising from data sampling and other sources of noise or error. It is therefore imperative to improve existing optimization methods and develop new algorithms that are not only efficient, scalable, and adaptive, but also noise-tolerant and amenable to parallelism. Motivated by this set of applications, I will provide an overview of three threads of research that I conducted during my Ph.D.: (1) stochastic optimization via progressive batching quasi-Newton methods for machine learning; (2) noise-tolerant quasi-Newton methods for more general noise structures; and (3) finite-difference methods for (noisy) derivative-free optimization. Each topic is substantiated by convergence theory and experimental evidence, demonstrating its potential usefulness for practical applications.
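As a rough illustration of the third thread, the sketch below shows a standard forward-difference gradient estimate, the basic building block of finite-difference derivative-free methods; the function name, the fixed step size h, and the noisy test objective are illustrative choices and not taken from the talk.

```python
import numpy as np

def finite_difference_gradient(f, x, h=1e-4):
    """Forward-difference estimate of the gradient of f at x.

    When evaluations of f are noisy, the step size h must balance
    truncation error (which favors small h) against noise amplification
    (which favors larger h); here h is simply a user-chosen constant.
    """
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Example: estimate the gradient of a simple quadratic with additive noise.
rng = np.random.default_rng(0)
noisy_quadratic = lambda x: 0.5 * np.dot(x, x) + 1e-8 * rng.standard_normal()
print(finite_difference_gradient(noisy_quadratic, np.array([1.0, -2.0])))
```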

Hao-Jun Michael Shi

Ph.D. Candidate

Department of Industrial Engineering and Management Sciences

Northwestern University

Website: http://users.iems.northwestern.edu/~hjmshi/