[PAST EVENT] Trustworthy Machine Learning From Untrusted Models
Speaker: Dr. Ting Wang, Lehigh University
Today's machine learning (ML) systems are no longer built from scratch but are composed from an array of primitive models. This paradigm shift has significantly simplified the development cycle of ML systems. Yet, because most primitive models are contributed by untrusted third parties, their lack of vetting carries profound security implications. In this talk, I will demonstrate that malicious primitive models pose immense threats to the security of ML systems. I will present a general class of model-reuse attacks wherein malicious models, once integrated into ML systems, are able to fully control the behavior of their host systems. I will then discuss potential countermeasures against such threats, which enforce security protection throughout the life cycle of ML systems. I will describe two effective mitigation tools: one performs offline model checking, determining whether a third-party model is free of vulnerabilities; the other performs runtime system monitoring, detecting and repairing abnormal system behavior. Through this talk, I hope to raise awareness of ML security issues and promote more principled practices for building and operating ML systems.
Dr. Ting Wang is an assistant professor in the Department of Computer Science and Engineering at Lehigh University. Prior to joining Lehigh, he obtained his doctoral degree from the Georgia Institute of Technology. Dr. Wang conducts research at the intersection of machine learning and privacy and security. His ongoing work focuses on making machine learning systems more practically usable by mitigating security vulnerabilities, enhancing privacy awareness, and increasing decision-making transparency. Dr. Wang is a recipient of the NSF CAREER Award and the IBM Research Innovation Award. His work has received multiple best paper awards from venues including IEEE CNS and ACM AISec.