Waterloo Artificial Intelligence Institute
Waterloo Cybersecurity and Privacy Institute

Previous Talks

View a playlist of recordings of our previous talks.

Chaowei Xiao
University of Michigan, NVIDIA Research
Date: Monday, November 30th, 2020 @ 2PM.
Talk Title: Deep Learning in Adversarial Environments
Bio: Chaowei Xiao is a research scientist at NVIDIA Research. He earned his Ph.D. at the University of Michigan, working with Professor Mingyan Liu. His research interests lie at the intersection of computer security, privacy, and machine learning. His work has been featured in multiple media outlets, including Wired, Fortune, and IEEE Spectrum. One of his research outputs is now on display at the Science Museum in London. He received the Best Paper Award at MobiCom 2014.

Date: Monday, December 14th, 2020 @ 4PM.
Talk Title: A Tale of Two Applications of Verification in Machine Learning
Abstract: The rise of machine learning, particularly in the form of deep learning, has created a qualitative shift in our conception of what software is and what software can accomplish. But, of course, it’s not all rainbows and butterflies. Researchers have been hard at work trying to understand the fragility of the machine-learning pipeline: from training to inference, small changes to the input can result in radical changes to the output, which can lead to security, safety, and ethical problems. In this talk, I will show how new techniques from software verification can help us reason about, and ensure, the robustness of machine-learning techniques against training-time (poisoning) and test-time (adversarial-example) attacks.
This talk is based on joint work with Yuhao Zhang, Samuel Drews, and Loris D’Antoni.
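As a concrete illustration of the fragility the abstract alludes to (not of the verification techniques presented in the talk), here is a minimal sketch of a gradient-sign (FGSM-style) adversarial perturbation against a toy linear classifier; all sizes and values below are invented for the example:

```python
import numpy as np

# Toy linear classifier: predict class 1 iff w . x + b > 0.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
b = 0.0
x = rng.normal(size=100)
score = w @ x + b

# FGSM-style perturbation: nudge every coordinate by at most eps in the
# direction that pushes the score towards the opposite class.
eps = 0.25
x_adv = x - eps * np.sign(w) * np.sign(score)

# The per-coordinate change is tiny (at most eps), but the score moves
# by eps * ||w||_1, which in high dimensions is often enough to flip
# the prediction.
print("clean score:     ", score)
print("perturbed score: ", w @ x_adv + b)
```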
Bio: Aws Albarghouthi is an assistant professor at the University of Wisconsin-Madison. He studies automated synthesis and verification of programs. He received his PhD from the University of Toronto in 2015. He has received a number of best-paper awards for his work (at FSE, UIST, and FAST), an NSF CAREER award, a Google Faculty Research Award, and Facebook Research Awards. Aws is very excited about his virtual visit to Waterloo.

Date: Monday, December 21st, 2020 @ 4PM.
Talk Title: Understanding and Repairing Deep Neural Networks
Abstract: Deep neural networks (DNNs) have been successfully applied to a wide variety of problems, including image recognition, natural-language processing, medical diagnosis, and self-driving cars. As the accuracy of DNNs has increased, so have their complexity and size, making the outputs of such models difficult to meaningfully interpret. The talk describes a new symbolic representation for DNNs that allowed us to exactly compute integrated gradients, a state-of-the-art network attribution method that until now had only been approximated. Moreover, DNNs are far from infallible, and mistakes made by DNNs have led to loss of life, motivating research on verification and testing to find mistakes in DNNs. In contrast, the talk describes techniques and tools for repairing a trained DNN once a mistake has been discovered. We present Provable Repair of DNNs, which computes a minimal change to the parameters of a trained DNN to correct its behavior according to a given specification, and ensures that the patch is provably effective, generalizing, local, and efficient.
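Since the abstract contrasts exact computation of integrated gradients with approximation, here is a minimal sketch of the standard sampled approximation for reference; the tiny network, sizes, and step count are invented for illustration, and the talk's symbolic representation computes the same path integral exactly rather than by sampling:

```python
import numpy as np

def f(x, W, v):
    """Tiny one-hidden-layer ReLU network: f(x) = v . relu(W x)."""
    return v @ np.maximum(W @ x, 0.0)

def grad_f(x, W, v):
    """Analytic gradient of f at x (a subgradient at ReLU kinks)."""
    active = (W @ x > 0.0).astype(float)
    return W.T @ (v * active)

def integrated_gradients(x, baseline, W, v, steps=100):
    """Riemann-sum (midpoint-rule) approximation of integrated gradients
    along the straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline), W, v)
    return (x - baseline) * total / steps

rng = np.random.default_rng(0)
W, v = rng.normal(size=(8, 4)), rng.normal(size=8)
x, baseline = rng.normal(size=4), np.zeros(4)

ig = integrated_gradients(x, baseline, W, v)
# Completeness check: attributions should sum to f(x) - f(baseline),
# up to the sampling error that an exact symbolic method avoids.
print(ig.sum(), f(x, W, v) - f(baseline, W, v))
```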
Aditya Thakur's Bio: Aditya Thakur is an assistant professor of Computer Science at the University of California, Davis. He received his Ph.D. from the University of Wisconsin-Madison, and has held positions at Google, Microsoft Research, and the University of California, Berkeley. His research interests include programming languages, machine learning, formal methods, and software engineering. He received the Facebook Probability and Programming Research Award in 2019 and 2020, and the Facebook Testing and Verification Research Award in 2018.
Matthew Ali Sotoudeh's Bio: Matthew Sotoudeh is a senior undergraduate student at the University of California, Davis, majoring in Computer Science and Mathematics, where he is a Regents Scholar.

Date: Monday, January 11th, 2021 @ 1PM.
Talk Title: What does it mean for ML to be trustworthy?
Abstract: The attack surface of machine learning is large: training data can be poisoned, predictions manipulated using adversarial examples, models exploited to reveal sensitive information contained in training data, etc. This is in large part due to the absence of security and privacy considerations in the design of ML algorithms. Yet, adversaries have clear incentives to target these systems. Thus, there is a need to ensure that computer systems that rely on ML are trustworthy. Fortunately, we are at a turning point where ML is still being adopted, which creates a rare opportunity to address the shortcomings of the technology before it is widely deployed. Designing secure ML requires that we have a solid understanding as to what we expect legitimate model behavior to look like. We structure our discussion around two directions, which we believe are likely to lead to significant progress. The first encompasses a spectrum of approaches to verification and admission control, which is a prerequisite to enable fail-safe defaults in machine learning systems. The second pursues formal frameworks for security and privacy in machine learning, which we argue should strive to align machine learning goals such as generalization with security and privacy desiderata like robustness or privacy. We illustrate these directions with recent work on adversarial examples, privacy-preserving ML, machine unlearning, and deepfakes.
Bio: Nicolas Papernot is an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto. He is also a faculty member at the Vector Institute where he holds a Canada CIFAR AI Chair, and a faculty affiliate at the Schwartz Reisman Institute. His research interests span the security and privacy of machine learning. Nicolas is a Connaught Researcher and was previously a Google PhD Fellow. His work on differentially private machine learning received a best paper award at ICLR 2017. He serves on the program committees of several conferences including ACM CCS, IEEE S&P, and USENIX Security. He is also the co-author of CleverHans and TensorFlow Privacy, two open-source libraries widely adopted in the technical community to benchmark the security and privacy of machine learning. He earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel. Upon graduating, he spent a year as a research scientist at Google Brain in Úlfar Erlingsson's group.

Date: Monday, January 18th, 2021 @ 4PM.
Talk Title: Robustness in unsupervised and supervised machine learning
Abstract: Recently, the need for robust machine learning algorithms has become apparent. Whether due to errors in data collection, model misspecification, or adversarial attacks, contaminated datasets arise in many areas. This is an issue, as existing methods appear to be quite brittle to small amounts of errors. Even more worryingly, these models are being deployed in many security-critical settings, such as self-driving cars, where reliability is an absolute must. In this talk, I will describe a line of work in which we provide provable guarantees for robust machine learning in several fundamental settings. I’ll begin by discussing the problem of robust estimation of the mean and covariance of a Gaussian distribution, and how to relax this to distributions with weaker assumptions on the moments. I will then describe how these methods can be used to “robustify” supervised learning algorithms by applying robust mean estimation algorithms to the gradients of the dataset. While theoretically sound, the algorithms are also practical and efficient, and I will present experimental results on both synthetic and real-world data. Based on joint works with Ilias Diakonikolas, Daniel M. Kane, Jerry Li, Ankur Moitra, Jacob Steinhardt, and Alistair Stewart.
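To give a flavor of the "robustify via robust gradient aggregation" idea, here is a minimal sketch using a coordinate-wise trimmed mean; this is a much simpler aggregator than the filtering algorithms behind the talk, and the data and corruption model below are invented for illustration:

```python
import numpy as np

def trimmed_mean(G, trim_frac=0.1):
    """Coordinate-wise trimmed mean of per-example gradients G (n x d):
    sort each coordinate and drop the k smallest and k largest values
    before averaging. A simple stand-in for robust mean estimation."""
    n = G.shape[0]
    k = int(n * trim_frac)
    G_sorted = np.sort(G, axis=0)
    return G_sorted[k:n - k].mean(axis=0)

# Toy setup: per-example gradients, a small fraction wildly corrupted.
rng = np.random.default_rng(1)
n, d = 200, 5
clean = rng.normal(size=(n, d))
poisoned = clean.copy()
poisoned[:10] += 100.0  # 5% of gradients poisoned

target = clean.mean(axis=0)
print("naive mean error:  ", np.linalg.norm(poisoned.mean(axis=0) - target))
print("trimmed mean error:", np.linalg.norm(trimmed_mean(poisoned) - target))
```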
Bio: Gautam Kamath is an Assistant Professor at the David R. Cheriton School of Computer Science at the University of Waterloo. He has a B.S. in Computer Science and Electrical and Computer Engineering from Cornell University, and an M.S. and Ph.D. in Computer Science from the Massachusetts Institute of Technology. His research interests lie in methods for statistics and machine learning, with a focus on challenges that arise in modern data analysis, including data privacy and robustness. He was a Microsoft Research Fellow as part of the Simons-Berkeley Research Fellowship Program at the Simons Institute for the Theory of Computing. He is the recipient of an NSERC Discovery Accelerator Supplement, and was awarded the Best Student Presentation Award at the ACM Symposium on Theory of Computing in 2012.

Date: Monday, January 25th, 2021 @ 4PM EST.
Talk Title: Fairness in ML

Date: Monday, February 1st, 2021 @ 4PM EST.
Talk Title: Clustering - what both practitioners and theoreticians are doing wrong.

Date: Monday, February 8th, 2021 @ 4PM EST.
Talk Title: A learning problem that is independent of the set theory ZFC axioms

Bio: Shai Ben-David earned his Ph.D. in mathematics from the Hebrew University in Jerusalem and was a professor of computer science at the Technion (Israel Institute of Technology). He has held visiting faculty positions at the Australian National University, Cornell University, ETH Zurich, TTI Chicago, and the Simons Institute at Berkeley. Since 2004 Shai has been a professor at the David R. Cheriton School of Computer Science at the University of Waterloo, and since 2019 he has also been a faculty member at the Vector Institute in Toronto. In recent years his research focus has turned to machine learning theory. Among his notable contributions in that field are pioneering steps in the analysis of domain adaptation, learnability of real-valued functions, and change detection in streaming data. Shai has also made fundamental contributions to the theory of clustering and published seminal works on average-case complexity, competitive analysis of algorithms, and alternatives to worst-case complexity.
Highlights:
  • President of the Association for Computational Learning Theory (2009-2012).
  • Program chair for the major machine learning theory conferences (COLT and ALT), and frequent area chair for ICML, NIPS, and AISTATS.
  • Co-authored the textbook “Understanding Machine Learning: From Theory to Algorithms” (Cambridge University Press, 2014).
  • Co-recipient of several Best Paper Awards, most recently at NIPS 2018.
  • CIFAR AI Chair.
  • University Research Chair at the University of Waterloo.

Date: Monday, February 15th, 2021 @ 4PM EST.
Talk Title: Certified Artificial Intelligence
Abstract: Despite surpassing human-level performance in many challenging domains such as vision, planning, and the natural sciences, there remain concerns about the fragility of modern data-driven AI systems when applied in the real world, which poses a threat to their wider adoption. Indeed, obtaining AI systems theoretically guaranteed to be safe and reliable is a fundamental challenge of critical importance. In this talk, Gagandeep will present a path towards addressing this fundamental problem. Specifically, he will introduce new mathematical methods combining convex optimization with the classical abstract interpretation framework that enable scalable and precise logical reasoning about the (potentially infinite number of) behaviors of an AI system (e.g., a deep neural network). He will then show how these methods enable both the creation of state-of-the-art automated verifiers for modern AI systems and the design of new provable training techniques. Finally, he will outline several promising future research directions.
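As a taste of what abstract interpretation for neural networks means, here is a minimal interval-bound-propagation sketch in NumPy; the interval domain is far coarser than the relaxations developed in Gagandeep's work, and the network and perturbation radius below are invented for illustration:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> Wx + b exactly."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return (W_pos @ lo + W_neg @ hi + b,
            W_pos @ hi + W_neg @ lo + b)

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes coordinate-wise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-layer network; try to certify that output 0 beats output 1 for
# every input in the L-infinity ball of radius eps around x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

x, eps = rng.normal(size=4), 0.01
lo, hi = x - eps, x + eps
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)

# Sound but incomplete: if the check passes, robustness is proved; if it
# fails, the network may still be robust (intervals over-approximate).
print("certified:", lo[0] > hi[1])
```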
Bio: Gagandeep Singh will be starting as a tenure-track Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC) in Fall 2021. He is currently working with VMware Research. His research interests lie at the intersection of artificial intelligence (AI) and programming languages. His long-term goal is to design end-to-end automated formal reasoning tools for real-world systems with both software and AI components, such as autonomous vehicles, robots, and AI-powered healthcare devices. Previously, he obtained a Ph.D. in Computer Science from ETH Zurich in 2020, where he designed new scalable and precise automated reasoning methods and tools for programs and deep neural networks.

Date: Monday, February 22nd, 2021 @ 4PM EST.
Talk Title: Towards ML for Quantification in SMT
Abstract: We aim to use ML to guide quantifier instantiation in SMT solving. What is the motivation for this? Quantifiers are indispensable in logic-based modeling, yet infamously hard to reason about. It can already be shown that clever instantiation of quantifiers over finite domains yields exponential speedups. We will look at this in the context of Quantified Boolean Formulas (QBF). Further, we will look at the use of ML in the context of finite first-order models. The second part of the talk will outline how ML can help in general SMT quantifier instantiation and which ML methods are applicable in this domain.
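To make the instantiation problem concrete, here is a minimal sketch using the z3py bindings (assuming the z3-solver package is installed; the monotonicity example is invented for illustration). It contrasts handing the solver a quantified axiom, which it must instantiate itself, with supplying the two ground instances a clever strategy would pick:

```python
from z3 import ForAll, Function, Implies, IntSort, Ints, Solver

# Invented example: f is monotone; does a <= b <= c entail f(a) <= f(c)?
f = Function('f', IntSort(), IntSort())
x, y = Ints('x y')
a, b, c = Ints('a b c')

s = Solver()
s.add(a <= b, b <= c, f(a) > f(c))  # assert the negated goal

# Option 1: give the solver the full quantified axiom and rely on its
# built-in instantiation heuristics (e.g., E-matching).
s.push()
s.add(ForAll([x, y], Implies(x <= y, f(x) <= f(y))))
print(s.check())  # unsat: the goal is proved
s.pop()

# Option 2: supply only well-chosen ground instances over terms already
# in the query; the solver then needs no quantifier reasoning at all.
s.add(Implies(a <= b, f(a) <= f(b)),
      Implies(b <= c, f(b) <= f(c)))
print(s.check())  # unsat again, via purely ground reasoning
```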
Bio: Mikoláš Janota was born in Prague, when it was still Czechoslovakia, where he did his Master's at Charles University and where he also developed a passion for solving and applying logic-based reasoning. He did his Ph.D. at University College Dublin, Ireland, where he worked on software verification and product configuration, followed by postdoctoral positions in Portugal and at Microsoft Research, Cambridge, UK. He held the position of assistant lecturer at the University of Lisbon, Portugal, until last year. Last September he moved back to the Czech Republic, accepting a position as a senior researcher at the Czech Technical University.