Waterloo Artificial Intelligence Institute

Waterloo ML+Logic Colloquium

Waterloo Cybersecurity and Privacy Institute

Upcoming Talks

Join our group to receive emails with the links for the talks.

Date                                  Talk Title
Monday, January 25th, 2021 @ 4PM EST  Fairness in ML
Monday, February 1st, 2021 @ 4PM EST  Clustering
Monday, February 8th, 2021 @ 4PM EST  Independence from ZFC result

Bio: Shai Ben-David earned his Ph.D. in mathematics from the Hebrew University of Jerusalem and was a professor of computer science at the Technion (Israel Institute of Technology). He has held visiting faculty positions at the Australian National University, Cornell University, ETH Zurich, TTI Chicago, and the Simons Institute at Berkeley. Since 2004 Shai has been a professor at the David R. Cheriton School of Computer Science at the University of Waterloo, and since 2019 he has also been a faculty member at the Vector Institute in Toronto. In recent years his research focus has turned to machine learning theory. Among his notable contributions in that field are pioneering steps in the analysis of domain adaptation, learnability of real-valued functions, and change detection in streaming data. Shai has also made fundamental contributions to the theory of clustering and published seminal works on average-case complexity, competitive analysis of algorithms, and alternatives to worst-case complexity.
Highlights:
  • President of the Association for Computational Learning (2009-2012).
  • Program chair for the major machine learning theory conferences (COLT and ALT), and frequent area chair for ICML, NIPS, and AISTATS.
  • Co-authored the textbook “Understanding Machine Learning: From Theory to Algorithms” (Cambridge University Press, 2015).
  • Recipient of multiple Best Paper awards, most recently at NIPS 2018.
  • CIFAR AI Chair.
  • University Research Chair at the University of Waterloo.

Meeting Link: WebEx
Date: Monday, February 15th, 2021 @ 4PM EST.
Talk Title: Certified Artificial Intelligence
Abstract: Despite surpassing human-level performance in many challenging domains such as vision, planning, and the natural sciences, there remain concerns about the fragility of modern data-driven AI systems when applied in the real world, which poses a threat to their wider adoption. Indeed, obtaining AI systems that are theoretically guaranteed to be safe and reliable is a fundamental challenge of critical importance. In this talk, Gagandeep will present a path towards addressing this problem. Specifically, he will introduce new mathematical methods combining convex optimization with the classical abstract interpretation framework, enabling scalable and precise logical reasoning about the (potentially infinite number of) behaviors of an AI system (e.g., a deep neural network). He will then show how these methods enable both the creation of state-of-the-art automated verifiers for modern AI systems and the design of new provable training techniques. Finally, he will outline several promising future research directions.
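To make this style of reasoning concrete, here is a minimal sketch in Python of the simplest abstract domain, intervals, pushed through one affine-plus-ReLU layer of a network: sound bounds on every output are derived from bounds on every input, so a property checked against the bounds holds for the entire, infinite input region. This illustrates the general abstract-interpretation idea only, not the specific convex relaxations from the talk; all weights and bounds are made-up toy values.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound interval bounds on y = W @ x + b given lo <= x <= hi."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    # The lower bound takes lo where weights are positive and hi where negative.
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network with made-up weights (illustrative only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# Input region: an infinite set of inputs, summarized by a box.
x_lo, x_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
h_lo, h_hi = interval_relu(*interval_affine(x_lo, x_hi, W1, b1))
y_lo, y_hi = interval_affine(h_lo, h_hi, W2, b2)
# If the worst case over the bounds satisfies "output 0 beats output 1",
# the property is certified for every input in the box.
print("certified y0 > y1 on the whole box:", y_lo[0] > y_hi[1])
```

Plain interval bounds are often too loose to certify realistic networks; tighter convex relaxations of the kind discussed in the talk exist precisely to sharpen this analysis while keeping it sound.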
Bio: Gagandeep Singh will be starting as a tenure-track Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC) in Fall 2021. He is currently working with VMware Research. His research interests lie at the intersection of artificial intelligence (AI) and programming languages. His long-term goal is to design end-to-end automated formal reasoning tools for real-world systems with both software and AI components, such as autonomous vehicles, robots, and AI-powered healthcare devices. Previously, he obtained a Ph.D. in Computer Science from ETH Zurich in 2020, where he designed new scalable and precise automated reasoning methods and tools for programs and deep neural networks.
Meeting Link: WebEx
Date: Monday, February 22nd, 2021 @ 4PM EST.
Talk Title: TBA
Bio: TBA
Meeting Link: WebEx
Date: Monday, March 8th, 2021 @ 4PM EST.
Talk Title: Formal Verification for Neural Networks and Cyber-Physical Systems with Reachability Methods
Abstract: Neural networks have made great advances in vision and game-playing in the last decade. Beyond these core areas, however, almost every other field has at least attempted to use neural networks to some extent. Cyber-physical systems (CPS), where computers interact with the physical world, are no different. Although learning-based controllers have long been studied in control theory, recent interest in combining CPS with neural networks has soared. One important distinction with CPS is that these systems are not confined to the digital world: behaviors of a neural network in a CPS may manifest in the physical world. CPS are often safety-critical and cannot tolerate any mistakes.
The field of formal verification has traditionally focused on proving properties of finite state machines or software programs. Recent research has extended these methods to other classes of systems, such as CPS models given by differential equations, and even systems that include neural networks. This talk reviews some of our research in these areas, as well as results from the first international verification competition for neural networks (VNN-COMP), which took place last July. For a set of neural networks that performs automated aircraft collision avoidance, our nnenum tool was both the fastest entrant and the only one to successfully prove all of the formal properties in the competition.
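As a rough illustration of the set-based reachability idea (a toy sketch, not nnenum itself, which uses much richer set representations), the fragment below propagates an axis-aligned box of initial states through a discrete-time linear system and checks that the reachable boxes never touch an unsafe region. The system matrix, initial set, and unsafe threshold are invented for the example.

```python
import numpy as np

def step_box(lo, hi, A):
    """Sound box enclosure of {A @ x : lo <= x <= hi}."""
    A_pos, A_neg = np.maximum(A, 0.0), np.minimum(A, 0.0)
    return A_pos @ lo + A_neg @ hi, A_pos @ hi + A_neg @ lo

A = np.array([[0.9, 0.2],
              [-0.2, 0.9]])          # a stable rotation-plus-contraction
lo, hi = np.array([0.9, 0.9]), np.array([1.1, 1.1])  # box of initial states
unsafe_x0 = 1.8                      # unsafe region: x[0] >= 1.8

safe = True
for k in range(21):                  # step 0 checks the initial set itself
    if hi[0] >= unsafe_x0:           # box touches the unsafe half-space
        safe = False
        break
    lo, hi = step_box(lo, hi, A)     # advance the whole set one time step
print("safety proved over 20 steps:", safe)
```

Boxes over-approximate badly under rotation (the wrapping effect), which is one reason practical verification tools trade them for representations such as zonotopes or star sets.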
Bio: Stanley Bak is a computer scientist investigating the verification of autonomy, cyber-physical systems, and neural networks. He strives to develop practical formal methods that are both scalable and useful, which demands developing new theory, programming efficient tools and building experimental systems.
Stanley Bak received a Bachelor's degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude) and a Master's degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009. He completed his PhD in the Department of Computer Science at UIUC in 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and was awarded the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. From 2013 to 2018, Stanley was a Research Computer Scientist at the US Air Force Research Lab (AFRL), both in the Information Directorate in Rome, NY, and in the Aerospace Systems Directorate in Dayton, OH. He helped run Safe Sky Analytics, a research consulting company investigating verification and autonomous systems, and taught at Georgetown University before joining Stony Brook University as an assistant professor in Fall 2020.
Meeting Link: WebEx
Date: Monday, April 12th, 2021 @ 4PM EDT.
Talk Title: Fairness, Explainability, and Privacy in AI/ML Systems
Abstract: How do we develop machine learning models and systems that take fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI-based systems? Model fairness, explainability, and the protection of user privacy are considered prerequisites for building trust in, and driving adoption of, AI systems in high-stakes domains such as lending and healthcare, which require reliability, safety, and fairness.
We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications, from the societal, regulatory, customer, end-user, and model-developer perspectives. We will then focus on the application of fairness-aware ML, explainable AI, and privacy-preserving AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with key takeaways and open problems.
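As a concrete example of what fairness-aware ML can mean at the measurement level, here is a small sketch of two standard group-fairness metrics from the literature the talk surveys (illustrative only, not the speaker's own tooling); the labels, group attribute, and deliberately biased classifier below are synthetic.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-decision rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # binary sensitive attribute
y_true = rng.integers(0, 2, size=1000)  # ground-truth outcomes
# A deliberately biased classifier: more positive decisions for group 1.
y_pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(float)

print("demographic parity gap:", demographic_parity_diff(y_pred, group))
print("equal opportunity gap:", equal_opportunity_diff(y_true, y_pred, group))
```

Metrics like these are only the starting point; which notion of fairness is appropriate, and what to do when metrics conflict, are among the sociotechnical questions the talk addresses.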
Bio: Krishnaram Kenthapadi is a Principal Scientist at Amazon AWS AI, where he leads the fairness, explainability, and privacy initiatives for the Amazon AI platform. Until recently, he led similar efforts on the LinkedIn AI team and served as LinkedIn’s representative on Microsoft’s AI and Ethics in Engineering and Research Advisory Board. Previously, he was a Researcher at Microsoft Research, where his work resulted in product impact (and Gold Star / Technology Transfer awards) and several publications and patents. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft’s AI/ML conference. He has published 40+ papers with 2500+ citations and filed 140+ patents (30+ granted). He has presented lectures and tutorials on privacy, fairness, and explainable AI at KDD ’18 and ’19, WSDM ’19, WWW ’19 and ’20, FAccT ’20, and AAAI ’20.
Meeting Link: WebEx