Supervised machine learning relies on the fundamental assumption that data is sampled i.i.d. from the same distribution at train time and at test time. But in virtually any realistic application, this assumption is unlikely to hold. This seminar will survey papers that study when, how, and why learning algorithms (such as ERM) can fail when the assumption is violated. We will study various failure modes that stem from the different reasons why train and test distributions can differ, including: natural distribution drift, model-induced distribution shift, adversarial manipulation of inputs, and strategic behavior of self-interested users.

Learning outcomes

At the end of the course the students will be able to:

1. Identify different factors that can cause a learning system to fail, in particular in terms of distribution shift.
2. Analyze existing learning frameworks with the purpose of exposing possible failures.
3. Identify the main assumptions underlying current methods, both explicit and implicit, and determine their implications in terms of failures.
4. Propose adequate solutions to such failures, and reason about their pros and cons.
5. Present current academic literature from the field in a concise and critical manner.

Faculty: Computer Science

Prerequisite courses

46195 - Machine Learning or 96411 - Machine Learning 1 or 236756 - Introduction to Machine Learning


Semestrial Information