Machine learning increasingly supports consequential decisions in domains including health, employment, and criminal justice. Consequential decision making is inherently dynamic: individuals, their outcomes, and entire populations can change and adapt in response to classification. Traditional machine learning, however, fails to account for such dynamic effects.
In this talk, I will highlight three different vignettes of dynamic decision making. The first is about how classification changes populations and how this perspective is essential to questions of fairness in machine learning. The second is about how classification incentivizes individuals to adapt strategically. The third is about how predictions are often performative, that is, they influence the very outcome they aim to predict.
I will end by sketching the contours of a theory that unifies these three settings, along with its connections to questions in causality, control theory, economics, and sociology.
Biography: Moritz Hardt is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Hardt investigates algorithms and machine learning with a focus on reliability, validity, and societal impact. After obtaining a PhD in Computer Science from Princeton University, he held positions at IBM Research Almaden, Google Research, and Google Brain. Hardt is a co-founder of the Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and a co-author of the forthcoming textbook "Fairness and Machine Learning". He has received an NSF CAREER award, a Sloan fellowship, and best paper awards at ICML 2018 and ICLR 2017.