Performative Prediction
When predictive models support decisions, they can influence the outcome they aim to predict. We call such predictions performative: the prediction influences the target. A traffic forecast, for instance, shapes the very traffic patterns it tries to anticipate as drivers reroute in response to it. Performativity is a well-studied phenomenon in policy-making that has so far been neglected in supervised learning. When ignored, performativity surfaces as undesirable distribution shift, routinely addressed with retraining.
In this talk, I will describe a risk minimization framework for performative prediction that brings together concepts from statistics, game theory, and causality. A central new element is an equilibrium notion called performative stability. Performative stability implies that predictions are calibrated not against past outcomes, but against the future outcomes that manifest from acting on the prediction.
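For reference, here is a rough sketch of the two central objects as defined in the companion paper (Perdomo et al., 2020); the notation below is one common choice, with ℓ a loss function and D(θ) the distribution over instances induced by deploying the model θ:

```latex
% Performative risk: the loss of theta on the distribution
% that deploying theta itself induces.
\[
  \mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\,
    \ell(Z;\theta)
\]
% Performative stability: theta is optimal on the distribution
% induced by its own deployment, i.e. a fixed point of retraining.
\[
  \theta_{\mathrm{PS}} \;\in\;
    \operatorname*{arg\,min}_{\theta}\;
    \mathbb{E}_{Z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\,
    \ell(Z;\theta)
\]
```

A performatively stable point is thus exactly a fixed point of retraining: once deployed, no amount of further retraining on the data it generates will move the model.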
I will then discuss recent results on performative prediction, including necessary and sufficient conditions for retraining to converge to a performatively stable point of nearly minimal loss.
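To make the retraining dynamic concrete, here is a minimal simulation sketch. The linear model and the synthetic distribution map are hypothetical choices of mine, not the talk's; the parameter eps below plays the informal role of the distribution map's sensitivity, which the theory weighs against the curvature and smoothness of the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_data(theta, n=10_000, eps=0.3):
    """Synthetic decision-dependent distribution D(theta):
    acting on predictions perturbs the outcome in the direction
    of the deployed model (eps controls the strength of the effect)."""
    X = rng.normal(size=(n, 2))
    y = X @ np.array([1.0, -1.0]) - eps * (X @ theta) \
        + rng.normal(scale=0.1, size=n)
    return X, y

def fit_ridge(X, y, lam=1.0):
    """Regularized least squares; the ridge term keeps the loss
    strongly convex, which the convergence analysis relies on."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Repeated risk minimization: deploy theta_t, observe data drawn
# from D(theta_t), retrain to obtain theta_{t+1}.
theta = np.zeros(2)
for t in range(20):
    X, y = sample_data(theta)
    theta_next = fit_ridge(X, y)
    print(t, np.linalg.norm(theta_next - theta))
    theta = theta_next
```

For this toy map the update behaves like a contraction with factor roughly eps, so the printed step sizes shrink geometrically and retraining settles at a performatively stable point; making eps large enough breaks the contraction, mirroring the kind of sensitivity condition under which convergence can fail.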
Joint work with Juan C. Perdomo, Tijana Zrnic, and Celestine Mendler-Dünner.
Bio: Moritz Hardt is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Hardt investigates algorithms and machine learning with a focus on reliability, validity, and societal impact. After obtaining a PhD in Computer Science from Princeton University, he held positions at IBM Research Almaden, Google Research, and Google Brain. Hardt is a co-founder of the Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and a co-author of the forthcoming textbook "Fairness and Machine Learning". He has received an NSF CAREER award, a Sloan fellowship, and best paper awards at ICML 2018 and ICLR 2017.