This talk was part of The Multifaceted Complexity of Machine Learning.
Performative Prediction
Moritz Hardt, University of California, Berkeley
Tuesday, April 13, 2021
Abstract: When predictive models support decisions, they can influence the very outcome they aim to predict. We call such predictions performative: the prediction influences the target. Performativity is a well-studied phenomenon in policy-making that has so far been neglected in supervised learning. When ignored, performativity surfaces as undesirable distribution shift, which is routinely addressed by retraining.
In this talk, I will describe a risk minimization framework for performative prediction, bringing together concepts from statistics, game theory, and causality. A new element is an equilibrium notion called performative stability. Performative stability implies that predictions are calibrated not against past outcomes, but against the future outcomes that manifest from acting on the prediction.
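The abstract does not spell out the formal objects, but the setup can be sketched as follows; the distribution map \(\mathcal{D}(\theta)\), the loss \(\ell\), and the notation below are assumptions filled in for concreteness, following the standard formulation of performative prediction rather than anything stated above.

\[
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\bigl[\ell(Z;\theta)\bigr]
\qquad \text{(performative risk: deploying } \theta \text{ induces the distribution } \mathcal{D}(\theta)\text{)}
\]
\[
\theta_{\mathrm{PS}} \;\in\; \arg\min_{\theta}\; \mathbb{E}_{Z \sim \mathcal{D}(\theta_{\mathrm{PS}})}\bigl[\ell(Z;\theta)\bigr]
\qquad \text{(performative stability)}
\]

In words, a performatively stable point is a fixed point of retraining: the deployed model minimizes risk on the very distribution its own deployment brings about, which is the sense in which its predictions are calibrated against future rather than past outcomes.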
I will then discuss recent results on performative prediction, including necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss.
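As a toy illustration of the retraining dynamic, not the talk's actual construction, the sketch below deploys a scalar prediction, lets the outcome distribution shift in response, and retrains on the induced data. The Gaussian distribution map and the names MU, EPSILON, sample_outcomes, and retrain are assumptions made purely for this example.

import numpy as np

# Minimal sketch of repeated retraining (repeated risk minimization) in a
# performative setting: deploying a prediction theta shifts the outcome
# distribution to N(MU + EPSILON * theta, 1). Toy construction, assumed
# here for illustration only.

rng = np.random.default_rng(0)

MU = 1.0        # base mean of the outcome before any model is deployed
EPSILON = 0.4   # sensitivity of the distribution to the deployed model
N_SAMPLES = 50_000

def sample_outcomes(theta, n=N_SAMPLES):
    """Draw outcomes from the distribution induced by deploying theta."""
    return rng.normal(MU + EPSILON * theta, 1.0, size=n)

def retrain(theta):
    """One retraining step: minimize squared loss on data collected under
    the currently deployed model, i.e. take the sample mean."""
    return sample_outcomes(theta).mean()

theta = 0.0  # initial deployed prediction
for t in range(15):
    theta = retrain(theta)

# For this toy distribution map the performatively stable point has a
# closed form, theta_PS = MU / (1 - EPSILON); retraining contracts toward
# it whenever EPSILON < 1.
theta_ps = MU / (1 - EPSILON)
print(f"after retraining: {theta:.3f}, stable point: {theta_ps:.3f}")

With EPSILON = 0.4 each retraining step is a contraction, and the iterates settle near MU / (1 - EPSILON) ≈ 1.67; pushing the sensitivity to 1 or beyond makes retraining overshoot or diverge, which gives a flavor of why the convergence of retraining requires conditions of the kind discussed in the talk.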