See also Policing, Predictive policing.
[To read] Zeng, J., Ustun, B., & Rudin, C. (2017). Interpretable classification models for recidivism prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society), 180(3), 689–722. doi:10.1111/rssa.12227
Uses a new method, “supersparse linear integer models”, to produce scoring rules that classify recidivism about as accurately as state-of-the-art black-box machine learning methods, while remaining interpretable.
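A hedged illustration of the kind of scoring rule such a method produces: a handful of features with small integer point values, summed and compared to a threshold. The features and point values below are invented for illustration and are not the model fitted in the paper.

```python
# Hypothetical scoring rule in the style of a supersparse linear integer model:
# small integer points for a few risk factors, summed and thresholded.
# Features, points, and threshold here are made up, not taken from the paper.

def risk_score(defendant):
    """Sum small integer points for each risk factor that applies."""
    score = 0
    score += 2 if defendant["prior_arrests"] >= 5 else 0
    score += 1 if defendant["prior_arrests"] >= 1 else 0
    score += 1 if defendant["age"] < 25 else 0
    return score

def predicted_to_recidivate(defendant, threshold=2):
    """Classify as high risk when the point total reaches the threshold."""
    return risk_score(defendant) >= threshold

if __name__ == "__main__":
    example = {"age": 22, "prior_arrests": 3}
    print(risk_score(example), predicted_to_recidivate(example))
```

The appeal of the format is that a judge or defendant can recompute the score by hand; the paper's contribution is fitting such integer-weighted rules without losing accuracy relative to black-box models.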
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In Proceedings of Innovations in Theoretical Computer Science. https://arxiv.org/abs/1609.05807
Defines three fairness criteria (computed on toy data in the sketch after this entry):
Calibration within groups: within each group, of the people assigned a risk score of p, roughly a p fraction should actually recidivate
Balance for the negative class: the average score of people who do not recidivate should be the same across groups
Balance for the positive class: the average score of people who do recidivate should be the same across groups
Then proves that these criteria can only be simultaneously achieved when either (a) perfect prediction is possible or (b) the groups have equal base rates.
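A minimal numerical sketch of the three criteria, computing each quantity per group on invented scores and outcomes (this is toy data, not anything from the paper):

```python
# Toy check of the three fairness criteria on invented data.
# Each record is (group, risk_score in [0, 1], recidivated: bool).
from collections import defaultdict
from statistics import mean

records = [
    ("A", 0.2, False), ("A", 0.2, False), ("A", 0.2, True),
    ("A", 0.8, True),  ("A", 0.8, True),  ("A", 0.8, False),
    ("B", 0.2, False), ("B", 0.2, True),  ("B", 0.2, False),
    ("B", 0.8, True),  ("B", 0.8, False), ("B", 0.8, True),
]

# Calibration within groups: among people in group g given score s,
# the observed recidivism rate should be approximately s.
by_group_score = defaultdict(list)
for g, s, y in records:
    by_group_score[(g, s)].append(y)
for (g, s), ys in sorted(by_group_score.items()):
    print(f"group {g}, score {s}: observed recidivism rate {mean(ys):.2f}")

# Balance for the negative class: mean score of non-recidivists, per group.
# Balance for the positive class: mean score of recidivists, per group.
for label, recidivated in [("negative class", False), ("positive class", True)]:
    for g in ("A", "B"):
        scores = [s for grp, s, y in records if grp == g and y == recidivated]
        print(f"{label}, group {g}: mean score {mean(scores):.2f}")
```

The impossibility result says that when base rates differ between groups and prediction is imperfect, no scoring scheme can make all three printed comparisons come out equal at once.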
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. doi:10.1145/3097983.3098095
Frames the problem as constrained optimization: maximize public safety, subject to fairness constraints. “We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants.” Using Broward County recidivism data and a simple logistic regression model for predicting recidivism, they show that enforcing statistical parity (“an equal proportion of defendants are detained in each race group”) costs a 9% increase in violent crime, and enforcing conditional parity (conditioning on “legitimate” risk factors like prior convictions) still costs a 4% increase.
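A sketch of the threshold framing on invented risk scores (not the Broward County analysis): the unconstrained rule applies one uniform threshold to everyone, while a statistical-parity constraint forces per-group thresholds chosen so that detention rates match.

```python
# Threshold-based detention rules, loosely following the paper's framing.
# Risk-score distributions below are made up; this is not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
# Two groups with different (synthetic) score distributions, i.e. different base rates.
scores = {"A": rng.beta(2, 5, 1000), "B": rng.beta(3, 4, 1000)}

# Unconstrained optimum (per the paper): a single, uniform threshold for all defendants.
uniform_t = 0.5
for g, s in scores.items():
    print(f"uniform threshold: group {g} detention rate {np.mean(s >= uniform_t):.2f}")

# Statistical parity: pick per-group thresholds so each group is detained at the
# same rate (here 30%), which is exactly a group-specific threshold rule.
target_rate = 0.30
for g, s in scores.items():
    t_g = np.quantile(s, 1 - target_rate)
    print(f"parity: group {g} threshold {t_g:.2f}, detention rate {np.mean(s >= t_g):.2f}")
```

The parity-constrained rule detains some lower-risk defendants in one group while releasing higher-risk defendants in the other, which is the mechanism behind the estimated increase in violent crime.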
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. doi:10.1126/sciadv.aao5580
Evaluates a common recidivism risk prediction tool, COMPAS, on data from 7,000 arrests in Florida released by ProPublica. Compares COMPAS’s accuracy to assessments from Mechanical Turkers given “the defendant’s sex, age, and previous criminal history, but not their race”, pooled by “a majority rules criterion”; the Turkers edged out COMPAS slightly. Also shows that a simple logistic regression gives similar accuracy, as does an SVM. The makers of COMPAS responded, claiming the paper mischaracterized the tool (it uses six risk factors, not 137) and accusing the paper of overfitting, though the paper reports using cross-validation.
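A minimal sketch of the kind of baseline the paper reports: a logistic regression on a couple of simple features (e.g. age and number of priors), evaluated with cross-validation. The data below is synthetic and the feature names are placeholders; reproducing the comparison would require the ProPublica/Broward County dataset.

```python
# Simple logistic-regression baseline scored with cross-validation,
# in the spirit of Dressel & Farid's comparison. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
# Synthetic outcome: younger defendants with more priors recidivate more often.
p = 1 / (1 + np.exp(-(0.15 * priors - 0.04 * (age - 40))))
y = rng.random(n) < p

X = np.column_stack([age, priors])
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Cross-validation of this kind is also the paper's answer to the overfitting charge: accuracy is measured on held-out folds rather than the training data.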