Algorithmic fairness conventionally refers to the use of algorithms to make decisions about people: the bias that may arise when data is used to decide who gets loans, who gets jobs, who gets bail, and so on. Fairness in these applications is seen as an ethical requirement, and the challenge is adequately defining “fairness” and finding methods to build models satisfying those definitions.
But I think many of the same fairness concerns can apply to much more mundane algorithms. For instance, algorithmic feed filtering on sites like Reddit and Twitter results in some posts being more widely viewed than others; by optimizing for user engagement, these algorithms reward content that stimulates certain kinds of engagement; and because different political opinions tend to stimulate different kinds of engagement, this naturally promotes certain political views (the toy simulation below sketches the mechanism). Or, in other words, the medium influences the message, even if the designers of the medium (software engineers and data scientists) have no political motive whatsoever, or indeed any awareness that their work has such effects.
That’s not to say that Twitter poses the same ethical problems as an algorithm making bail decisions, just that some of the same tools for defining and studying fairness may apply to it.
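To make the mechanism concrete, here’s a toy simulation – entirely my own construction, not any real platform’s ranking logic. Posts come in two hypothetical types that provoke different engagement styles, and a ranker that simply maximizes a weighted engagement score ends up systematically over-representing one type, even though nothing political appears anywhere in the objective:

```python
# A minimal sketch (my toy model, not any real platform's ranking) of how an
# engagement-optimizing feed can amplify one kind of content without any
# political term in the ranking rule itself.
import random

random.seed(1)

# Two hypothetical content types with different engagement styles:
# type A tends to draw replies and quote-posts; type B tends to draw quiet likes.
POSTS = [{"id": i, "type": random.choice("AB")} for i in range(1000)]

def engagement(post):
    """Simulate engagement counts. The *rates* differ by type, standing in
    for the fact that different opinions provoke different reactions."""
    if post["type"] == "A":
        replies, likes = random.gauss(8, 2), random.gauss(10, 3)
    else:
        replies, likes = random.gauss(2, 1), random.gauss(14, 3)
    return max(replies, 0), max(likes, 0)

def score(post):
    """An engagement-maximizing ranker that (plausibly) weights replies
    more heavily than likes, since replies keep users on the site longer."""
    replies, likes = engagement(post)
    return 3.0 * replies + 1.0 * likes

feed = sorted(POSTS, key=score, reverse=True)
top = feed[:100]
share_a = sum(p["type"] == "A" for p in top) / len(top)
base_a = sum(p["type"] == "A" for p in POSTS) / len(POSTS)
print(f"Type A is {share_a:.0%} of the top 100, vs {base_a:.0%} of all posts")
```

The point is only that any asymmetry between engagement styles is inherited by the ranking; which opinions actually benefit is an empirical question, as in the Huszár et al. paper below.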
See also Privacy, Algorithmic due process, Machine learning and law, Predicting recidivism.
Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press. https://fairmlbook.org
Keyes, O., Hutson, J., & Durbin, M. (2019). A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended abstracts of the 2019 CHI conference on human factors in computing systems (CHI EA ’19). ACM. doi:10.1145/3290607.3310433
On why fairness (and accountability, and transparency) cannot be your only ethical considerations in algorithm design.
Plečko, D., & Bareinboim, E. (2024). Causal fairness analysis: A causal toolkit for fair machine learning. Foundations and Trends in Machine Learning, 17(3), 304–589. doi:10.1561/2200000106. https://arxiv.org/abs/2207.11385
A causal interpretation of the legal concepts of disparate treatment and disparate impact, using counterfactuals to define them. (For instance, disparate treatment implies a direct causal arrow between the protected attribute and the outcome; disparate impact means there is a path, even an indirect one.) Explores criteria for fairness measures to be admissible and tries to show how you might estimate them. This is essentially a book (289 pages!), not an article, and explores the causal view of fairness in depth.
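As a gloss on that causal reading, the book works with counterfactual direct and indirect effects; the following is my simplified paraphrase of those quantities, with $X$ the protected attribute, $W$ the mediators, and $\hat{Y}$ the decision (see the book for the careful versions):

```latex
% Sketch of the counterfactual quantities (my simplified paraphrase;
% Plecko & Bareinboim's formal definitions are more refined).

% Disparate treatment ~ a nonzero counterfactual *direct* effect:
% change X while holding the mediators at their natural value.
\mathrm{Ctf\mbox{-}DE}_{x_0,x_1}(\hat{y} \mid x)
  = P\big(\hat{y}_{x_1,\, W_{x_0}} \mid x\big)
  - P\big(\hat{y}_{x_0} \mid x\big)

% Disparate impact ~ an effect along *any* causal path, e.g. the
% counterfactual *indirect* effect transmitted through the mediators W.
\mathrm{Ctf\mbox{-}IE}_{x_1,x_0}(\hat{y} \mid x)
  = P\big(\hat{y}_{x_1,\, W_{x_0}} \mid x\big)
  - P\big(\hat{y}_{x_1} \mid x\big)
```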
Hu, L. (2023). What is “race” in algorithmic discrimination on the basis of race? Journal of Moral Philosophy, 21(1–2), 1–26. doi:10.1163/17455243-20234369
A philosopher’s argument against definitions of fairness based on causal effects of the protected attributes, such as race. Namely: they require a clear definition of what the protected attribute is – e.g. a clear definition of race – and race is notoriously difficult to define. Argues for a “thick constructivist” definition that includes not just phenotype but the consequences of race’s role in society, “things like relations of privilege tied to whiteness and relations of subordination tied to Blackness.” Because these features are essential to race, “an algorithm that includes the ‘Race’ feature in its machine learning process and one that does not but does include features that exhibit non-accidental correlations tracking social facts constitutive of being raced R may both be discriminating on the basis of race R.”
I believe the causal interpretation of this argument is that certain effects of race – those involving social privilege and so on – should not be treated as mere mediators between race and the outcome, and hence legitimate to include in a decision-making process, but as part of race itself, and hence discriminatory to include. The normative question, then, is which features are part of race.
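Hu’s conclusion has a simple mechanical counterpart that is easy to demonstrate: drop the race column, keep a socially constituted proxy, and the model’s scores still split by race. A toy sketch, with synthetic data and feature names that are entirely my own:

```python
# A minimal sketch (synthetic data, my construction) of how omitting the
# race column does not stop a model from discriminating on the basis of
# race when correlated social features remain.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)                 # 0/1, purely illustrative
# A "neighborhood" feature standing in for socially constituted correlates
# of race (relations of privilege/subordination), strongly tied to it:
neighborhood = race + rng.normal(0, 0.3, n)
merit = rng.normal(0, 1, n)                  # race-independent signal

# Historical outcomes depend on merit AND on the race-laden feature:
y = (merit + neighborhood + rng.normal(0, 0.5, n)) > 1

# "Fairness through unawareness": regress y on everything except race.
X = np.column_stack([np.ones(n), merit, neighborhood])
beta, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
scores = X @ beta

print("mean score, race=0:", round(scores[race == 0].mean(), 3))
print("mean score, race=1:", round(scores[race == 1].mean(), 3))
# The gap persists: the model reconstructs race through its proxy.
```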
Huszár, F., Ktena, S. I., O’Brien, C., Belli, L., Schlaikjer, A., & Hardt, M. (2022). Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences, 119(1), e2025334119. doi:10.1073/pnas.2025334119
“Our results reveal a remarkably consistent trend: In six out of seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the US media landscape revealed that algorithmic amplification favors right-leaning news sources. We further looked at whether algorithms amplify far-left and far-right political groups more than moderate ones; contrary to prevailing public belief, we did not find evidence to support this hypothesis.”
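For reference, the paper’s headline metric is an amplification ratio comparing a tweet set’s reach under the personalized ranking to its reach in a held-out reverse-chronological control group. Roughly – this is my paraphrase, and the exact normalization is in their Methods:

```latex
% Rough gloss of the amplification metric, not the paper's exact formula:
a(T, U) \approx
  \frac{\text{reach of tweets } T \text{ among treatment users in } U}
       {\text{reach of tweets } T \text{ among control users in } U}
% with a(T, U) > 1 indicating algorithmic amplification of T.
```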
Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira, W. (2020). Auditing radicalization pathways on YouTube. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 131–141). ACM. doi:10.1145/3351095.3372879
Studies alt-right YouTube channels and videos, how they appear in recommendations, and which users comment on their videos. Finds evidence that some users gradually migrate to more extreme videos, and that YouTube recommendations include alt-right and extremist videos, although the data isn’t sufficient to say they are disproportionately recommended.