Algorithmic fairness conventionally refers to fairness in the use of algorithms to make decisions about people: the bias that may arise when data is used to decide who gets loans, who gets jobs, who gets bail, and so on. Fairness in these applications is seen as an ethical requirement, and the challenge is adequately defining “fairness” and finding methods to build models that satisfy those definitions.
But I think many of the same fairness concerns can apply to much more mundane algorithms. For instance, algorithmic feed filtering on sites like reddit and Twitter results in some posts being more widely viewed than others; by optimizing for user engagement, these algorithms reward content that stimulates certain kinds of engagement; and because different political opinions tend to stimulate different kinds of engagement, this naturally promotes certain political views. Or, in other words, the medium influences the message, even if the designers of the medium (software engineers and data scientists) have no political motive whatsoever, or indeed any awareness that their work has such effects.
That’s not to say that Twitter raises the same ethical problems as an algorithm making bail decisions, just that some of the same tools for defining and studying fairness may apply to it.
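A minimal sketch of the feed-filtering mechanism, with made-up engagement rates and a toy ranker rather than any real platform’s algorithm: the ranker knows nothing about content or politics, yet a small difference in how much engagement two kinds of posts provoke produces a large difference in visibility at the top of the feed.

```python
import random

random.seed(0)

# Hypothetical setup: two kinds of posts that differ only in the
# engagement they tend to provoke. The 0.12/0.08 rates are invented.
POSTS = [{"kind": "provocative", "p_engage": 0.12} for _ in range(500)] + \
        [{"kind": "measured", "p_engage": 0.08} for _ in range(500)]

def engagement_score(post, n_trials=200):
    """Estimate expected engagement from simulated user interactions."""
    return sum(random.random() < post["p_engage"] for _ in range(n_trials))

# The ranker just sorts by observed engagement, as an
# engagement-optimizing feed would; politics never enters the code.
ranked = sorted(POSTS, key=engagement_score, reverse=True)

top_100 = ranked[:100]
share = sum(p["kind"] == "provocative" for p in top_100) / 100
print(f"Provocative share of the top 100 slots: {share:.0%}")
```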
See also Privacy, Algorithmic due process, Machine learning and law, Predicting recidivism.
Barocas, S., Hardt, M., & Narayanan, A. (2020+). Fairness and machine learning: Limitations and opportunities. A work-in-progress textbook.
Plečko, D., & Bareinboim, E. (2024). Causal fairness analysis: A causal toolkit for fair machine learning. Foundations and Trends in Machine Learning, 17(3), 304–589. doi:10.1561/2200000106
[To read.] A causal interpretation of the legal concepts of disparate treatment and disparate impact, using counterfactuals to define them. (For instance, disparate treatment implies a direct causal arrow from the protected attribute to the outcome; disparate impact requires only a causal path, however indirect.) This is essentially a book (289 pages!), not an article, and explores the causal view of fairness in depth.
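On that reading, the two concepts map onto simple graph queries. A toy illustration of the distinction, using a hypothetical causal graph and `networkx` (this is my gloss of the idea, not Plečko and Bareinboim’s formal counterfactual machinery):

```python
import networkx as nx

# Hypothetical causal graph: A is the protected attribute, Y the
# outcome. A -> Z -> Y is an indirect path (say, through zip code);
# a direct edge A -> Y would correspond to disparate treatment.
g = nx.DiGraph([("A", "Z"), ("Z", "Y"), ("X", "Y")])

def disparate_treatment(graph, attr="A", outcome="Y"):
    """Direct causal arrow from the protected attribute to the outcome."""
    return graph.has_edge(attr, outcome)

def disparate_impact(graph, attr="A", outcome="Y"):
    """Any directed causal path, however indirect."""
    return nx.has_path(graph, attr, outcome)

print(disparate_treatment(g))  # False: no direct edge A -> Y
print(disparate_impact(g))     # True: A -> Z -> Y

g.add_edge("A", "Y")           # introduce a direct effect
print(disparate_treatment(g))  # True
```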
Keyes, O., Hutson, J., & Durbin, M. (2019). A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended abstracts of the 2019 CHI conference on human factors in computing systems (CHI EA '19). Association for Computing Machinery. doi:10.1145/3290607.3310433
On why fairness (along with accountability and transparency) cannot be your only ethical consideration in algorithm design.
Huszár, F., Ktena, S. I., O’Brien, C., Belli, L., Schlaikjer, A., & Hardt, M. (2022). Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences, 119(1), e2025334119. doi:10.1073/pnas.2025334119
“Our results reveal a remarkably consistent trend: In six out of seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the US media landscape revealed that algorithmic amplification favors right-leaning news sources. We further looked at whether algorithms amplify far-left and far-right political groups more than moderate ones; contrary to prevailing public belief, we did not find evidence to support this hypothesis.”
Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A. F., & Meira, W. (2020). Auditing radicalization pathways on YouTube. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 131–141). ACM. doi:10.1145/3351095.3372879
Studies alt-right YouTube channels and videos, how they appear in recommendations, and which users comment on their videos. Finds evidence that some users gradually migrate to more extreme videos, and that YouTube recommendations include alt-right and extremist videos, although the data isn’t sufficient to say such videos are disproportionately recommended.