See also Algorithmic fairness, Interpretable and explainable models, Privacy.
Kate Crawford and Jason Schultz, “Big data and due process: Toward a framework to redress predictive privacy harms”, 55 Boston College Law Review 93 (2014). http://lawdigitalcommons.bc.edu/bclr/vol55/iss1/4/
Proposes “a right to procedural data due process” while adorably capitalizing “Big Data”. Points out the mismatch between current privacy law and predictive methods: as in the famous Target story, where Target guessed a customer was pregnant from her purchasing patterns, sensitive information can be inferred rather than requested from the user. This connects with Solove’s conception of privacy: because companies and governments can make decisions using inferred private data, consumers and citizens should have a right to examine the data and models justifying those decisions, and to appeal to have them corrected if necessary. For some decisions (credit checks, job offers, etc.) the consumer has an obvious opportunity to seek redress; for others (ad targeting) there’s no obvious moment when a decision has been made about them, so an agency like the FTC would need to exercise oversight instead.
It would be very interesting to see this right applied to typical Silicon Valley startups, which are seat-of-the-pants operations unlikely to slow down long enough for proper due process.
Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions”, 89 Washington Law Review 1 (2014). https://ssrn.com/abstract=2376209
Gives examples of real harms from predictive scores, including a credit card company adjusting customers’ credit risk “because they used their cards to pay for marriage counseling, therapy, or tire-repair services”. Using credit scores as its central example, explores the need for due process and regulatory oversight, including rights to inspect the data companies hold about you, dispute inaccurate data, and review predictive algorithms. Argues that “scoring systems should be subject to licensing and audit requirements when they enter critical settings like employment, insurance, and health care”, and that the FTC should be empowered to review scoring algorithms. Companies should also provide tools so individuals can see how their score would change under various conditions (a what-if tool of the kind sketched below), something like the explainability requirements explored by Selbst and Barocas (2018). (See Interpretable and explainable models.)
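Citron and Pasquale don’t specify an implementation for such what-if tools, but a minimal sketch is easy to imagine: hold an individual’s record fixed, vary one feature, and report how the model’s score responds. Below is a hypothetical Python illustration; the random forest, the synthetic data, and all feature values are invented stand-ins, not any real credit-scoring system.

```python
# Hypothetical what-if probe: show how a score would change under various
# conditions. The scoring model and data here are toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training data: three invented features standing in for credit history.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def what_if(model, applicant, feature, values):
    """Print the model's score for each hypothetical value of one feature."""
    for v in values:
        modified = applicant.copy()
        modified[feature] = v
        score = model.predict_proba(modified.reshape(1, -1))[0, 1]
        print(f"if feature {feature} were {v:+.1f}, score would be {score:.2f}")

applicant = np.array([0.2, -1.0, 0.5])
what_if(model, applicant, feature=1, values=[-2.0, -1.0, 0.0, 1.0, 2.0])
```

Even something this simple would let an applicant test claims like “my score would improve if I paid down this account”, which is the kind of transparency the paper argues scoring systems owe the people they score.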
Bryce Goodman and Seth Flaxman, “European Union regulations on algorithmic decision-making and a ‘right to explanation’”, ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). https://arxiv.org/abs/1606.08813
Summarizes the EU General Data Protection Regulation, scheduled to take effect in 2018, which adds a “right to explanation”: people subject to automated profiling have a right to “meaningful information about the logic involved.” This doesn’t go so far as to create due process rights, but it does suggest challenges for businesses using machine learning: how do you explain the output of a random forest to an arbitrary person, who may have no technical knowledge at all? Can you justify its decisions?
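The paper doesn’t propose an explanation method, but as a hypothetical illustration of the difficulty, consider one crude local attribution: replace each feature of a single record with its training-set average and watch how the forest’s score moves. Everything below (the synthetic data, the feature names, the model) is an invented example, not a method from the paper.

```python
# Hypothetical local explanation for one random-forest prediction, via
# mean-substitution. Data and feature names are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt", "tenure", "inquiries"]  # invented labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain(model, X_train, instance):
    """Substitute each feature with its training mean and report how much
    the predicted score changes; larger deltas suggest larger influence."""
    baseline = model.predict_proba(instance.reshape(1, -1))[0, 1]
    print(f"model score: {baseline:.2f}")
    for i, name in enumerate(feature_names):
        neutral = instance.copy()
        neutral[i] = X_train[:, i].mean()
        delta = baseline - model.predict_proba(neutral.reshape(1, -1))[0, 1]
        print(f"  {name}: {delta:+.2f}")

explain(model, X, X[0])
```

Even this only yields a list of numeric deltas; turning those into “meaningful information about the logic involved” for someone with no technical knowledge is exactly the challenge the paper points to.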