The increasing use of machine learning in business opens up interesting legal questions. If your ML system makes errors that harm a customer, are you legally liable? Could you be negligent if you deploy an ML system that’s not thoroughly checked, even though the decisions that cause harm are made by a computer and not a human? Who is responsible for the actions of an inscrutable ML system when the unintended consequences of design decisions cannot be easily foreseen?
See also Interpretable and explainable models, Algorithmic fairness, The many-hands problem.
Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. doi:10.17351/ests2019.260
Not explicitly about machine learning, but about responsibility in the operation of semi-autonomous systems. The Three Mile Island accident and the crash of Air France Flight 447 were partially blamed on human error, since the machines functioned (mostly) as designed; but the design of the machines denied the humans the ability to understand the problems and react appropriately. We should rightly consider them failures of design. Similarly, when an Uber test vehicle killed a pedestrian while in autonomous mode, the safety driver was blamed for not paying attention to the road; but a system that requires a human to spend 99.9% of their time passively watching and 0.1% of their time making a critical life-and-death intervention is doomed to fail. The human is a moral crumple zone, absorbing the blame despite being set up to fail.
I see an analogy to the deployment of complex machine learning systems to aid human decision-making. If a human is aided by an inscrutable ML model, do we blame them when something goes wrong? Or do we blame the designers who relied on a human to catch problems, without providing the human any way to understand the system and detect errors?
Selbst, A. D. (2020). Negligence and AI’s human users. Boston University Law Review, 100, 1315–1376. https://ssrn.com/abstract=3350508
In tort law, your actions are negligent if they cause injury to someone else that a reasonable person could have foreseen, even if you did not intend the injury. (Basically “well, you should have known!” in law.) But suppose you make a fancy machine learning system for decision support, say for helping doctors diagnose cases. If the ML system does something wrong, can anyone be held responsible? That would be difficult:
Because machine learning’s whole purpose is to detect patterns humans would not notice, it is very hard to tell in advance that the ML system is wrong. How were you supposed to know that these five million neural network weights would lead to a misdiagnosis five years from now?
Humans are not good at working together with a system they must constantly supervise. Drivers of mostly-but-not-completely autonomous cars find it difficult to exercise adequate supervision of the vehicle in case it does something wrong. If a doctor can trust a diagnosis system most of the time, but has to be constantly vigilant for the few cases where it will misfire, is it reasonable to expect the doctor to do a good job?
Also, computers are insecure, so the effects of security flaws are among the harms you would be expected to foresee.
Fundamentally, these problems are difficult to solve. Even an interpretable and explainable model can find relationships that a human cannot easily check, making its harms difficult to foresee. Selbst suggests that liability law can’t solve this on its own, and that some kind of regulation of ML may be necessary.
Páez, A. (2021). Negligent algorithmic discrimination. Law and Contemporary Problems, 84(3), 19–33. https://scholarship.law.duke.edu/lcp/vol84/iss3/3
Choice quote:
The first point to consider is that we are discussing algorithmic discrimination in 2021, not in 2012. By now, there is plenty of evidence—some of it presented in Part II—that the models used in the different stages of hiring decisions are very likely to be biased. In a sense, discrimination has become foreseeable by default, thus making these systems intrinsically harmful.
In cases like hiring, where algorithmic discrimination is well studied and widely documented, Páez argues that using a biased algorithm should itself count as negligence. The existing legal standards of disparate treatment and disparate impact are hard to prove: disparate treatment requires showing the discrimination is intentional, while disparate impact requires showing that the practice disadvantages a protected group and is not necessary for legitimate hiring reasons.
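For the statistical half of a disparate impact claim, the usual first screen is to compare selection rates across groups; the EEOC’s four-fifths rule treats a ratio below 0.8 as evidence of adverse impact. The sketch below only illustrates that screen with made-up numbers (it is not from Páez’s paper), and it says nothing about the harder legal questions of intent or business necessity.

```python
# A rough sketch of the statistical screen often used in disparate impact
# analysis: compare selection rates across groups and apply the EEOC
# "four-fifths" rule of thumb. All numbers here are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return rate_protected / rate_reference

# Hypothetical hiring outcomes from an algorithmic screening tool.
rate_protected = selection_rate(selected=50, total=400)   # 12.5%
rate_reference = selection_rate(selected=90, total=400)   # 22.5%

ratio = disparate_impact_ratio(rate_protected, rate_reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.56

# Under the four-fifths rule of thumb, a ratio below 0.8 is treated as
# evidence of adverse impact. Proving a legal violation requires more than
# this statistical screen (e.g., rebutting a business-necessity defense).
if ratio < 0.8:
    print("Selection rates suggest adverse impact on the protected group.")
```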