The problem of many hands is a central problem for ethics and accountability in companies and organizations: If many people work together to do something, and as a result cause harm, who is responsible for that harm? Who can be held accountable when no one person’s contribution is directly responsible for the harm?
Similarly, there is the problem of many causes: If a project is large and extremely complicated, and causes harm because of an unexpected combination of factors nobody anticipated, who is responsible for that harm? When the harm has many causes, how can any one of them be singled out as the responsible cause?
These problems seem abstract, but I see them as key problems for data ethics. It’s difficult to ensure data is used ethically when no one person feels responsible for harms resulting from whatever big complicated data-based system they help build.
See also Machine learning and law.
Luban, D., Strudler, A., & Wasserman, D. (1992). Moral responsibility in the age of bureaucracy. Michigan Law Review, 90(8), 2348–2392. doi:10.2307/1289575
If you are part of a large, complicated organization involved in a large, complicated project, and that project results in immoral or illegal activities, how responsible are you? You may not have had complete information; you may not have known how subordinates would implement your orders or what superiors were up to; you may not have been informed of the consequences of actions. Argues that, nonetheless, you have five moral obligations. These obligations are like the legal concepts of reckless or negligent behavior: if you join an organization, fail to meet these obligations, and thus contribute to some morally wrong act, you are responsible for recklessly or negligently facilitating that act. This is not as bad as deliberately committing the act, but you are not blameless just because you didn’t know or couldn’t stop it; you should have known and should have tried to stop it. Your managers are also to blame for building an organizational culture that does not encourage these obligations.
I wonder if there is an analogy to safety: airline pilots and ship’s captains have final authority over their craft, so responsibility is not spread out but concentrated in one person who has the authority to act to prevent unsafe conditions. There is no “I didn’t know” excuse; you are responsible for finding out. This avoids the complicated system-failure cases where no one person seems responsible. Should there be a moral equivalent as well?
Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2, 25–42. doi:10.1007/BF02639315
“Firstly, holding people accountable for the harms or risks they bring about provides strong motivation for trying to prevent or minimize them. Accountability can therefore be a powerful tool for motivating better practices, and consequently more reliable and trustworthy systems.” But software systems are developed by teams who assemble libraries developed by other teams on operating systems developed by still other teams. Failures often have multiple overlapping causes. (Nissenbaum conflates the problems of many causes and many hands in this paper.)
It is easy to say that because software is complicated, bugs are inevitable. But as our understanding of software engineering expands, some types of bugs can be prevented – the best practices of the field expand, just like the best practices of medicine have expanded. And so a bug that would have been inevitable 50 years ago is a blameworthy failure today. But software manufacturers explicitly disclaim any liability (or accountability) in their license agreements. Nissenbaum suggests three ways to improve accountability:
Explicit standards of care. If professional organizations wrote explicit best-practice guidelines, as professional organizations in many other fields have done, we could hold accountable anyone who fails to follow the standards of care and hence causes a bug. A doctor who ignores standard hospital procedures can be held at fault for harm to a patient; if another doctor does everything possible to follow standard procedures but the patient is nonetheless harmed, we hold that doctor blameless.
Separate accountability from liability. Companies can shield individuals from liability, because companies can afford to be sued and individuals usually can’t. But we still need some way to hold the individuals accountable. Nissenbaum does not make the connection to professional organizations and licensing, but this seems like a natural use for professional licensing: if professional engineers, lawyers, and doctors can be held individually accountable (by being stripped of their licenses to practice) even without being individually liable, why can’t other professionals?
Consider imposing strict liability. “Strict liability” means the software producer is liable for all harms caused by their software, even if the software producer did nothing wrong (i.e. followed all standards of care and made no obvious mistakes). Strict liability is already imposed for some products simply because it gives an incentive for companies to be extremely careful. If bugs are extraordinarily common, is not extraordinary care called for?
I am sympathetic to the arguments about accountability, but would like to see them drawn to their logical conclusion. Professional licensing seems the appropriate way to impose accountability, and professional licensing organizations seem the right forum to develop standards of care. I think all the same arguments apply to algorithmic decision-making and other applications of machine learning that might harm people: just as software is complex and has unintended consequences, models often make unexpected mistakes and can be hard to understand. With no standards of care, we have no way to say someone was negligent when developing a biased algorithm; without licensing that can hold individuals accountable, there’s no reason for any one data scientist to feel they must take responsibility and stop a bad model before it is used.
Davis, M. (2012). “Ain’t no one here but us social forces”: Constructing the professional responsibility of engineers. Science and Engineering Ethics, 18(1), 13–34. doi:10.1007/s11948-010-9225-3
Attempts to dispose of several arguments against taking responsibility – the joke in the title being that, when asked who is responsible for some failure, an engineer might reply “Ain’t no one here but us social forces” to claim that nobody is responsible.
For the many hands problem: yes, outsiders may not know who is responsible, but those on the inside do. And in any event, someone can take responsibility despite not knowing how the ultimate decision came about. For the many causes problem, the resolution is more involved. First, engineers take on a minimum responsibility to investigate and find the causes. Second, engineers feel obligated to investigate errors and advance the state of the art so similar errors cannot recur. Third, once the problem is investigated, there are usually three ways to assign responsibility:
Operator error
Organizational failure (such as failure to train the operator, conduct maintenance, etc.)
Design flaw (such as insufficient redundancy or software bugs)
Often all three are valid causes of the problem: a design flaw led operators to make an error that training and maintenance should have prevented. Yes, this makes blame hard to assign, but engineering responsibility is not about blame: it is about fixing the problem. Engineers hold themselves responsible for fixing engineering problems, and so all of the many causes are in their domain.
I think this misses a key feature of being able to assign individual blame: it makes individuals feel they must take action to prevent a problem, i.e. it makes them feel the moral obligations described by Luban et al. (1992) above. But the discussion of engineering responsibility is interesting for the contrast it reveals with data science, which has no culture of learning from errors, of developing standards of practice, or of taking responsibility for preventing future errors.