I am usually frustrated by discussions of privacy, which tend to treat it as an end in itself, or as beneficial only to people who have “something to hide.” But in discussions about, say, government surveillance programs, privacy isn’t about hiding things – it’s a check on government power. In pithy terms: you don’t get to decide whether you have something to hide. The people invading your privacy do, and their decision can have all sorts of negative consequences for you.
This also explains why invasions of privacy are harmful even if they are secret: secret surveillance still represents unchecked government power, making unaccountable secret decisions. Think of Kafka’s The Trial, not Orwell’s 1984.
On the commercial side, privacy is discussed mainly as protecting private data: certain kinds of data are “private”, and releasing them is bad. Anonymization, consent, and other measures are used to protect private data. But this framing does not address what the harms of privacy violations really are, or what it means for companies to use data to make decisions about us.
See also Surveillance capitalism, Interpretable and explainable models, Algorithmic due process, Online advertising, and Predictive policing on policing and privacy (in the form of Fourth Amendment searches).
Phillip Rogaway’s The Moral Character of Cryptographic Work is a good argument for defending privacy against mass surveillance.
James Q. Whitman, “The Two Western Cultures of Privacy: Dignity Versus Liberty”, 113 Yale Law Journal 1151 (2004). http://www.yalelawjournal.org/article/the-two-western-cultures-of-privacy-dignity-versus-liberty
There is a divide in conceptions of privacy between America and Europe, explored in a surprisingly lucid (for a law review) article. Whitman points out that in Europe, privacy is largely about dignity: the right to control your own public image and to be free from insult or disparagement. This means, for example, that nude models have privacy rights in photographs of them, and may refuse their publication, even if the photographer clearly holds the copyright in the photographs. Similarly, credit reporting agencies exist in Europe in very limited form compared to America, since financial matters are nobody else’s business unless you are bankrupt or in default. Americans, on the other hand, largely conceive of privacy as protection against government interference.
(I can see a connection here between American and European views on copyright, particularly with the European notion of “author’s rights”, which extend beyond mere property rights to an inherent right of authors to control their work. See my review of The Public Domain; see also Copyright and intellectual property.)
Daniel J. Solove, “A Taxonomy of Privacy”, 154 University of Pennsylvania Law Review 477 (2006). https://ssrn.com/abstract=667622
Solove’s attempt to categorize the specific harms that arise from violations of privacy, ranging from surveillance to aggregation to disclosure to decisional interference. Some of the ideas lead to his next paper, below, on privacy as a check on power. Solove gives coherent arguments for why the typical legal treatment of privacy – once something is in public, it is no longer private, and there are no restrictions at all on its dissemination – is wrong, and why the harms of privacy violations are more complex.
Daniel J. Solove, “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy”, 44 San Diego Law Review 745 (2007). https://ssrn.com/abstract=998565
Takes the privacy-as-liberty argument to perfection. Solove also wrote a book, Nothing to Hide, but I found it disappointingly oversimplified, with minimal discussion of opposing views or in-depth analysis of the issues.
Danielle Keats Citron and Daniel J. Solove, “Privacy Harms”, 102 Boston University Law Review 793 (2022). https://ssrn.com/abstract=3782222
To sue in a federal court, you must establish standing; to establish standing, you must show you have been harmed. This is a challenge when suing over privacy violations, as it is difficult to show a concrete harm that is analogous to harms in other legal contexts (like injury or financial loss). Citron and Solove give a taxonomy of harms that can be caused by privacy violations (mostly by the release of private information), and summarize the case law for each.
(It is peculiar that what could be philosophical consideration of the meaning of privacy is instead published in a law review with the sole motivation of addressing limits in the types of lawsuit that can be brought in federal court. But that is the nature of legal scholarship.)
Daniel J. Solove, “The Myth of the Privacy Paradox”, 89 George Washington Law Review (2021). https://ssrn.com/abstract=3536265
The “privacy paradox” is that people simultaneously say they greatly value privacy and hand out their personal information freely to websites and companies. This has been used to argue either that (a) privacy regulation is based on a falsehood, because people don’t actually value privacy, or that (b) people are manipulated into making poor privacy decisions. Solove argues that the paradox is not paradoxical. Handing out a personal detail to a website in a behavioral economics study is not a measure of how much you value privacy; privacy is not synonymous with secrecy. The value of privacy depends on who you’re sharing data with and what they will do with it, and people generally do not have much information on how their data will be used or who it will be shared with. (Many people believe that sharing of personal information is much more restricted than it actually is.)
Helen Nissenbaum, Privacy in Context, Stanford University Press (2009).
Proposes the theory of “contextual integrity”, that privacy depends on “context-relative informational norms.” To determine if some new technology or policy threatens privacy, we must determine the context it affects, the existing norms of information flow in that context, the values motivating those norms, and how the new policy would affect them. This is about information flow; Nissenbaum denies that individual pieces of information are public or private, insisting that norms instead govern how information flows between people. Some things we will share with our doctors but not with the person next to us on the airplane.
Contextual integrity does not provide a single test for whether some new thing is bad because it violates privacy, but it does point out that information is not public simply because it has been shared, and that whom information is shared with, and for what purpose, is as relevant to privacy as the nature of the information itself.
I suppose this fits well with Solove’s argument: much of the harm of privacy violations comes from how information can be used, not from the revealing of information on its own.
Salomé Viljoen, “A relational theory of data governance”, 131 Yale Law Journal 573 (2021). https://ssrn.com/abstract=3727562
Privacy is usually discussed as an individual right: data about you can be used against you; data about you can be leaked or disclosed or used to embarrass you. Hence people propose solutions based on individual rights: requiring consent, treating data as property, requiring financial compensation for data use, and so on. But collection of my data affects you: Amazon collects my transaction data to build models that get used to sell to you more effectively; the government tracks my messages to know if you’re a criminal; Facebook tracks my Internet activity to better sell ads to you. Such is the nature of building models on large datasets. So any discussion of privacy rights must recognize that they are collective, not individual, and individual solutions aren’t enough.
There is also an argument here about data reinforcing inequality, and about “data as a democratic medium” as a solution. Unfortunately it’s hard to track the argument, since the writing could benefit greatly from a visit by the ghost of Fred Rodell.
Solon Barocas and Helen Nissenbaum, “Big Data’s End Run around Anonymity and Consent”, in Privacy, Big Data, and the Public Good: Frameworks for Engagement (Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum, eds.), pp. 44-75 (2014).
Anonymity and consent are frequently used to protect privacy. But anonymity is not adequate. Yes, people can be reidentified from supposedly anonymous data, but that’s not the big problem: the big problem is that you can use data to make decisions about people without ever having to know their names. The value of anonymity is that it limits your reachability, i.e. the ability to impose consequences on you or connect your actions back to you; but if an advertiser or data broker can make decisions about you without ever knowing your name or Social Security number, you are reachable even if you are anonymous. Data can be used to make decisions about you even if the data is not about you, such as advertising models based on data from other shoppers. That also implicates informed consent: how can you consent to the collection of data that affects other people? And if data-sharing is complicated, ongoing, and opaque, involving all kinds of intermediaries and brokers and exchanges, can informed consent be meaningful?
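To make “reachable while anonymous” concrete, here is a minimal hypothetical sketch; the cookie IDs, profile fields, and scoring rule below are invented for illustration and are not from the paper:

```python
# Hypothetical: a consequential decision keyed to an opaque cookie ID.
# No name or Social Security number is ever collected, yet the decision
# still lands on a particular person -- they are "reachable" anyway.

profiles = {
    "cookie:7f3a9c": {"payday_loan_site_visits": 12, "zip": "63106"},
    "cookie:b81e02": {"payday_loan_site_visits": 0, "zip": "63105"},
}

def choose_offer(profile: dict) -> str:
    """Pick which credit ad to show. The threshold stands in for a model
    trained on *other* shoppers' data, so the decision about this person
    rests on data that is not about them at all."""
    risky = profile["payday_loan_site_visits"] > 5
    return "high-interest subprime offer" if risky else "prime card offer"

for cookie_id, profile in profiles.items():
    print(cookie_id, "->", choose_offer(profile))
```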
Danielle Keats Citron, “Sexual Privacy”, 128 Yale Law Journal 1870 (2019). https://ssrn.com/abstract=3233805
A discussion of issues like sextortion, leaked nude photos, hidden cameras, “deep fake” videos, and other unwanted disclosures of sexual or intimate information. Makes the argument that because intimacy and sex are so core to our identities, personal control over the sharing of sexual information is essential, and privacy allows freedom in our personal lives to explore our identities without fear of shame or retribution. Suggests legislative fixes to ban common violations of sexual privacy.

[To read] M. Ryan Calo, “The Boundaries of Privacy Harm”, 86 Indiana Law Journal 1131 (2011). https://ssrn.com/abstract=1641487
How do people actually behave when making decisions about privacy?
Richard Posner, “Privacy, Surveillance, and Law”, 75 University of Chicago Law Review 245 (2008). http://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=5655&context=uclrev
A contrary perspective, making an ultimately unconvincing argument that warrantless surveillance is necessary for effective counterterrorism; I think detecting terrorists from mass Internet taps and surveillance is an intractable classification problem – terrorists are so rare that even a very accurate classifier would bury the true positives under false alarms – and that terrorism is an overblown threat.
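A back-of-the-envelope version of that base-rate problem, with made-up but plausible numbers (none of these figures are Posner’s):

```python
# Hypothetical base-rate arithmetic: even a very accurate classifier
# drowns in false positives when the target class is vanishingly rare.

population = 300_000_000     # people under surveillance (assumed)
terrorists = 300             # assume one in a million
sensitivity = 0.99           # fraction of actual terrorists flagged
false_positive_rate = 0.01   # fraction of innocent people flagged

true_positives = terrorists * sensitivity                          # ~297
false_positives = (population - terrorists) * false_positive_rate  # ~3,000,000

print(f"Innocents flagged per terrorist flagged: {false_positives / true_positives:,.0f}")
# ~10,101 -- nearly everyone flagged is innocent.
```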
Jack M. Balkin, “Information Fiduciaries and the First Amendment”, 49 UC Davis Law Review 1183 (2016). http://ssrn.com/abstract=2675270
Jack M. Balkin, “The Fiduciary Model of Privacy”, 134 Harvard Law Review Forum 11 (2020). https://ssrn.com/abstract=3700087
Summarized in an article in The Atlantic. Argues that regulating the use and disclosure of private data by companies usually violates the First Amendment – you can’t prevent companies from saying true things about their customers. Suggests instead making companies “information fiduciaries”: just as your doctor, attorney, or accountant has professional obligations to act in your best interest and keep your information private, Facebook could have an obligation to act as a fiduciary with your data. Congress can regulate the speech of fiduciaries because their interaction with you is not part of public discourse, but an unequal relationship in which the fiduciary has knowledge or expertise you do not.
This would apply both when companies represent themselves as trustworthy and simply because of the business they’re in. It would also overcome the third-party doctrine, because we do have a reasonable expectation of privacy in information we share with a fiduciary. To motivate businesses to voluntarily become information fiduciaries, the federal government could preempt state privacy laws for fiduciaries, so that becoming a fiduciary removes the need to comply with fifty different conflicting state rules.
The second article, a reply to Khan and Pozen’s critique of Balkin’s idea, has the best part:
I suspect that what Khan and Pozen are really getting at is that fiduciary obligations to end users will require today’s digital corporations to change their existing business models. By (finally) putting end users first, they will have to put corporate profits second. Khan and Pozen argue that advocates of the fiduciary model have not been sufficiently candid about this. (They do not accuse me of any lack of candor.) They tell us that their critique will be a “(partial) success” if advocates make clear that making digital businesses information fiduciaries will require “sacrificing stockholders’ economic interests to advance users’ noneconomic interests” in privacy. To which I can only reply: Congratulations, your article is a success!
Neil Richards and Woodrow Hartzog, “Taking Trust Seriously in Privacy Law”, 19 Stanford Technology Law Review 431 (2016).
An argument to make trust key to privacy law. For instance, when we disclose information to friends, we expect they will be discreet – they will not share it with anyone and everyone. Similarly, there should be a legal expectation that when we share data with a company, they should be discreet with it. Most interesting is the expectation of loyalty. A company should be loyal to the people who give it data, meaning it should not use that data against them – to manipulate or take advantage of them, for instance.
Neil Richards and Woodrow Hartzog, “A Duty of Loyalty for Privacy Law”, 99 Washington University Law Review 961 (2021).
An expansion of the privacy-as-trust argument, arguing that the law should treat privacy as requiring loyalty: someone who collects data about you has an obligation to be loyal to your interests. Works through the details of the specific rules that would be needed to make such a law work.
Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma”, 126 Harvard Law Review 1880 (2013). https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/
“Privacy self-management” refers to rules giving individuals control over their privacy by requiring them to consent to the collection and use of data. Solove contends that “Privacy self-management does not provide people with meaningful control over their data”, because (a) it is very difficult to make rational decisions about privacy, (b) there are so many entities collecting and using data that you could never have time to manage them all, (c) many privacy harms come from aggregation of data rather than individual data collection, and (d) privacy has social benefits as well as individual benefits. Paternalism is not the answer, because consent is important and people may legitimately make different decisions about their privacy; Solove proposes more careful regulations.
David C. Gray and Danielle Citron, “The Right to Quantitative Privacy”, 98 Minnesota Law Review 62 (2013). http://ssrn.com/abstract=2228919
Proposes a different test for Fourth Amendment violations: instead of asking “how much data did you collect about this specific person?”, ask “could this technology facilitate broad and indiscriminate surveillance if left unchecked?” If so, Fourth Amendment protections should apply, even if you only use the technology in a specific case for something very minor.
Kevin Bankston and Ashkan Soltani, “Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones”, 124 Yale Law Journal Online 335 (2014). http://www.yalelawjournal.org/forum/tiny-constables-and-the-cost-of-surveillance-making-cents-out-of-united-states-v-jones
An interesting practical approach to the “reasonable expectation of privacy” test. New surveillance technologies should be compared to previous technologies by the cost required to acquire information about suspects, and “if the new tracking technique is an order of magnitude less expensive than the previous technique, the technique violates expectations of privacy and runs afoul of the Fourth Amendment.”
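As a minimal sketch of how the proposed test would apply, with hypothetical dollar figures (the paper derives real cost estimates; these placeholders are mine):

```python
# The Bankston-Soltani rule of thumb: if a new technique is an order of
# magnitude cheaper per hour of tracking, treat it as a Fourth Amendment
# search. Dollar figures below are hypothetical placeholders.

def violates_expectations(old_cost_per_hour: float, new_cost_per_hour: float) -> bool:
    """True if the new technique is at least 10x cheaper than the old one."""
    return old_cost_per_hour >= 10 * new_cost_per_hour

# Hypothetical: a multi-officer covert tail vs. a GPS tracker on the car.
print(violates_expectations(old_cost_per_hour=250.0, new_cost_per_hour=1.0))  # True
```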