- I. Introduction
- II. Forthrightness and Consumer Protection
- A. Harm
- B. Addressing Information Flow Problems
- C. Forthrightness Defined
- D. Privacy’s Words
- E. Forthrightness Applied
- III. Forthrightness and Artificial Intelligence
- IV. Conclusion
We want our software not to lie to us. The Federal Trade Commission and state attorneys general have long enforced laws prohibiting “deceptive acts or practices,” and recently, they have started bringing actions against companies for deploying “deceptive code” or “deceptive design.”
But as every parent knows, not lying is not the same thing as being forthright. Even when software isn’t deceptive, far too often it is still not as honest as it could be, giving rise to consumer harm, power imbalances, and a worrisome restructuring of society. With increasing and troubling frequency, software hides the full truth in order to control or manipulate us.
What if regulators and law enforcement agencies could mandate not just non-deceptiveness but also forthrightness from our code? Forthrightness would obligate companies to be completely honest, direct, and candid. Importantly, forthrightness would impose an affirmative obligation to warn rather than a passive obligation to inform. A forthright company will anticipate what a consumer does not understand because of cognitive biases, information overload, or other mechanisms that interfere with information comprehension, and will be obligated to communicate important information in a way that overcomes these barriers.
Under a forthrightness obligation, companies might be punished for deploying what are known as dark patterns, such as prominently displaying a button saying, “Yes, I agree,” while making the “No, I disagree” choice hard to spot and hard to click. We could begin to assess not only what a company said but also what a company concealed. It might become illegal to exploit a user’s known biases and vulnerabilities. This Essay makes the case for creating new laws to obligate developers to use forthright code.
Forthrightness will prove especially important to address looming new challenges from artificial intelligence (“AI”). AI systems are prone to produce discriminatory and unfair outputs, and it is hard to understand what is going on in these opaque and complex systems. Unlike calls for fairness or transparency, forthrightness will focus on the actions of the entity developing or using the AI system, rather than describe a desirable attribute of the system itself. Forthrightness will arise most importantly when AI systems make egregious mistakes. In these moments of failure and potential crisis, the forthright actor will try to fix the fundamental problem plaguing the system rather than hide the problem beneath a thin veneer of obfuscation.
This Essay proceeds in two parts. Part II defines forthrightness and examines the role it might play in consumer protection. Part III applies forthrightness to concerns about artificially intelligent systems.
II. Forthrightness and Consumer Protection
As members of a legal system and a broader society, we do a stunningly bad job of striking a sensible balance between private innovation and public welfare; we put most of the weight on the side of innovation (and economic growth) despite what it might mean for the well-being of individuals. We have sacrificed privacy—the area I primarily study—to support targeted advertising. We have created social networks that amplify lies and stoke division. We have subjected victims—including children—to harassment, cyberbullying, and cyberaddiction. We need to do better.
The problems I am focused on are multifaceted and multilayered, and I don’t expect any solution to begin to address them all. In this Essay, I will argue that we should enact new laws that obligate companies that handle information about us to be “forthright.” I’ll say much more about what this means, but understand that I am focusing primarily on improving the flow of information between companies and consumers. Forthrightness attempts to ensure honest and candid communication between our services and ourselves, on the theory that better information will lead individuals and their advocates to detect and avoid certain harms.
Forthrightness will not directly attend to the substantive wrongs that the spread of technology has wrought, like harassment and abuse, invasions of privacy, or theft and misappropriation. Better information through forthrightness might help individuals anticipate and avoid harms like these, but given the depth and prevalence of these problems, forthrightness alone is unlikely to be a panacea.
A. Harm
The information flow problems I am worried about fall into four categories that forthrightness might help us address: the way companies break promises, manipulate consumer choices, deploy deceptive “dark patterns,” and hide behind clouds of opaqueness and evasion.
Broken Promises. Since its birth, Snapchat has been widely embraced as a way to share disappearing photos with people online. The privacy and discretion the app promised were more than a nice feature; they were the service’s defining raison d’être, the reason a teenager would choose it to communicate rather than Facebook or instant messenger. Snapchat’s entire marketing strategy centered on this feature. It chose a friendly ghost as its mascot. It displayed photos of girls above a time-selection dial that showed the purported ease with which images could be controlled in fine detail, conveying an implied promise of discretion and maybe even titillation.
As we’ve learned too well by now, the central promise of Snapchat has always been a bit of a lie. A well-timed screenshot taken by a recipient can seal your infamy forever (or at least for the rest of high school), and the app doesn’t always tell you a screenshot has been taken. Even worse, third-party apps have been able to use Snapchat’s Application Programming Interfaces (“APIs”) to harvest photos for long-term storage, in an expectation-busting move that anticipated by a few years the recent news about Facebook and Cambridge Analytica, but more on that in a bit.
Manipulation. Like many, I own a smartphone, and I’m addicted to it. As a symbolic stand of defiance against giving my entire life up to this device, I try to keep the GPS location-tracking features of my phone switched off as often as possible. Of course, I am aware that there are other ways my location will be tracked. But by denying apps access to my hyperprecise GPS trail, I am improving my security from certain types of harms, if only marginally.
This means that I live in a perpetual state of deprivation. I am the pitiable guy who opens Google Maps and is forced to pinch and pan to find out where I am standing on the globe, and then to pinch and pan again to tell Google to give me directions to my destination.
This has been a long, lonely, and fruitless battle. Google has no doubt devoted hundreds—if not thousands—of programmer and designer hours to trying to figure out how to convince the few, sad hold-outs like me that today, at long last, is the day I join the masses who are able to pinpoint their location and the nearest Chick-Fil-A at any time, automatically, and within the blink of an eye. On a nearly daily basis, my phone gives me a pop-up imploring me to turn on location-tracking services, using every psychological trick its designers can muster to manipulate me into making the choice. Yet I resist.
I wonder what will happen the day I finally decide to join the non-geolocationally-challenged witnesses of the light. Will the Google Maps UI help me regret my choice? Will it bug me on a daily basis to think about shutting off location tracking, perhaps offering me little examples of the risks to which I expose myself by submitting to systems of comprehensive corporate surveillance of my location? I doubt it. I think this is a ratchet (and maybe a bit of a racket), nudging me toward Google’s self-interest but never nudging others in the direction of self-preservation or privacy.
Dark Patterns. This is just the tip of the iceberg of subtle manipulations we are all subjected to on a near-constant basis. Woody Hartzog writes about “abusive design,” such as malicious pop-up advertising that masks phishing scams by pretending to be games, lotteries, or security warnings. He cites the work of two sets of researchers who have been tracking examples of this kind of behavior. Gregory Conti and Edward Sobiesk collect and categorize examples of “malicious interfaces” that “deliberately sacrifice the user experience in an attempt to achieve the designer’s goals ahead of those of the user.” Harry Brignull has given examples like these a salient name, “dark patterns,” which are “tricks used in websites and apps that make you buy or sign up for things you didn’t mean to.”
Both sets of researchers have created taxonomies of malice. Conti & Sobiesk refer to “[d]istraction,” meaning “[a]ttracting the user’s attention away from their current task by exploiting perception, particularly preattentive processing.” Brignull’s categories are more colorful, for example, “confirmshaming,” a personal pet peeve of mine, in which the option for opting out of tracking is worded to make you feel guilty for your choice. A real-world example is a website pop-up giving two options: “Turn off Ad Blocker” or “I am a bad person.” Brignull collects dark pattern examples like these in an online hall of shame.
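To make the structural asymmetry concrete, consider a toy sketch (the dialog text, style attributes, and the `is_symmetric` check are all invented for illustration, not drawn from any real interface) of how a consent prompt can encode a dark pattern purely through presentation, even when both choices are nominally available:

```python
# Toy model of a consent dialog: each option is equally "available,"
# but asymmetric presentation makes one far more likely to be chosen.
dark_dialog = {
    "prompt": "Can we track your location?",
    "options": [
        {"label": "Yes, I agree",
         "style": {"size": "large", "color": "bright", "position": "center"}},
        {"label": "No, I disagree",
         "style": {"size": "tiny", "color": "grey", "position": "footer"}},
    ],
}

forthright_dialog = {
    "prompt": "Can we track your location? (Used for targeted ads.)",
    "options": [
        {"label": "Yes, I agree",
         "style": {"size": "large", "color": "neutral", "position": "center"}},
        {"label": "No, I disagree",
         "style": {"size": "large", "color": "neutral", "position": "center"}},
    ],
}

def is_symmetric(dialog):
    """A crude forthrightness check: do all options share the same presentation?"""
    styles = [opt["style"] for opt in dialog["options"]]
    return all(s == styles[0] for s in styles)

assert not is_symmetric(dark_dialog)       # dark pattern: choices styled unequally
assert is_symmetric(forthright_dialog)     # forthright: choices presented equally
```

A regulator applying a forthrightness standard would, in effect, be asking a question like `is_symmetric` of real interfaces: are the options presented on equal footing, or is the design itself steering the choice?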
The dark pattern label, in particular, is beginning to enter the popular lexicon, as evidenced by coverage in the popular press, and it might become a powerful rhetorical label to inspire policy changes. If it does, I think a little will be lost, because the “dark” adjective doesn’t pack a sufficient punch. We should consider calling these (or some particularly bad subset of them) “sinister code.” I will continue to refer to them as “dark patterns” in this Essay, just to follow the developing convention.
Opaqueness and Evasion. The final category of information flow problem I am worried about is opaqueness and evasion. Companies go to great lengths to design systems that make it difficult to discover the piece of information consumers might most want to discover or need to know to make an educated decision. Uber designed a system called Greyball, which lets it deny service to customers it has deemed disfavored for some reason. You’re never told that you’ve been “Greyballed”; instead, Uber cars appear as available and then disappear, fading like ghosts, before you can hitch a ride. The company says it invented the service to deal with customers who had harassed drivers, but we now know it has been used for far more nefarious purposes. Uber used Greyball to detect people it suspected to be city transportation regulators, denying them rides, just in case they were engaged in governmental fact-gathering.
The Motherlode. The University of Houston’s Institute for Intellectual Property and Information Law showed exceptional prescience by scheduling the lecture on which this Essay is based for just days after we all learned about the information gathering practices of Cambridge Analytica. As we now know well, Facebook, in an attempt to move from being an advertising medium to a platform, like the Apple iPhone or Google Android system, gave third-party developers a rich API that let them mine data from users who agreed to download their apps. The purpose was to allow for rich and interactive functionality, but the side effect was to give developers a frictionless way to amass massive digital dossiers to reuse and resell.
In doing so, Facebook delivered a masterclass in harm, implicating all four of my concerns. First, Facebook broke the promises it had made to its users by allowing third parties like Cambridge Analytica to take so much information. Second, Facebook created a platform that seemed to provide engaging and silly diversions from the worries of the day when, in reality, it created a platform that could be used to amass massive dossiers about millions of people. Third, the harm was exacerbated by Facebook’s confusing warren of privacy settings, which have long been a paradigm example of dark patterns. Fourth, Facebook reportedly knew about this specific incident for several years but hid news of it until a whistleblower leaked the story to the press. A robust and enforceable obligation of forthrightness would have found fault in all of these failures. At the very least, forthrightness might have led us to learn about these violations long ago. More importantly, a legal obligation of forthrightness might have led Facebook to avoid the path of dishonesty and obfuscation it chose from the outset.
For the remainder of this Part, I will focus on the aspects of the harms above that are related to privacy. Not only is this my primary area of scholarly focus, but it also allows me to build on the deep and rich body of privacy law scholarship. I could have easily focused on closely related issues in coordinate areas of law and policy, such as competition, telecommunications, discrimination, or civil rights. An obligation of forthrightness would help address problems and avoid harms in all of these areas.
B. Addressing Information Flow Problems
Outside the United States, many countries address the types of problems listed above by enacting comprehensive legislation mandating what are known as Fair Information Practices (FIPs). These provide a detailed set of obligations for entities that process information about individuals and tend to be backed by strong enforcement powers invested in national or subnational data protection authorities. A well-known example of such a law is the European Union’s General Data Protection Regulation (GDPR), which took effect in May 2018.
The United States has not yet embraced a comprehensive data protection law based on FIPs. Instead, it has created sectoral privacy laws that regulate specific industry sectors, such as healthcare or education. The only federal law that sweeps broadly across industries is the Federal Trade Commission Act, which empowers the Federal Trade Commission (“FTC”) to police “unfair or deceptive acts or practices in or affecting commerce.” Rather than implement the FIPs or other substantive privacy protections, this law has been interpreted to tackle some of the information flow problems listed above but in a narrow and incomplete way. Despite the best efforts of the FTC, this law has not done nearly enough to stem the harms I focused on above.
My proposal to create a new obligation of forthrightness builds on recent work that has gained attention in the wake of the Cambridge Analytica scandal. Professors Jack Balkin and Jonathan Zittrain have supported treating certain large corporations as trusted “information fiduciaries,” which would impose on them new legal obligations. Professors Woody Hartzog and Neil Richards have similarly suggested creating new legal obligations of “loyalty.” Consider each of these examples of present or proposed law in turn.
1. FIPs and the GDPR
The European Union’s GDPR provides much thicker regulation of personal data use than the FTC’s authority and can illuminate some pathways we might take to encourage forthrightness. The GDPR serves as a leading example of a law based on the FIPs, a set of practices that help protect the privacy of individuals. The FIPs have appeared in dozens of varying lists of principles in national and international laws and in other international declarations.
The GDPR’s version of the FIPs provides data subjects—the individuals associated with the personal data being processed—enforceable substantive rights, including rights to be informed and to object; rights of access, rectification, erasure, restriction of processing, and data portability; and rights relating to automated decision-making. These rights cover far broader substantive ground than the FTC’s unfairness and deception prohibitions.
Many of these provisions predate the GDPR, having first been developed in the Data Protection Directive, enacted in 1995. The GDPR has nevertheless spurred great amounts of attention and activity, primarily because it provides for expanded enforcement. For example, data processors—entities that process data on behalf of “data controllers”—are regulated directly under the law and may even be subject to civil suits by data subjects.
It is a bit too early to know exactly how aggressively the GDPR will be enforced by EU data protection authorities—it first became enforceable on May 25, 2018. Time will tell whether it will be used to punish companies for the use of dark patterns and other information flow problems on which I have focused.
2. The Limits of Anti-Deception at the FTC
For those who think about how we might use law to combat information flow harms like these, the role of the Federal Trade Commission looms large. The FTC has become the single most important privacy regulator in the United States today. It polices privacy using its powers under Section 5 of the FTC Act: the activities of companies must be neither “unfair” nor “deceptive.”
At least as those two words have been developed in the FTC’s cases, the agency seems not to be doing enough work to address the full range of concerns I am worried about. “Unfairness” has been tied to harm. Causing another person harm is deemed unfair and illegal according to how the FTC has interpreted its statutory mandate. But because of the way the FTC has tied its definition of harm to an economic cost-benefit analysis, unfairness is unlikely to be a powerful tool in this area.
This leaves the FTC more often using its power to police “deception” when it proceeds in privacy cases. The vast majority of FTC cases interpreting this word have been raised against flatly deceptive statements. In one of its earliest Internet privacy cases, the FTC settled charges of deceptive behavior by the drug manufacturer Eli Lilly. Lilly had created a Prozac.com website and collected email addresses from people interested in receiving personal email reminders concerning their medication or other matters. Lilly inadvertently revealed the identities of 669 people who had requested this information when it included their addresses on the “To:” line rather than the “bcc:” line of a group email. According to the complaint the FTC filed, this broke Lilly’s promise “to maintain and protect the privacy and confidentiality of personal information obtained from or about consumers.”
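The mechanics of the Lilly error are worth spelling out. In standard email handling, addresses placed on the “To:” header are visible to every recipient, while blind-copied recipients are supplied to the mail server only as envelope recipients and never appear in the message headers. A minimal Python sketch of the difference (the addresses and helper function are invented for illustration):

```python
from email.message import EmailMessage

# Hypothetical subscriber list; in Lilly's case, 669 addresses were exposed.
subscribers = ["alice@example.com", "bob@example.com", "carol@example.com"]

def build_reminder(recipients, blind=True):
    """Build a reminder email, either concealing or exposing the mailing list."""
    msg = EmailMessage()
    msg["From"] = "reminders@example.com"
    msg["Subject"] = "Medication reminder"
    msg.set_content("This is your scheduled reminder.")
    if blind:
        # Correct: no subscriber appears in the headers. The real recipients
        # are passed separately to the SMTP server as envelope recipients.
        msg["To"] = "undisclosed-recipients:;"
    else:
        # The Lilly mistake: every address is visible to every recipient.
        msg["To"] = ", ".join(recipients)
    return msg, list(recipients)

safe_msg, envelope = build_reminder(subscribers, blind=True)
leaky_msg, _ = build_reminder(subscribers, blind=False)

assert "alice@example.com" not in str(safe_msg)  # headers reveal no subscriber
assert "alice@example.com" in str(leaky_msg)     # identities exposed to all
# To actually send the safe version, the envelope list would be supplied
# out-of-band, e.g.: smtplib.SMTP(host).send_message(safe_msg, to_addrs=envelope)
```

The entire privacy breach, in other words, turned on which of these two code paths a bulk-mail script took, which is part of what makes the incident such a clean illustration of a broken privacy promise.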
The FTC has occasionally pushed the boundaries of this definition, for example, by punishing material omissions rather than deceptive statements. This builds, of course, on a rich jurisprudence of material omissions under related legal regimes such as laws against fraud. The FTC has charged, for example, that purveyors of spyware acted deceptively by failing to disclose the software’s surveillance functionality to the person under surveillance.
Most interestingly, the FTC has begun tentatively to explore the concept of “deceptive design.” For example, the agency brought a case against Snapchat for the confusion about the permanence of shared photos described above. The FTC’s novel legal theory was the assertion that Snapchat’s user interface design—as opposed to a written promise or deceptive omission—either exacerbated or maybe even wholly constituted the deception. In its complaint against Snapchat, the FTC argued that “Snapchat has represented, expressly or by implication, that when sending a message through its application, the message will disappear forever after the user-set time period expires.” To support this claim, it included in its complaint a screenshot of the expiry selection dial user interface from the app. The FTC interpreted this design element to be an implicit deceptive statement, a promise of evanescence the app did not deliver.
New theories of deception like this one would address only my first category of worrisome behavior, “broken promises.” Manipulation, dark patterns, and opacity often don’t involve false statements that can give rise to a deception investigation.
3. Information Fiduciaries
Other scholars have proposed ways to add new teeth to the anti-deception work of the FTC (and state attorneys general, most of whom also have the power to police deception and unfairness). These scholars have focused primarily on identifying which companies ought to be subjected to new obligations, rather than elaborating on the substance of these obligations.
Jack Balkin and Jonathan Zittrain have called for the recognition of new “information fiduciary” obligations, drawing inspiration from the fiduciary duty obligations of principal-agent law. Just as a financial institution is required by both statutory and common law to protect the sensitive information of its customers, so too should online service providers such as email providers, search engines, and social networking services owe special duties to their users.
Balkin and Zittrain argue that digital businesses should be obligated to, at the very least, “not act like con men,” meaning they should not “induc[e] trust in end users and then actively work against their interests.” They boil down the obligations they propose to three general sets of principles: obey the fair information principles, do not discriminate unfairly, and do not abuse trust. In the abstract, these sound reasonable, but in execution, they are awfully vague and underdeveloped. Forthrightness has a concreteness and specificity that the role of an information fiduciary does not.
They can be forgiven for not fleshing out these substantive obligations better because they seem much more interested in who should be considered an information fiduciary rather than what this designation requires. Balkin, for example, writes that we would treat some businesses as information fiduciaries “because of their importance to people’s lives, and the degree of trust and confidence that people inevitably must place in these businesses.” He lists as examples “Facebook, Google, and Uber.”
I sense that Balkin and Zittrain would place the yoke of information fiduciary status on a small number of companies, like the three giants listed above. They seem content with the relatively unregulated approach we currently have for the vast majority of companies, which I have already made plain will leave us vulnerable to exploitation and other harm. This squares with at least some of Professor Zittrain’s other writing, which has emphasized the critical importance of light regulation for tech companies as a way to shelter free speech and innovation.
Forthrightness would reach much further. Rather than cherry-picking a few companies for special status, it would obligate every company that holds personal information to be forthright, and the more sensitive the information held, the more the obligation of forthrightness should require. The Cambridge Analytica story demonstrates the peril of living in a world where small companies can extract sensitive information about us while we assume that pre-information-age obligations of anti-deception and anti-unfairness are all we need to protect us.
Woody Hartzog and Neil Richards build directly on the information fiduciary concept, framing their proposal for reform using a different word: loyalty. Loyalty “is an obligation to avoid self-dealing at the expense” of the user. Like Balkin and Zittrain, Hartzog and Richards offer the tantalizing prospect that loyalty will help deal with a wide swath of behavior we consider unsavory, if not strictly deceptive. They suggest that it is disloyal to quote higher prices to Mac users, as the travel website Orbitz is alleged to have done; skew what news users get to view to test whether they can manipulate some users to be happier and others sadder, as Facebook infamously did; or use Big Data unfairly to discriminate. They suggest loyalty can also be a sword to combat perhaps milder practices, such as sharing anonymized habit data with outside researchers, creating behavioral profiles to sell to advertisers, and maybe even mining data to improve the company’s own services.
Forthrightness and loyalty overlap quite a bit. There might be very little that one concept accomplishes that isn’t also covered by the other. Still, loyalty carries some connotative baggage that might limit it, which forthrightness does not. The word loyalty focuses on relationships and obligation. Strictly speaking, most people are loyal to a very small number of people: mostly family and dear friends. In more casual conversation, we might extend our sense of loyalty a bit further: to an alma mater, a favorite restaurant, or a professional sports team. In law, directors and officers owe corporations their duty of loyalty.
Loyalty seems like an incomplete fit for the casual, shifting, memetic information ecosystem in which we find ourselves these days. I feel no loyalty to an app I downloaded five minutes ago to help me time the food in my oven, so it seems odd to suggest it owes me any loyalty in return.
But my project supplements rather than diverges from loyalty. It is better to supplement words of relation like loyalty with a word of intrinsic character like forthrightness. That oven-timer app, which does not owe me loyalty, still ought to be forthright.
C. Forthrightness Defined
I propose a new legal obligation: companies that process personal information must be forthright in what they choose to communicate, in the content of those communications, and in the design of the interfaces they use to communicate. From the plain meaning of the word, to be forthright is to be “direct and outspoken,” “straightforward,” and “honest.” Dictionaries suggest that the word carries the connotation of frankness, directness, and maybe outspokenness or bluntness. Antonyms are “secretive” and “evasive.”
Forthrightness carries additional connotative baggage. It usually arises under trying conditions, when something arises that less forthright individuals would be tempted to deny or hide. Nobody calls a person who speaks happy truths forthright; only the person with secret, negative information to reveal earns the compliment. The forthright individual makes an affirmative choice to volunteer this information, often against an instinct for self-preservation.
A forthrightness mandate requires companies to be honest, direct, and candid. The most important change from today’s anti-deception rules is a shift of the primary burden of investigating and clarifying what is understood from the consumer to the company. Forthrightness would impose an affirmative obligation to warn a consumer about conditions that likely matter to the consumer. A forthright company will anticipate what a consumer does not understand and volunteer that information.
Forthrightness would require much more than the FTC’s anti-deception authority does. Unlike the obligation not to deceive, an obligation of forthrightness would implicate what companies choose not to disclose, not only what they choose to say. Withholding information that would help a recipient understand the context of information sharing would not be forthright.
Forthrightness will also focus intently on how information is presented. This legal obligation is expressly designed to combat dark patterns: sinister code designed to obscure, confuse, and otherwise take advantage of the cognitive biases of users.
In a sense, forthrightness shares much in common with another obligation that Hartzog and Richards propose imposing on companies: honesty. Honesty requires “an affirmative obligation . . . to correct misinterpretations and to actively dispel notions of mistaken trust.” It “includes an obligation to make sure that trusters are actually aware of things that matter to them.” They hold out the FTC’s case against Snapchat for “deceptive design” as a demonstration of what the obligation of honesty looks like. “By focusing on the signals given off by the user interface of the software and design of marketing materials, this complaint encourages more honest and trustworthy design.”
What, then, is the point of offering a new word if Hartzog and Richards already identify a common word that does much of the same work? First, honesty and forthrightness are not identical concepts; they are best seen as points along a continuum of correcting misinformation. Forthrightness suggests a higher obligation to identify and share discreditable information than mere honesty does. For example, using any of the “dark patterns” mentioned earlier is a failure to be forthright; it is not so clear that an obligation of mere honesty would address all of these.
Second, honesty is such a commonplace word with a broad range of shadings and connotations that I worry that it will be misconstrued or manipulated to mean something less robust than Hartzog and Richards have proposed. Forthrightness, being a narrower and less common word, is less susceptible to this kind of treatment. The tendency for words to be proposed by well-meaning reformers but mangled to mean nearly the opposite during implementation has a long pedigree in privacy scholarship and policy. To appreciate this history, let us take a momentary diversion: a review of the odd corner of privacy scholarship focused on identifying and coining important words and phrases.
D. Privacy’s Words
To the uninitiated, this focus on individual words—loyalty, forthrightness, deception—may seem strangely technical and reductive. My proposal belongs to a crowded yet confessedly odd corner of privacy law scholarship: the academic deep dive into a single word or phrase as a way to reconceptualize privacy or to propose new approaches to privacy protection. As this model has not itself been subject to scrutiny, consider the virtues and shortcomings of this approach before we return to forthrightness.
1. The Thesaurus as Scholarship
Because privacy is a vague and contested concept, those of us who try to translate it into actionable legal principles are left grasping for analogs. This has directed privacy law scholars to the thesaurus. There is a distinctive strand of privacy law scholarship made up of arguments, each advancing a particular word or phrase to try to illuminate the legal mechanisms and outer boundaries of what privacy law ought to require.
There are three distinct subgenres of using the thesaurus to argue about privacy. Scholars propose words or phrases to conceptualize privacy, to describe ways to use law to protect privacy, and to name specific legal duties or obligations on entities or individuals to bring about privacy. These proceed from the general to the specific.
Work in the first category tries to conceptualize the word “privacy” through reference to another word to try to capture the value or importance of privacy. In the 1960s—when you could have counted privacy law scholars on one hand—the word was “control.” Privacy is about the control of information about ourselves, according to Alan Westin and Charles Fried. Privacy law has never really shaken this connection to control, and many have argued that control is still at the heart of the law’s current focus on notice-and-choice and on terms of service and privacy policies.
More recently, people have tried to root privacy in different soil, based on other words from the thesaurus such as autonomy, intimacy, and dignity. Scholars like my colleague and former deliverer of this lecture, Julie Cohen, have argued that when we are deprived of privacy, we are deprived of autonomy and cannot develop into the potential versions of ourselves we would otherwise become.
Over the past decade or two, other pages have been ripped from the thesaurus by privacy law scholars: Julie Inness and Jeff Rosen have talked of privacy as intimacy, and Paul Schwartz has described the relationships between privacy and democracy.
Perhaps the most influential example of this genre has been the work of Helen Nissenbaum, who has argued that privacy is defined only in context—universal accounts of privacy are likely to be too generic to be actionable. Specifically, Nissenbaum develops a tool called “contextual integrity.” Contextual integrity is a “benchmark for privacy,” one that measures whether “informational norms are respected” given the specific context.
The second subgenre of privacy thesaurus writing identifies words or phrases that describe approaches for using the law to protect privacy. Balkin and Zittrain’s proposal to treat some online giants as information fiduciaries falls into this category.
A prominent work in this subgenre is by Professors Deirdre Mulligan and Ken Bamberger, who argue that privacy tends to be protected “on the ground” even if it’s not protected robustly “on the books.” By this, they mean that although incomplete and thin privacy laws may seem insufficient to protect privacy, they nevertheless have been enough to spur companies to hire empowered privacy professionals, such as Chief Privacy Officers, who have, collectively and informally, constructed a thick, meaningful layer of de facto privacy protection.
The third subgenre of privacy thesaurus writing focuses on specific legal duties or obligations we can use to protect privacy. The FTC’s unfairness and deception authorities fall in this group. Hartzog and Richards’s argument for a legal obligation of loyalty springs from their conception of privacy as trust enhancement. I place forthrightness in this third category. My proposal isn’t about conceptualizing privacy—privacy isn’t defined as forthrightness. Nor is it a broad systemic approach to protecting privacy at the level of “information fiduciaries” or “privacy on the ground.” It is a specific, concrete legal obligation we ought to create—the better to protect privacy and avoid harm.
2. Resisting Redefinition
This is my first foray into the thesaurus style of scholarship. I’ve been skeptical that the way to address privacy harms is to find one new word or phrase that will help us see privacy or privacy regulation in a brand-new light. As I join this particular fray, I will consciously resist what I see as the great downside of some of the most influential of this work: the tendency to define a word or phrase that can easily be co-opted by those who are more interested in maximizing the free rein of corporations than in protecting the privacy of individuals.
I’m thinking in particular of “privacy on the ground,” by Bamberger and Mulligan, and “privacy in context,” by Nissenbaum. Both ideas are phenomenally influential; it could be argued that these two ideas have dominated policy debates over the past ten years. My concern is that both of these have been co-opted by industry participants who have used them to argue that our thin, sectoral privacy laws can still lead to meaningful privacy protection—probably in ways that have been contrary to what the progenitors of these ideas intended.
Industry officials attempting to protect the status quo of light-to-no privacy regulation have found it easy to hijack these ideas to serve their purposes. They have learned to cite the phrases as shorthand for simple ideas without attending to the rigorous way in which the phrases were originally intended to be used. This is scholarship as a bumper sticker or magical incantation.
“Privacy on the ground” has been used to support arguments that privacy protection in the United States is strong and robust, even though the formal legal framework is weak, thanks to the hard work of privacy professionals. Defenders of industry like Omer Tene cite this work frequently to urge companies to name Chief Privacy Officers. These calls raise concerns about self-interest, given that Tene works for the International Association of Privacy Professionals, a dues-based organization that grows in power and wealth with the rise and expansion of the ranks of Chief Privacy Officers.
“Privacy on the ground” has also played a central role in debates in the EU about the adequacy of American privacy law for the GDPR and Data Protection Directive data transfer rules. The stakes are especially high: if the U.S. system of privacy law is not deemed adequate, then companies based in the United States will not be permitted to transfer data out of the EU. In a sort of “who are you going to trust, me or your lying eyes?” move, U.S. advocates have argued that we shouldn’t look exclusively to formal privacy law because of our strong protections on the ground.
The hijacking or co-opting of “privacy in context” has been even more widespread and especially pernicious. One, after all, could argue that people like Tene and the defenders of American privacy law are using Bamberger and Mulligan’s work faithfully, not hijacking it. It’s much more difficult to defend the way Nissenbaum’s work has been warped beyond recognition. Nissenbaum defines a rigorous—you might even call it arduous—formal methodology for identifying privacy violations. Bumper sticker users of “privacy in context” use it instead as shorthand for a deregulatory argument against virtually any proposal for meaningful privacy law: if all privacy is defined by a particular context, then any proposal to regulate can be opposed as sweeping too broadly to capture the particulars of the context.
The pièce de résistance of the intellectual hijacking of privacy in context is the Obama Administration’s embrace of context in its report on the Consumer Privacy Bill of Rights. To be clear, this document provided some intriguing analyses about a new legislative framework for protecting privacy. But ultimately, it proved too watered down to be useful, and much of the watering down came by using the idea of context, shorn of the Nissenbaumian rigor.
The document advances a modified set of the FIPs. The key modification it proposes is to replace the lynchpin FIPs of “Purpose Specification” and “Use Limitation” with the new FIP of “Respect for Context.” This is a significant act of sub rosa dilution. Rather than being forced to tell consumers the purpose of data collection and to abide by those disclosures, under the proposed new FIP, companies would be allowed to use data for all sorts of unstated purposes, so long as they are “consistent with the context in which consumers provide the data.” If we read this as rigorously and formally as Nissenbaum would, this might not seem like a distressing expansion. But the report betrays that it is using “context” in a much less formal, much more worrying way.
These acts of intellectual hijacking result in part from the way the ideas were originally expressed by their scholar-architects. Both “privacy on the ground” and “privacy in context” are expressly status quo enforcing, and the status quo of our legal system favors relatively unfettered corporate action at the expense of privacy. “Privacy on the ground” instructs us not to worry so much about formal protections for privacy because privacy is in better shape than the bare law suggests, thanks to the work of privacy professionals. “Privacy in context” urges us to avoid sweeping or universal accounts of privacy harm in the face of technological change—directing us instead to fight smaller skirmishes with fewer interested parties.
Even so, with a little extra work, these authors could have done a better job anticipating and blunting the threat of co-option. In this Essay, my first embrace of the thesaurus, let me try to do better: I would like to expressly imbue my word, forthrightness, with a resistance (if not immunity) to redefinition. This word belongs to me, I now assert, and I intend for it to be used precisely and without compromise. I hope to accomplish this goal in two different ways.
The first is by simple declaration. This word isn’t meant to be redefined. It is instead meant to have a specific and precise definition. If you use this word in a watered-down way, you are not using it correctly.
The second way that I hope to establish an unwavering definition is far less brash: I have tried to define the word to be free of intentionally vague and open-textured meaning, like context, reasonableness, or balance. A company speaks with forthrightness only if it communicates with its users in a completely honest, direct, and candid manner. Forthrightness is forthrightness in the abstract, not forthrightness in the given circumstances or forthrightness given competing interests.
E. Forthrightness Applied
I do not think forthrightness springs naturally from any current privacy laws. I’m not arguing for clever reinterpretations of positive law to suggest that we’ve had forthrightness hidden in plain sight all of this time. Instead, we need new legislation to create new obligations of forthrightness. I am not unaware of the challenges that the enactment of this kind of legislation would face, but at this time, I will not focus on the political viability of the proposal. Instead, I will assume we can surmount the obstacles to this kind of change and focus on what an obligation of forthrightness would entail.
Let us imagine that new federal legislation gives the FTC the ability to police a requirement that companies act with forthrightness regarding their data handling practices. Let us assume that this power will mirror the FTC’s current authority to police deception and unfairness without assuming any novel investigative tools or procedural requirements.
Given the expansiveness with which I am defining forthrightness, Congress might soften the bite of the law I am imagining by imposing relatively mild remedies for violations. A ruling that a company failed to act in a forthright manner might result in mild continuing oversight and reporting requirements and no financial penalty, at least for relatively minor violations.
In the alternative, we might end up with forthrightness without legislation. It could become the subject of private enforcement, particularly by powerful platforms. There is precedent for this kind of self-regulation: Google’s search engine detects and warns users about websites it suspects of delivering malware or phishing, and most social networks refuse to run ads for industries deemed especially subject to abuse, including payday lending and cryptocurrencies. Powerful platforms could similarly detect, mark, and even ban websites and apps that engage in non-forthright communications with users. For the remainder of this discussion, I will refer to a hypothetical law, but the analysis applies also to private action.
2. What Forthrightness Prohibits
What would forthrightness look like applied to the types of crises discussed earlier? Once again, forthrightness goes beyond anti-deception. As a parent, I think quite often about the distance between three spots on the continuum of veracity: not lying, being honest, and being forthright. “Did you break the vase?” elicits a reluctant and mumbled “yes” from the child who is not lying and perhaps a more elaborate explanation of the incident and regret from the honest child. The forthright child doesn’t need to be asked—he owns up to the violation as soon as you turn the key in the front door.
The forthright company would feel far more obligation to make affirmative statements about the way it uses personal information and how these uses expose users to risk of harm. It would not permit Uber to use a system like Greyball, which silently deprives a user of the benefit of a service, without explaining the underlying reasons why. It would certainly not permit Facebook to allow third-party app developers to harvest sensitive information about the friends of users without a clear warning and an opportunity to decline. Google’s harassing location reminders alone might not fail for want of forthrightness, but Google’s decision not to similarly prompt those who have enabled location tracking to consider the risks of this surveillance and to shut it off might be deemed not forthright.
Forthrightness would render illegal many of the categories of dark patterns that Harry Brignull has identified. Every dialog box that seeks consent for tracking, perhaps spurred by a law that requires meaningful consent for data collection and use such as the GDPR, would be subjected to this new standard of forthrightness. It would be considered not forthright to present a button marked “I accept” in a bright font while offering “I decline” only as a faint, almost imperceptible alternative.
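To make the asymmetric-prominence dark pattern concrete, here is a minimal, hypothetical sketch of how an enforcer or platform auditor might flag it. Everything here is invented for illustration: the dialog representation, the crude prominence score, and the 1.5x threshold are assumptions, not anything drawn from actual FTC practice or any real tool.

```python
# Hypothetical audit of a consent dialog for asymmetric prominence.
# The style attributes, scoring function, and threshold are all invented.

def prominence(button):
    """Crude prominence score from invented style attributes."""
    return button["font_size"] * (1.0 if button["high_contrast"] else 0.3)

def is_forthright_dialog(accept, decline, max_ratio=1.5):
    """Flag dialogs whose accept option vastly outshines decline."""
    return prominence(accept) <= max_ratio * prominence(decline)

accept = {"font_size": 18, "high_contrast": True}    # bright "I accept"
decline = {"font_size": 9, "high_contrast": False}   # faint "I decline"
print(is_forthright_dialog(accept, decline))  # False: a dark pattern
```

The point of the sketch is only that the violation is mechanical enough to detect: the two choices can be compared directly, without asking what the company intended.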
The treatment of dark patterns best demonstrates situations in which forthrightness might play a role that mere loyalty or information fiduciary duties do not. Because information fiduciary obligations do not apply to every participant in the information economy, they probably won’t apply to small, start-up purveyors of dark patterns. This allows users to be preyed upon by misleading and harmful techniques, at least until a company attains a particular threshold of size. Similarly, an obligation of loyalty might not extend to small entities.
In fact, I would argue that while duties of loyalty or information fiduciary obligations would fail to root out many dark patterns, almost all dark patterns by definition lack forthrightness. Because loyalty and information fiduciary obligations are far less sharply defined, they are likely to end up watered down well below what is necessary. I am intentionally defining a label meant to be strict and onerous, not one easily weakened during political negotiation.
III. Forthrightness and Artificial Intelligence
Forthrightness might do even more work if we extend it to the roiling debates we have been having about the rise and spread of artificial intelligence. Machine learning techniques, abetted by the growth of massive databases of information full of evidence of human behavior, generate models that their developers claim can replace human decision makers in a wide variety of contexts. These debates go beyond information privacy to touch on fairness, discrimination, and due process. At bottom, these are debates about the future of decision-making, the outer limits of what we can claim to know, and (in its loftiest forms) the future of what it means to be human.
A. Harmful Artificial Intelligence
In a veritable explosion of scholarly activity, many writers have been focusing in recent years on the potential harms of the rise of powerful artificial intelligence techniques. These articles anticipate the continued spread of automated decision-making, with machine learning models replacing the roles played by human beings across private and public sectors. The issues that have been raised run a wide gamut of interrelated concerns.
Artificial intelligence techniques operate like a black box, subjecting those who are judged to opaque, mysterious decision-makers. When they replace public sector decision-makers, such as judges or administrative agencies, artificial intelligence decision-makers do not respect principles of due process.
The models these techniques generate replicate the mistakes embedded in the underlying data, but in ways that are not easy to discern or correct. Machine learning techniques fall prey to a “garbage in, garbage out” phenomenon, leading to decisions that replicate the human biases hidden in the underlying data. Data sets tend to underrepresent or overrepresent populations that already suffer from relative powerlessness, such as the poor or racial minorities. This can lead to discrimination or unfairness.
Machine learning models tend to generate feedback loops, essentially operating like self-fulfilling prophecies. Cathy O’Neil documents examples of this tendency: employers refuse to hire applicants with bad credit scores, denying them the kind of financial boost they might use to improve those scores; and judges turn to complex models to determine whether a criminal defendant or convict is a recidivism risk, but these models tend to penalize racial minorities and poorer people, leading to denials of bail and longer prison sentences, dooming them to higher odds of repeat offending.
B. From FAT to Forthrightness
The most sophisticated thinking about constraining harmful artificial intelligence seems to have coalesced around three words: fairness, accountability, and transparency. The major academic conference in this space is named FAT*, an acronym of those three words. We could adapt the concept of forthrightness from information privacy to the regulation of AI to do work that none of these three values do.
Notice how the three components of FAT* focus on different actors and different stages in the lifecycle of an AI system. “Fairness” is generally discussed as a desirable attribute of the AI system itself. Because AI models make decisions about the world, we make the anthropomorphic move of requiring them to be fair. We talk less about the person developing or deploying the AI system as the one who must be fair.
Transparency also focuses on the system rather than on the entity deploying the system. It is a prayer for human understanding of the internal mechanisms within the AI system.
Transparency suggests some ability to “lift the hood” on the inner workings of the AI system. In practical terms, transparency often amounts to suggestions for disclosing the algorithm and the data used to train the system.
Related to transparency are the attributes of explainability and interpretability, which have attracted much attention of late. These also focus on the AI system itself. These approaches suggest that we can favor AI techniques that produce results revealing the “weights” that different variables are accorded. Another promising approach is permitting users to test scenarios—if I had this much less credit card debt, how would my credit rating change under this model, or if I answered this question in your survey in the negative, how would my risk-of-recidivism score have changed?
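Scenario testing of the kind just described can be sketched in a few lines. The toy linear credit model below, including its weights and feature names, is entirely invented for illustration; real scoring systems are proprietary, and the point is only the mechanics of a “what if” probe.

```python
# A toy linear credit model (weights and features are invented) used to
# illustrate scenario testing: change one input, observe the score change.

WEIGHTS = {"income": 0.004, "card_debt": -0.01, "on_time_payments": 2.0}
BASE_SCORE = 600

def credit_score(applicant):
    """Score an applicant dict with the toy linear model."""
    return BASE_SCORE + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def what_if(applicant, feature, new_value):
    """Return the score change if one feature were different."""
    alt = dict(applicant, **{feature: new_value})
    return credit_score(alt) - credit_score(applicant)

applicant = {"income": 50_000, "card_debt": 8_000, "on_time_payments": 24}
# Halving card debt removes 0.01 * 4,000 of penalty: a 40-point gain.
print(what_if(applicant, "card_debt", 4_000))
```

Notice that the probe treats the model as a black box: a user (or auditor) needs only the ability to submit hypothetical inputs, not access to the model’s internals.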
Accountability is the one attribute of the traditional trio that focuses more on the entity developing and deploying the AI model than the AI system itself. We want rules, akin to the due process obligations we impose on the government, to force these entities to bear responsibility for what they have created. We especially seek such rules when AI systems make mistakes, hoping they will encourage those who control the systems to detect and correct the errors.
Like accountability and unlike fairness or transparency, forthrightness focuses almost entirely on the decisions made by the entity—e.g., the company or government agency—that develops and deploys the AI system. Forthrightness is especially relevant in moments of failure, when an AI is revealed to be deeply flawed or prone to egregious errors.
C. Inoffensive but not Forthright Code
A new obligation of forthrightness would do the most work in moments of failure or disappointment. Consider a recent but already infamous example. In 2015, a software engineer noticed that Google’s image recognition software was labeling his black friends “Gorillas.” The firestorm was immediate, and the lasting impact on debates over AI has been profound. This example has been held out as a salient symbol of the way bias—and even racism—can creep into AI. It shows that as we increasingly make automated decisions with the assistance of AI, we will continue to face fraught questions about race and bias.
Earlier this year, Wired conducted a study suggesting that Google has responded to this incident in a manner that I find less than forthright. The article explains:
WIRED tested Google Photos [a year after the Gorilla incident] using a collection of 40,000 images well-stocked with animals. It performed impressively at finding many creatures, including pandas and poodles. But the service reported “no results” for the search terms “gorilla,” “chimp,” “chimpanzee,” and “monkey.”
It seems to be just those four words, words with a notorious history of being hurled at people to strip them of their humanity. Other primates passed the filter, however, including “baboon,” “gibbon,” “marmoset,” and “orangutan.” Google apparently chose not to fix the underlying problem but instead simply scrubbed its results.
This response was a failure, one captured neither by the FAT* formulation nor by hopes for interpretability. Let’s call it a failure to be forthright. A forthright approach to dealing with bias or outright racism would be to do the work necessary to improve the image detection algorithm itself.
I propose a distinction between two broad approaches to addressing failures like this. You can fix a broken AI by making it fundamentally better, or you can fix it by applying a thin veneer, one that masks its shortcomings from human users. Fixing Google’s image recognition service to make fewer mistakes is an example of the former. Allowing offensive identification but suppressing it from human view after detecting that it is offensive is an example of the latter.
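The thin-veneer half of this distinction can be made vivid with a short, hypothetical sketch. This is not Google’s actual code; the classifier stand-in, function names, and suppressed-term list are assumptions built from the Wired account above. The key feature is that the underlying model is untouched—only the visibility of its output changes.

```python
# Hypothetical illustration of a "thin veneer": the underlying model
# still produces the offensive label; a hard-coded blocklist merely
# hides certain results from view. All names are invented.

SUPPRESSED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def classify(image):
    """Stand-in for a real image classifier; returns (label, confidence)."""
    # A real system would run a trained model; we fake one flawed result.
    return ("gorilla", 0.92)

def veneered_search(images, query):
    """Return images matching the query, with blocklisted terms scrubbed."""
    if query.lower() in SUPPRESSED_LABELS:
        return []  # "no results" -- the veneer, not a fix to the model
    return [img for img in images if classify(img)[0] == query.lower()]

print(veneered_search(["photo1"], "gorilla"))  # [] -- nothing to see here
print(classify("photo1")[0])                   # yet the label persists
```

A fundamental fix, by contrast, would change what `classify` returns; the veneer changes only what `veneered_search` shows us.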
Because I am not myself an AI engineer, at least not a very good one, I have proposed this distinction to experts over the past few years. Those I have asked uniformly agree that this distinction is tenable, even if the line between the two is a little fuzzy. Fundamentally fixing AI and applying a veneer are probably endpoints on a continuum.
I am deeply concerned about the thin-veneer approach such as the one taken by Google. To understand why, let’s examine what it means to settle for a thin veneer. The first problem is that the thin veneer is just that—a veneer, a mask, a dodge—one that obscures a deeper, underlying problem. Thin veneers, if they work well, allow us to rely on decision-makers that, deep down, are making decisions we would find troubling or offensive, without letting us in on the secret. I have a fundamental problem imagining a future world in which essentially broken AIs—AIs that cannot make the decisions we want them to make—are trusted because we don’t know we should be concerned.
Second, I worry about the mission creep for bad AI this will allow. AI systems tend to be developed for a narrow purpose but expanded for broader use. Consider Cathy O’Neil’s terrifying book, Weapons of Math Destruction. In it, she chronicles the way that systems concocted for a narrow purpose—say, predicting recidivism of those arrested for a crime to help a judge come to a decision about pre-trial bail—might be used for a different, even more consequential purpose—like deciding how long a convicted person should be in prison. Another example is facial recognition software, which started out in use by intelligence agencies but has migrated to law enforcement and, most recently, to immigration officials at U.S. international airports.
My final objection to the thin-veneer approach stems from thinking about what it means to “solve” the problem of determining what a human will find offensive. Don’t be confused by the Google Gorilla example—it seems that Google did not deploy any complex technique to figure out that these labels were offensive. It seems they simply hard-coded four terms into an exclusion filter.
But it might be that decoding the mystery of human offensiveness turns out to be an interesting research question in its own right, one a company like Google or Facebook might see as a worthwhile investment of research time and resources. They will naturally be drawn to this because there is a direct connection between this work and their bottom line.
This is an extraordinarily dangerous undertaking. The AI researcher trying to crack the secret of offensiveness is building models that will help researchers design AIs that, deep down, are biased or racist or flawed in other ways but are good at hiding it from us. At the risk of sounding entirely conspiratorial, this seems like a modest step toward Skynet, the all-powerful AI from the Terminator movie series that came to understand humans to be the enemy. One way to organize this fear of thin-veneer AI masking the flawed beating heart of AI is to name it—but again, I don’t think “honesty,” “fairness,” “accountability,” or “transparency” quite captures it. I propose we refer to the use of thin-veneer approaches like this as a lack of forthrightness.
Would it be better if Google simply allowed the racist misidentifications? I’m not here to defend that position, but we need to recognize that there might be a silver lining to periodic moments of collective outrage such as the Gorilla misidentification. There is value in being offended by what an AI produces because it reminds us that this technology, which might be put in charge of lives and could even take away our jobs, will do this work in different ways, deploying uncanny, inhuman capabilities. The firestorm around the Gorilla result is the periodic jolt we need to remind ourselves of this fact.
Consider another firestorm sparked by the release of an AI by a giant corporation. In March 2016, Microsoft unleashed on the world a Twitter bot called “Tay,” an acronym for “thinking about you.” Trained to mimic the language patterns of a 19-year-old American girl, Tay was released to the public and began interacting with the world.
Microsoft did not design Tay in a forthright manner. It gave Tay a blacklist of items it was not supposed to discuss, such as Eric Garner, the New York man who was choked to death by NYPD officers who were trying to arrest him on suspicion of stealing cigarettes. Apparently, Microsoft wanted its AI not to engage with discussions about this still-raw news story. Despite these self-protective measures, Tay very quickly began spewing racist and sexually-charged messages in conversations with other users. Within sixteen hours, Microsoft suspended the account, ultimately taking it offline.
I contend that Google and Microsoft have learned the wrong lessons from these incidents. Shortly after the Gorilla misidentification, a top Google AI researcher tweeted that “We’re also working on longer-term fixes around both linguistics (words to be careful about in photos of people [lang-dependent].” He immediately followed up with a second tweet that read, “and image recognition itself. [sic] (e.g., better recognition of dark-skinned faces),” thereby crystallizing the thin veneer and the deeper solution in a tidy pair of tweets. Microsoft responded that “[W]e’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
These responses are not forthright. They are thin-veneer responses when fundamental responses are needed. Google should develop an image recognition service that accurately recognizes humans as humans, not animals. Microsoft’s problem is deeper. If the problem it was trying to solve was to develop an AI that sounds like the humans on Twitter, then it succeeded too well. It might be that there isn’t a forthright way to approach that problem. The proper response is not to try to sound like humans on Twitter!
D. AI Canaries in Coalmines
Others might respond that what I am dismissing as not forthright should instead be celebrated as ethical corporations exercising discretion. They might even point back to my parenting analogy—we teach our children to be forthright within the family but to exercise greater discretion around friends and even more around strangers. Discretion prepares our children for social acceptance, which is all Google and Microsoft are attempting to do.
There is danger in exercising this kind of discretion with powerful AI systems. The thin veneer masks something important going on beneath its surface. We should consider offensive AI results to be canaries in the coal mine signaling the inhumanity of AI. We surely shouldn’t celebrate the racist and misogynistic behaviors of AIs like Google’s image recognition and Microsoft’s Tay. But we shouldn’t squelch them either, not if squelching requires the investment of rare and valuable resources—the time and brainpower of AI researchers—toward what we should see as socially harmful work.
I am not proposing regulation to enforce forthright AI, at least not yet. The concept of forthrightness I am proposing isn’t fully developed enough to reduce to firm rules or standards. I am instead making an appeal to policymakers and other engaged members of the public, in particular, the humans who help companies understand how to use AI in an ethical manner, as well as to the public thought leaders who can bring social pressure to bear on companies who miss this mark.
I am also not trying to limit what happens behind closed doors within AI research labs. The single mistake both Google and Microsoft made was the decision to release their AIs on the world. It was the public’s interactions with Google’s racist image matching and Microsoft’s bot’s hate speech that caused these firestorms. Google and Microsoft should have been allowed to cultivate these AIs—ethical warts and all—within smaller, closed communities.
But companies deciding whether to unleash a new AI on the world need to understand that only some of their choices can properly be considered forthright. A company can unleash the AI on the world, taking the risk that it will do something that subjects the AI and the company to criticism. If a company is not comfortable taking on this risk, it should limit the scale of the release, sharing the AI only with testers who expect the risk and thus will not take as much offense or mistake the behaviors of the AI for the attitudes of the company.
There is, of course, one more choice for the company seeking to balance the risk of offending the public with the reward of AI advances. Sometimes, it should decide that the risk is so high that the only proper choice is “not to play.” Too often, the burgeoning field of AI ethics seems narrowly focused on finding the most ethical version of a given AI system. A forthright approach would ask more often: can we release this without being accused of being racist or offensive or terrible? If not, we should delve past the thin veneer, seeking the forthright answer. And if that’s not possible, let’s solve a different problem.
The two conceptions of forthrightness spelled out in the two halves of this Essay are not identical. The way forthrightness plays out in consumer protection and artificial intelligence overlap but diverge from one another in some important ways. They both speak, however, to the need to begin to place more responsibility on the backs of the technology companies that provide digital products and services. The many revolutions of the past two decades—the Internet, the smartphone, the cloud, the Internet of Things, advances in artificial intelligence—have created as much harm as good to date, at least as I see the world. Unfettered, unwatched companies too often choose paths that sacrifice individual stability and well-being. We need to restore the power of the individual if we hope to restore some of what we’ve lost. Forthrightness won’t work a revolution, but it’s a step in the right direction.
Infra Section II.B.2. See Woodrow Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies 126 (2018).
See Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Cal. L. Rev. 671, 683–84 (2016).
See Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633, 680 (2017); Deven R. Desai & Joshua A. Kroll, Trust but Verify: A Guide to Algorithms and the Law, 31 Harv. J.L. & Tech. 1, 6–7 (2017).
See, e.g., Joseph Turow, The Daily You: How the New Advertising is Defining Your Identity and Your Worth 80–81, 173–75 (2011); cf. Antonio García Martínez, Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley 388 (2016) (describing the effectiveness with which companies like Facebook and Google use personal information to target users for marketing).
See, e.g., Siva Vaidhyanathan, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy 130–33, 175, 181 (2018).
See Danielle Keats Citron, Hate Crimes in Cyberspace 226–33 (2014).
I owe a great debt for this Essay to Woody Hartzog’s important new book. Hartzog, supra note 1, at 52, 74, 139, 161.
Hartzog, supra note 1, at 21–22; Press Release, Fed. Trade Comm’n, Snapchat Settles FTC Charges That Promises of Disappearing Messages Were False (May 8, 2014), https://www.ftc.gov/news-events/press-releases/2014/05/snapchat-settles-ftc-charges-promises-disappearing-messages-were [http://perma.cc/NW5U-Q4E7] [hereinafter Press Release, Snapchat Settles FTC Charges].
See Hartzog, supra note 1, at 21–22.
Complaint at 1–2, In re Snapchat Inc., No. C-4501 (F.T.C. Dec. 23, 2014) [hereinafter Snapchat Complaint].
Id. at 3.
Id. at 2.
Id. at 3–4.
See, e.g., Matthew Rosenberg et al., How Trump Consultants Exploited the Facebook Data of Millions, N.Y. Times (Mar. 17, 2018), https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html [http://perma.cc/Q44U-KFQZ].
See, e.g., Keith Collins, Google Collects Android Users’ Locations Even When Location Services Are Disabled, Quartz (Nov. 21, 2017), https://qz.com/1131515/google-collects-android-users-locations-even-when-location-services-are-disabled/ [http://perma.cc/TL2S-EN45].
Cf. Brittany McGhee, How to Stop Android Apps from Accessing Your Location, AndroidPIT, https://www.androidpit.com/how-to-stop-android-apps-accessing-your-location [http://perma.cc/ESC3-NKG4] (last visited Nov. 2, 2018).
Hartzog, supra note 1, at 142–46.
Id. at 147, 208.
Gregory Conti & Edward Sobiesk, Malicious Interface Design: Exploiting the User, 19 Int’l World Wide Web Conf. 271, 272 (2010).
Brignull, supra note 2.
Id.; Conti & Sobiesk, supra note 21, at 272.
Conti & Sobiesk, supra note 21, at 272.
Examples of confirmshaming are collected on its very own Tumblr. See, e.g., Confirmshaming, Tumblr, http://confirmshaming.tumblr.com/post/173063603175/the-very-own-turn-off-your-ad-block-message [http://perma.cc/S7CP-GRTY] (last visited Nov. 2, 2018).
See, e.g., Devin Coldewey, Study Calls out ‘Dark Patterns’ in Facebook and Google that Push Users Toward Less Privacy, TechCrunch (June 27, 2018), https://techcrunch.com/2018/06/27/study-calls-out-dark-patterns-in-facebook-and-google-that-push-users-towards-less-privacy/ [http://perma.cc/5PXK-63ZW]; Christopher Mims, Who Has More of Your Personal Data Than Facebook? Try Google, Wall St. J. (Apr. 22, 2018, 8:00 AM), https://www.wsj.com/articles/who-has-more-of-your-personal-data-than-facebook-try-google-1524398401 [http://perma.cc/ZV8U-PAUS] (discussing Google’s dark patterns).
Two other possibilities are “sleaze code” and “evil patterns.”
Mike Isaac, How Uber Deceives the Authorities Worldwide, N.Y. Times (Mar. 3, 2017), https://www.nytimes.com/2017/03/03/technology/uber-greyball-program-evade-authorities.html [http://perma.cc/M6XQ-GCCT].
Rebecca Hill, Facebook Admits: Apps Were Given Users’ Permissions to Go into Their Inboxes, Register (Apr. 11, 2018), https://www.theregister.co.uk/2018/04/11/facebook_admits_users_granted_apps_permission_to_go_into_their_inboxes [http://perma.cc/JS94-MZWK].
Gabriel J.X. Dance et al., Facebook Gave Device Makers Deep Access to Data on Users and Friends, N.Y. Times (June 3, 2018), https://www.nytimes.com/interactive/2018/06/03/technology/facebook-device-partners-users-friends-data.html [http://perma.cc/F7JU-WQA6].
Michelle Madejski et al., A Study of Privacy Settings Errors in an Online Social Network, 2012 IEEE Int’l Conf. on Pervasive Computing & Comms. Workshops 340, 344 (2012); Guilbert Gates, Graphic: Facebook Privacy: A Bewildering Tangle of Options, N.Y. Times (May 13, 2010), https://archive.nytimes.com/www.nytimes.com/interactive/2010/05/12/business/facebook-privacy.html [http://perma.cc/L8MC-VJHB].
Carole Cadwalladr & Emma Graham-Harrison, Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach, Guardian (Mar. 17, 2018), https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election [http://perma.cc/7YFJ-BGLR].
Infra Section II.B.1.
Regulation 2016/679, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data and Repealing Directive 95/46/EC (General Data Protection Regulation), arts. 1–3, 99, 2016 O.J. (L 119) 32–33, 87–88 [hereinafter GDPR].
Cf. Paul M. Schwartz, Preemption and Privacy, 118 Yale L.J. 902, 932 (2009).
15 U.S.C. § 45(a)(1) (2012). This provision expressly does not apply to a few industries: “banks, savings and loan institutions . . . , Federal credit unions . . . , common carriers . . . , air carriers and foreign air carriers . . . , and [those] subject to the Packers and Stockyards Act.” Id. § 45(a)(2).
Infra Section II.B.3.
Infra Section II.B.4.
See GDPR, supra note 39, at arts. 1–3.
Id. at 1.
Id. at 11; Miguel Otero Vaccarello, General Data Protection Regulation, Medium (Feb. 6, 2018), https://medium.com/@miguelovaccarello/general-data-protection-regulation-quick-guide-82445b6931e1 [http://perma.cc/3VLY-D5LP].
William McGeveran, Friending the Privacy Regulators, 58 Ariz. L. Rev. 959, 1020 (2016).
Directive 95/46/EC, of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 5, 1995 O.J. (L 281) 39.
See GDPR, supra note 39, at art. 28.
Guidance on Upcoming New Data Protections Rules Across the EU, EUR-Lex, https://eur-lex.europa.eu/content/news/guidance-for-general-data-protection-regulation-application.html [http://perma.cc/2GX3-8Q3F] (last visited Nov. 2, 2018).
Full disclosure—I served as a Senior Policy Advisor for the FTC for a little less than a year. See Paul Ohm, Geo. Univ. Law Ctr., https://www.law.georgetown.edu/faculty/paul-ohm/ [http://perma.cc/SX7B-9XJR] (last visited Nov. 2, 2018).
For the definitive overviews of the FTC’s privacy work, see generally Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy (2016); and Daniel J. Solove & Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114 Colum. L. Rev. 583 (2014).
15 U.S.C. § 45 (2012).
Letter from Michael Pertschuk et al., Chairman, Fed. Trade Comm’n, to Hon. Wendell H. Ford, Chairman, Consumer Subcomm., Comm. on Commerce, Sci., & Transp. (Dec. 17, 1980), https://www.ftc.gov/public-statements/1980/12/ftc-policy-statement-unfairness [http://perma.cc/8ALW-CPSC].
Solove & Hartzog, supra note 54, at 639–40.
Id. at 628–38.
Press Release, Fed. Trade Comm’n, Eli Lilly Settles FTC Charges Concerning Security Breach (Jan. 18, 2002), https://www.ftc.gov/news-events/press-releases/2002/01/eli-lilly-settles-ftc-charges-concerning-security-breach [http://perma.cc/FEG6-7U9Q].
Complaint, In re Eli Lilly & Co., No. C-4047 (F.T.C. May 8, 2002).
See FTC Policy Statement on Deception, reprinted in In re Cliffdale Assocs., Inc., 103 F.T.C. 110, 174 (1984) (appending Letter from James C. Miller, III, Chairman, Fed. Trade Comm’n, to Hon. John D. Dingell, Chairman, Comm. on Energy & Commerce (Oct. 14, 1983)), https://www.ftc.gov/system/files/documents/public_statements/410531/831014deceptionstmt.pdf [http://perma.cc/CSM2-Z4VX] (defining a deceptive act or practice as a material “representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances”).
E.g., Complaint at 3, In re Epic Marketplace, Inc., No. C-4389 (F.T.C. Mar. 13, 2013).
Hartzog, supra note 1, at 135.
Press Release, Snapchat Settles FTC Charges, supra note 9.
Snapchat Complaint, supra note 11, at 4.
Id. at 2.
Jack M. Balkin, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev. 1183, 1209 (2016); Jack M. Balkin & Jonathan Zittrain, A Grand Bargain to Make Tech Companies Trustworthy, Atlantic (Oct. 3, 2016), https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346/ [http://perma.cc/KB72-A6XN]; Jack M. Balkin, Information Fiduciaries in the Digital Age, Balkinization (Mar. 5, 2014), https://balkin.blogspot.com/2014/03/information-fiduciaries-in-digital-age.html [http://perma.cc/9PJQ-YC2S] [hereinafter Balkin, Digital Age].
Balkin, Digital Age, supra note 71.
Balkin & Zittrain, supra note 71.
Balkin, Digital Age, supra note 71.
Balkin & Zittrain, supra note 71.
E.g., Jonathan Zittrain, The Future of the Internet: And How to Stop It 128 (2008).
Neil Richards & Woodrow Hartzog, Taking Trust Seriously in Privacy Law, 19 Stan. Tech. L. Rev. 431, 468 (2016).
Id. at 470.
Forthright, Oxford Pocket Dictionary of Current English (11th ed. 2009).
Hartzog, supra note 1, at 110 (“[P]rivacy law should ask whether a particular design interferes with our understanding of risks or exploits our vulnerabilities in unreasonable ways with respect to our personal information.”).
Hartzog, supra note 1, at 75–77.
Richards & Hartzog, supra note 78, at 33.
Hartzog, supra note 1, at 75–77.
Id. at 76.
See Brignull, supra note 2; Madejski et al., supra note 35.
Daniel J. Solove, Conceptualizing Privacy, 90 Cal. L. Rev. 1087, 1089 (2002) (“Time and again philosophers, legal theorists, and jurists have lamented the great difficulty in reaching a satisfying conception of privacy.”).
Alan Westin, Privacy and Freedom 7 (1967) (“Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”); Charles Fried, Privacy, 77 Yale L.J. 475, 482 (1968) (“Privacy is not simply an absence of information about what is in the minds of others; rather it is the control we have over information about ourselves.”).
Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880, 1880 (2013) (connecting “notice, access, and consent” legal regimes to concerns about control); see also Fed. Trade Comm’n, Protecting Consumer Privacy in an Era of Rapid Change, at i (2012), http://ftc.gov/os/2012/03/120326privacyreport.pdf [http://perma.cc/BDH4-QYJX]; White House, Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy 47 (2012), https://www.hsdl.org/?view&did=700959 [http://perma.cc/Y4YF-896L].
Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1423 (2000).
Julie C. Inness, Privacy, Intimacy, and Isolation 56 (1992).
James Q. Whitman, The Two Western Cultures of Privacy: Dignity Versus Liberty, 113 Yale L.J. 1151, 1161 (2004).
See Cohen, supra note 95, at 1426.
See Julie E. Cohen, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice 149 (2012).
Inness, supra note 96, at 56; Jeffrey Rosen, The Unwanted Gaze: The Destruction of Privacy in America 8–9 (2000).
Paul M. Schwartz, Privacy and Democracy in Cyberspace, 52 Vand. L. Rev. 1609, 1648 (1999).
See Helen Nissenbaum, Privacy in Context: Technology, Privacy, and the Integrity of Social Life 103, 140 (2009).
Id. at 129.
Id. at 140.
Balkin & Zittrain, supra note 71.
Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe 241 (2015).
Id. at 8.
Solove & Hartzog, supra note 54, at 627.
Richards & Hartzog, supra note 78, at 447, 468.
Bamberger & Mulligan, supra note 106; Nissenbaum, supra note 102, at 150, 235–37.
See Julie Brill, Comm’r, Fed. Trade Comm’n, Data Protection and the Internet of Things: Keynote Address for Euroforum European Data Protection Days, at 5 (May 4, 2015), https://www.ftc.gov/system/files/documents/public_statements/640741/2015-05-04/euroforum_iot_brill_final.pdf [http://perma.cc/BZ29-MZFJ] (former FTC Commissioner citing “privacy on the ground”); Alexis Madrigal, The Philosopher Whose Fingerprints Are All Over the FTC’s New Approach to Privacy, Atlantic (Mar. 29, 2012), https://www.theatlantic.com/technology/archive/2012/03/the-philosopher-whose-fingerprints-are-all-over-the-ftcs-new-approach-to-privacy/254365 [http://perma.cc/YG57-Z3Q7] (discussing influence of Nissenbaum’s “Privacy in Context”).
Omer Tene, The U.S.-EU Privacy Debate: Conventional Wisdom is Wrong, Int’l Ass’n Privacy Prof’ls (Mar. 4, 2014), https://iapp.org/news/a/the-u-s-eu-privacy-debate-conventional-wisdom-is-wrong/ [http://perma.cc/4TY3-KZPB].
See Amended and Restated By-Laws of the International Association of Privacy Professionals, Inc., Int’l Ass’n Privacy Prof’ls, https://iapp.org/media/pdf/about_iapp/IAPP Bylaws - Amended JUNE 2015.pdf [http://perma.cc/6G2Q-BRMG] (last visited Nov. 2, 2018).
Doron S. Goldstein, Understanding the EU-US “Privacy Shield” Data Transfer Framework, 20 J. Internet L. 17, 21 (2016).
Id. at 17.
See Sidley Austin L.L.P., Essentially Equivalent: A Comparison of the Legal Orders for Privacy and Data Protection in the European Union and the United States 152–55 (2016), https://datamatters.sidley.com/wp-content/uploads/2016/01/Essentially-Equivalent-Final-01-25-16-9AM3.pdf [http://perma.cc/6YZV-AKAW].
Bamberger & Mulligan, supra note 106, at 8.
Nissenbaum, supra note 102, at 147–50.
White House, supra note 94, at 9–10.
Marcia Hofmann, Obama Administration Unveils Promising Consumer Privacy Plan, but the Devil Will Be in the Details, Elec. Frontier Found. (Feb. 23, 2012), https://www.eff.org/deeplinks/2012/02/obama-administration-unveils-promising-consumer-privacy-plan-devil-details [http://perma.cc/QA7Z-BU3A] (“EFF applauds the principles underlying the White House proposal and believes it reflects an important commitment to safeguard users’ data in the networked world without stifling innovation.”).
See Brendan Sasso, Obama’s ‘Privacy Bill of Rights’ Gets Bashed from All Sides, Atlantic (Feb. 27, 2015), https://www.theatlantic.com/politics/archive/2015/02/obamas-privacy-bill-of-rights-gets-bashed-from-all-sides/456576/ [http://perma.cc/33LY-M2A9] (collecting critiques of legislation proposed to implement report’s recommendations).
White House, supra note 94, at 9–10.
Id. at 15–19 (proposing a “Respect for Context” FIP).
Id. at 1.
See id. at 16 (“[W]hile [the Respect for Context] principle emphasizes the importance of the relationship between a consumer and a company at the time consumers disclose data, it also recognizes that this relationship may change over time in ways not foreseeable at the time of collection.”).
See Bamberger & Mulligan, supra note 106, at 8.
See Nissenbaum, supra note 102, at 142.
See Shannon Liao, Twitter Will Start Banning Cryptocurrency Ads Tomorrow, Verge (Mar. 26, 2018, 12:44 PM), https://www.theverge.com/2018/3/26/17164426/crypto-twitter-ban-bitcoin-cryptocurrency-ads [http://perma.cc/9ZJ7-C3UC].
See Isaac, supra note 29.
See Rosenberg, supra note 16.
Brignull, supra note 2.
See supra Section II.B.3.
See supra Section II.B.4.
For a review of only some of this literature, see David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 655–69 (2017).
See Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information 57 (2015).
See Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 19 (2014).
See Barocas & Selbst, supra note 3, at 683; see also Lehr & Ohm, supra note 135, at 710–11.
Barocas & Selbst, supra note 3, at 683–84.
See Kate Crawford, Think Again: Big Data, Foreign Pol’y (May 10, 2013, 12:40 AM), http://www.foreignpolicy.com/articles/2013/05/09/think_again_big_data [http://perma.cc/J99X-27SX].
See Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 29 (2016).
See id. at 7.
See id. at 25–26.
See ACM Conference on Fairness, Accountability, and Transparency (ACM FAT), ACM FAT Conf. (Oct. 23, 2018), https://fatconference.org/ [http://perma.cc/4S8D-2CFK]. The conference used to be known as FATML, for “Fairness, Accountability, and Transparency in Machine Learning,” but it changed its name to FAT* to encompass species of artificial intelligence beyond machine learning. Cf. Fairness, Accountability, and Transparency in Machine Learning, FAT/ML, http://www.fatml.org [http://perma.cc/N85V-H2W6] (last visited Nov. 2, 2018).
It’s important to avoid anthropomorphizing artificial intelligence. Because “intelligence” is baked right into the phrase, and because we’ve all been subjected to science-fiction tales of humanoid robots with distinct personalities, we tend to craft prescriptions that try to map human values and attitudes onto computer programs. The FAT acronym exemplifies this tendency well because each of the three letters can apply to the running code, the human operator, or both. Transparency seems mostly to apply to the AI: the running code or the machine learning algorithm. We worry about the switch to AI-style reasoning because it is sometimes difficult, or maybe even impossible, to lift the lid on the black box of decision-making to reveal what is happening for human review. Accountability seems mostly directed at the human operator: Is the human system in which the AI plays a role being held accountable for the way it decides to fire a schoolteacher, deny an indicted person’s request for bail, or autonomously make a right-hand turn? Fairness is the trickiest because it seems to apply to the human, the machine, and the system, all at the same time. Forthrightness is a value of humans, so I will do my best not to anthropomorphize the code, although I am sure I will slip. I will try to focus on ensuring that the human beings (and the system of which they are a part) make forthright decisions about the use and deployment of AI.
See Citron & Pasquale, supra note 137, at 18–27.
Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. (forthcoming 2018) (manuscript at 31), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3126971 [http://perma.cc/MWG7-34S2]; see also Michael Gleicher, A Framework for Considering Comprehensibility in Modeling, 4 Big Data 75, 77 (2016); Riccardo Guidotti et al., A Survey of Methods for Explaining Black Box Models, https://arxiv.org/pdf/1802.01933.pdf [http://perma.cc/EA5G-3EDA] (forthcoming).
See Selbst & Barocas, supra note 147 (manuscript at 26–33).
See @jackyalcine, Twitter (June 28, 2015, 6:22 PM), https://twitter.com/jackyalcine/status/615329515909156865 [http://perma.cc/3MSB-PSPY].
See Katharine Schwab, The Dead-Serious Strategy Behind Google’s Silly AI Experiments, Fast Co. (Dec. 1, 2017), https://www.fastcompany.com/90152774/the-dead-serious-strategy-behind-googles-silly-ai-experiments [http://perma.cc/ZXS2-PRMS] (discussing Google’s “Gorilla” incident that sparked debate about whether AI could be unbiased, and the steps that Google is taking to show that it is a responsible developer).
Tom Simonite, When It Comes to Gorillas, Google Photos Remains Blind, Wired (Jan. 11, 2018, 7:00 AM), https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/ [http://perma.cc/XW2N-ALWX].
See O’Neil, supra note 141.
See id. at 7, 25–26.
See Harrison Rudolph et al., Not Ready for Takeoff: Face Scans at Airport Departure Gates, Geo. L. Ctr. on Privacy & Tech. (Dec. 21, 2017), https://www.airportfacescans.com/ [http://perma.cc/8SBV-EEJ6].
Terminator 2: Judgment Day (Carolco Pictures 1991).
James Vincent, Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less Than a Day, Verge (Mar. 24, 2016, 6:43 AM), https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist [http://perma.cc/FT7H-2H4S].
See Liam Tung, Microsoft’s Tay AI Chatbot Goes Offline After Being Taught to Be a Racist, ZDNet (Mar. 25, 2016, 12:53 PM), https://www.zdnet.com/article/microsofts-tay-ai-chatbot-goes-offline-after-being-taught-to-be-a-racist/ [http://perma.cc/5G9H-7CJT].
@yonatanzunger, Twitter (June 29, 2015, 11:19 AM), https://twitter.com/yonatanzunger/status/615585375487045632 [http://perma.cc/8SL6-EV69].
@yonatanzunger, Twitter (June 29, 2015, 11:21 AM), https://twitter.com/yonatanzunger/status/615585776110170112 [http://perma.cc/Y4ZQ-C3R2].
David Murphy, Microsoft Apologizes (Again) for Tay Chatbot’s Offensive Tweets, PCMag (Mar. 25, 2016, 7:17 PM), https://www.pcmag.com/news/343223/microsoft-apologizes-again-for-tay-chatbots-offensive-twe [http://perma.cc/X62L-TE5F].
WarGames (United Artists 1983).