I Smell a Bot:[1] California’s S.B. 1001, Free Speech, and the Future of Bot Regulation
I. Introduction
Among the most revolutionary changes wrought by widespread internet use is the disruption of hierarchies within the marketplace of ideas.[2] In the age of social media egalitarianism, individual viewpoints are far less constrained by the gatekeeping of traditional media. On one hand, this new reality has benefits—a lack of top-down control allows for more consumer choice and fosters a more diverse range of voices. But there are also downsides—while media was once primarily legitimized by the editorial process, the credibility of content is now tied to a populist notion of virality. As Americans increasingly rely on online peer-to-peer interaction for accessing information and engaging in commerce, an emerging technology, online bots, may be used to exploit trust in online identities, defraud consumers, and stoke political tensions.[3]
In reaction, California lawmakers passed Senate Bill No. 1001 (“S.B. 1001”), a transparency law that regulates the use of bots but also raises unique First Amendment questions.[4] The purpose of this Comment is to explore the free speech issues raised by S.B. 1001, evaluate the likelihood that the law survives a constitutional challenge, and suggest ways to improve future bot regulation. Part II will define bots, discuss why their regulation may be necessary, and address the competing approaches to such regulation. Next, Part III will analyze First Amendment issues that may arise should S.B. 1001 be challenged in court. Finally, Part IV will make two recommendations. First, lawmakers could avoid the appearance of expressly targeting political content (reducing the risk of a successful free speech challenge) and retain the democratic benefits of S.B. 1001 by expanding the commercial prong of the statute. Second, lawmakers could better achieve the goal of regulating bots by moving beyond transparency. Overall, this Comment concludes that, while S.B. 1001 may have the effect of chilling some expression, the law has an overall net-positive effect on the marketplace of ideas and will likely survive a First Amendment challenge in its current form. However, this Author believes that a regulatory scheme based on transparency alone may prove inadequate in the long term.
II. Framing the Issue
A. Defining Bots
In general terms, bots are automation programs.[5] Whereas robots are hardware programmed to perform automated tasks in physical space, bots are software programmed to perform automated tasks in digital space.[6] Bots come in different shapes and sizes; for example, web crawlers provide search results,[7] social media bots create automated posts,[8] and helper bots guide users toward solutions.[9] While some bots may interface exclusively with other computer systems, and others may communicate directly with human users, all bots have one thing in common—they act as agents in service of their human controllers.[10]
Though they may be programmed with artificial intelligence (AI), many bots can perform complex automation without the use of machine learning or other advanced techniques. For example, chatbots, programs that “engage[] with users by replicating natural language conversations,”[11] do not require AI to replicate human conversation; many use “pattern matching,” a process whereby a program is given “rules to govern the system’s response to given inputs.”[12] MIT professor Joseph Weizenbaum, the creator of ELIZA, an early conversation program developed between 1964 and 1966, compared ELIZA’s pattern matching to an actor using a script to “improvise around a certain theme.”[13] Because chatbots do not require advanced technology to functionally interface with humans, and because today’s technology users are especially comfortable with chat functions, some technology experts foresee chatbots playing a significant role in the future of user interfaces more generally.[14]
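To make the mechanism concrete, the following is a minimal sketch of ELIZA-style pattern matching in Python; the rules, response templates, and function names are illustrative assumptions rather than Weizenbaum’s original script.

```python
import re

# A hypothetical, minimal set of ELIZA-style rules: each entry pairs a
# regular expression with a canned response template. No machine learning
# is involved; the program simply maps recognized input patterns to
# scripted replies.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]

DEFAULT_REPLY = "Please, go on."


def respond(user_input: str) -> str:
    """Return the scripted reply for the first rule that matches the input."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY


if __name__ == "__main__":
    print(respond("I need a vacation"))         # Why do you need a vacation?
    print(respond("Tell me about my mother."))  # Tell me more about your mother.
    print(respond("The weather is nice."))      # Please, go on.
```

Because each reply is selected by a fixed rule rather than generated by a learned model, even a short script like this can sustain a superficially human exchange.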
As a broad category, bots may seem an unwieldy technology use case for government regulation. However, S.B. 1001 provides a narrower view of bots, defining them as “automated online account[s] where all or substantially all of the actions [performed] or posts [generated] are not the result of a person.”[15] This definition does not identify a bot by the nature of its software, but instead by its outward appearance and, implicitly, by its purpose. Though “online accounts” generally serve an identification purpose for system administrators, social media and online shopping accounts in particular have popularized the use of “profiles” for user identification.[16] Commonly, an online human is identifiable by a screen name, profile picture, short bio, and possibly a few other identifying pieces of information.[17] The purpose of S.B. 1001 is to regulate accounts that have those basic features and may, at first glance, seem human, but produce automated content.[18] Importantly, California’s bot definition allows for at least some amount of direct human control.
B. Current Issues and Proposed Regulation
Although bots have existed on the internet since the birth of Internet Relay Chat in 1988,[19] their proliferation throughout social media is now under the microscope following reports by U.S. intelligence services that foreign agents used social media to influence the 2016 U.S. presidential election.[20] The present legal and ethical conversation concerning bots is in reaction to their effect on social media, but it has become increasingly clear that bots’ influence may extend to other commercial and political environments.[21] This Section will discuss problems with unregulated bots and proposed regulations.
1. Why Regulate Bots?
Bots are problematic for the same reason that they are useful—they amplify the power and efficiency of a single person.[22] Because a human’s ability to use computer systems is limited by the speed at which commands can be physically input,[23] bots can be programmed to perform computer-based tasks more quickly and reliably than any human working without such automation.[24] Even without the use of advanced AI technology, a single bot’s increased efficiency relative to a group of human users makes bot use lucrative and ripe for abuse.[25] Specifically, experts express concerns that bots can be used to fraudulently manufacture consensus and defraud consumers.[26]
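To make that asymmetry concrete, consider the following minimal sketch, in which a single operator’s script generates a volume of identical posts that no individual could produce by hand; the account names and the post_update function are hypothetical stand-ins rather than any platform’s actual interface.

```python
import time

# A purely illustrative sketch of why automation amplifies a single operator:
# one short loop can emit as many posts as hundreds of human users typing by
# hand. The account names and post_update() are hypothetical stand-ins,
# not any real platform's API.


def post_update(account: str, message: str) -> None:
    # A real bot would call a platform's publishing endpoint here;
    # printing keeps the example self-contained.
    print(f"[{account}] {message}")


def run_campaign(accounts: list[str], message: str, repeats: int) -> int:
    """Post the same message from every controlled account and return the total count."""
    sent = 0
    for _ in range(repeats):
        for account in accounts:
            post_update(account, message)
            sent += 1
        time.sleep(0.1)  # pacing between waves; still far faster than manual typing
    return sent


if __name__ == "__main__":
    bots = [f"user_{n:03d}" for n in range(250)]  # 250 hypothetical accounts
    total = run_campaign(bots, "This product changed my life!", repeats=4)
    print(f"{total} posts generated by a single operator")  # 1000 posts
```

The point is not the code itself but the scale: one person running the loop speaks with the apparent voice of hundreds of users.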
Manufactured Consensus. First, research shows that internet communities are effective environments for building consensus among individual users,[27] and bots can be used to manipulate that effect. The major social networks, in particular, are founded on the idea that organic sharing can have a viral effect, diffusing content to an exponential number of users beyond the original user’s immediate network of friends (on Facebook) or followers (on Twitter).[28] Because it is possible for a single independent producer to widely distribute content based on shares and trending tags, bots programmed to share specific content can be used “to create a bandwagon effect, to build fake social media trends . . . and even to suppress the opinions of the opposition.”[29] And this effect is not limited to social media ecosystems; one particularly worrisome example of non-social-media bot influence occurred on a federal website.[30] During the Federal Communications Commission’s period for public comment on recent net neutrality rulemaking, bots contributed hundreds of thousands of comments in support of positions that had relatively little public support.[31] In a different vein, the popularity and relative power of commercial review sites such as Rotten Tomatoes and Yelp have made such sites targets for bot influence.[32] Other sensitive online processes may be targeted by similar efforts.[33]
Fraudulent Bots. Bots may also be used to defraud consumers looking to purchase the kind of online influence described above. The value of the actual bandwagon effect aside, the mere appearance of organic impressions is a valuable commodity in internet commerce but is often fraudulently produced via bot technology.[34] In a recent interview, Robert Hertzberg, a California State Senator and author of S.B. 1001, gave three examples of how bots can be used to defraud consumers.[35] First, services that promise to help users or businesses grow their social media reach may use bot accounts to pad follower or subscriber counts.[36] For example, Instagram has tacitly allowed the buying and selling of followers and likes, though it may now be using its own “good” AI to weed out the fakes.[37] Second, companies seeking an IPO or individuals seeking employment as influencers may misrepresent the scope of their organic reach by providing follower or subscriber counts padded by bots.[38] Third, bot-boosted false news stories can be used to increase ad revenue for their hosts.[39]
As is the case with many regulated technologies, dangers inherent to the technology are magnified by external factors—here, the growth of internet commerce and the commodification of personal data.[40] As we do more business, store more data, and live more of our lives online,[41] marketers, governments, and influencers benefit from a treasure trove of information previously unavailable or too difficult to collect in the physical world.[42] Each link clicked, product placed in a shopping cart, and “follow” is valuable when aggregated.[43] Because the value of our personal data is so high, the largest social networks and retailers are incentivized to automate user interactions to efficiently collect as much as possible.[44]
2. California S.B. 1001
On September 28, 2018, Governor Jerry Brown signed Senate Bill 1001,[45] making California the first U.S. jurisdiction to promulgate a bot law.[46] The substantive portion of the Bill, which went into effect in July 2019, reads as follows:
It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.[47]
S.B. 1001 represents one possible method for regulating bots: state-compelled bot-user transparency.
First, S.B. 1001 is state legislative action.[48] Courts have acknowledged the internet as an “instrument of interstate commerce,”[49] making regulation of its activities, presumably including the use of bots, subject to Congress’s authority under the Commerce Clause.[50] However, several states have promulgated legislation to control internet communication in the past, specifically targeting sexually explicit materials and “spam.”[51] Although state-based regulation of the internet raises significant federalism questions, a Dormant Commerce Clause analysis is a topic best left for another comment.
Next, S.B. 1001 targets bot “users.” Perhaps the most important policy choice for lawmakers is where to focus bot-control efforts: on the individual programmers responsible for a bot’s creation, on the end users of the software, or on the online sites and social networks on which bots operate. The ultimate decision by California lawmakers to target users reflects some of the practical hurdles of regulating the internet more generally. On one hand, a platform-focused approach may require a more coordinated legal effort because many institutional barriers exist to protect online service providers.[52] On the other hand, focusing on creators and users of bots is more direct, though achieving proper jurisdiction over individual internet users may be extremely difficult.[53] Notably, S.B. 1001 makes it unlawful for those who “use” bots but does not create liability for those who create bots that are then used for purposes deemed unlawful under the Bill.[54]
Finally, S.B. 1001 is a transparency law. S.B. 1001 requires that the disclosure imposed by the law be “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates . . . .”[55] Regulating bots through user transparency is enticing to lawmakers; in principle, transparency strikes a balance between the politically polarized American values of freedom of contract and consumer protection.[56] But there are limits to transparency’s effectiveness—some commentators argue that “transparency places too much of the burden on users to understand the information being disclosed to them and to take appropriate responsive actions.”[57] Additionally, transparency regimes with “one-size-fits-all applicability” may cause unavoidable First Amendment challenges.[58] Setting aside these benefits and costs, does society actually benefit from increased transparency alone? In his recent article titled Forthright Code, Paul Ohm suggests imposing a stronger standard that “focus[es] on the actions of the entity developing or using the AI system, rather than [simply] describ[ing] a desirable attribute of the system itself”—forthrightness.[59] In the context of AI and bots, creating an “obligation of forthrightness”[60] may even involve establishing “information fiduciar[y]” relationships between the large internet platforms and consumers.[61]
III. First Amendment Analysis
According to California Senate committee reports, early critics of S.B. 1001 were particularly concerned that the regulation might stifle free speech.[62] Since passage, Robert Hertzberg, the Bill’s author, has expressed confidence that the final Bill would pass constitutional muster.[63] Notwithstanding the amendments made before final passage, S.B. 1001 still presents two surface-level issues: First, the law places limitations on how bots, and their users, by extension, can communicate with California citizens; second, S.B. 1001 compels bots, and their users, again by extension, to speak.[64] These issues are further complicated by the machine-generated qualities of the regulated conduct and varying levels of bot autonomy.
Because U.S. courts have not yet ruled on a First Amendment bot-speech controversy,[65] this Comment will consider caselaw from adjacent areas of First Amendment jurisprudence as well as scholarly suggestions of how courts may handle the emerging world of AI and technology law. In order to analyze whether S.B. 1001 would survive a hypothetical First Amendment challenge, this Part will explore three broad questions: First, should the conduct regulated by S.B. 1001 receive First Amendment coverage? Next, if the regulated expression is covered, what kind of scrutiny should the regulation receive? Finally, would S.B. 1001 survive strict scrutiny?
A. Should Conduct Regulated by S.B. 1001 Receive First Amendment Coverage?
When speech-related laws are challenged, courts must first determine whether the regulated expression actually triggers a First Amendment analysis.[66] As mentioned above, S.B. 1001 regulates two types of expression: a bot communicating with a human and the bot disclosing its bot-ness. Setting aside S.B. 1001’s disclosure requirement, the regulated conduct alone, “us[ing] a bot to communicate or interact with another person in California online,”[67] will likely trigger First Amendment coverage. Although U.S. courts have interpreted the First Amendment to cover emerging modes of communication,[68] the Supreme Court has hesitated to create wholly new categories of protected speech,[69] choosing instead to draw simple analogies between new modes and traditional protected speech.[70] Experts in the growing field of AI law argue that determining First Amendment coverage for machine-generated speech may require more than a simple analogy;[71] this Section adapts Tim Wu’s coverage framework for machine speech to determine whether the conduct regulated by S.B. 1001 should be covered under a First Amendment analysis: We must ask, first, whether the right is claimed by a “person”; next, whether the regulated expression is considered “speech”; and finally, whether the regulated expression categorically triggers a First Amendment analysis.[72]
1. Personhood
Though free speech theory has now incorporated more than just individual persons,[73] there are limits to who can claim a free speech right.[74] Unlike strong AI personhood, which may require rethinking how autonomy functions within free speech,[75] bot personhood more cleanly comports with established free speech jurisprudence; the Supreme Court has traditionally downplayed strict personhood[76] and recognized protections for speech through technological vessels.[77] Ignoring the possibility of fully autonomous bots, when a bot engages a human, the bot’s “behavior” is a direct result of a programmer’s code;[78] thus, courts are more likely to find the programmer to be the true “speaker” instead of the machine itself.[79]
However, the fact that S.B. 1001 targets only bot “users” complicates the personhood analysis. The conduct regulated by S.B. 1001 includes two separate expressions, meaning the law could potentially infringe on two separate speech interests, including interests not directly tied to the user. Determining who has an interest in each or both types of expression can become complicated. For example, programmers may have a free speech interest in both the code necessary for the bot to interact with humans and the required disclosure if the bot is programmed to disclose itself; on the other hand, a nonprogrammer bot user may have a free speech interest in any expression that directs the bot[80] or any incidental text included in a bot profile to meet the disclosure requirement. And what happens if the user is the programmer, or if there are multiple users and multiple programmers? Tracing each expression back to its source may be necessary before an individual can make a First Amendment claim.
Another issue of personhood may arise in a future S.B. 1001 challenge if, for example, a party responding to a bot enforcement action challenges the validity of the law under the First Amendment and also claims to not be liable for the semi-autonomous actions of its computer program. If a programmer or user maintains even a scintilla of control over the actions of a bot, are those actions not still in some way the expression of the programmer? Until it is possible to credibly argue that speech can be completely generated by a machine, the bot programmer/user will be able to claim personhood for the purposes of triggering constitutional protection. However, if a First Amendment challenge of S.B. 1001 requires a party to admit to some amount of direct control over a bot, that party may risk proving its own unlawful intent to use the bot.
2. Is Bot Expression Speech?
Some of S.B. 1001’s disclosure options (textual disclosure by the user in particular) should simply be treated as traditional textual speech, but a bot’s machine-generated communication poses interesting new questions for courts. Speech, for the purposes of the First Amendment, can be defined more broadly than simple “spoken or written word.”[81] But, because not all communication is speech,[82] the Supreme Court has developed ways of determining which communications qualify as speech.[83] While some relatively new forms of communication, such as videogames, are assumed to be speech because of their similarity to traditional speech,[84] forms of speech that are not easily compared to traditional forms may be analyzed with the Spence Test, which examines whether “[a]n intent to convey a particularized message was present, and in the surrounding circumstances the likelihood was great that the message would be understood by those who viewed it.”[85] The Spence Test may be necessary to analyze the kinds of expression covered by S.B. 1001; for example, a bot need only “interact” with a California resident online to trigger the law.[86] Does a friend request by a bot on Facebook convey a “particularized message”?[87] Would a person understand a friend request from a profile to mean that the profile is operated by a person? Maybe.
On a more foundational level, the output generated by a bot is the direct result of programming, and software code takes the same written form as traditional written expression.[88] As such, courts may consider bot code, like encryption source code,[89] to be speech simply because it is a fixed medium of expression.[90] However, because “there is independence between source code and program behavior”[91] and because a bot program’s substantive output (a Twitter post, a chat reply, a comment in a forum) is not always a direct result of the code, bot code may need to be characterized as a hybrid of traditional, functional, and symbolic speech.[92] Section III.B.2 will consider how communication generated through the “black box” of bots is similar to using cash as “speech-by-proxy.” Depending on the complexity of a particular piece of bot software, it seems likely that a court would consider bot expression to be the speech of whoever has input the parameters necessary to generate that expression.
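The independence between code and output can be illustrated with a toy sketch: the program below is fixed text, yet what the bot “says” is determined by whatever content it retrieves at run time. The fetch_trending_posts and compose_reply names are hypothetical, chosen only for illustration.

```python
# A toy illustration, under assumed inputs, of the independence between
# source code and program behavior: the code is fixed, but the bot's
# substantive output depends entirely on content posted by other users.
from typing import List


def fetch_trending_posts() -> List[str]:
    # Stand-in for a live platform query; in practice the results change
    # from moment to moment and are outside the programmer's control.
    return ["Candidate X held a rally today", "New phone released this week"]


def compose_reply(posts: List[str]) -> str:
    """The fixed 'source code' side: always echo the first trending post."""
    if not posts:
        return "Nothing is trending right now."
    return f"Everyone is talking about this: {posts[0]}"


if __name__ == "__main__":
    # Same code, different runs, different speech: the output text is decided
    # by the fetched posts, not by anything written in this file.
    print(compose_reply(fetch_trending_posts()))
```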
3. Categorical Inclusion/Exclusion
Another way to analyze First Amendment coverage is to look at categories of speech that the Supreme Court has found to fall within or outside of free speech scrutiny.[93]
Otherwise Unlawful Speech. The First Amendment does not provide absolute protection for all expression; certain categories of speech are considered unprotected.[94] Perhaps the simplest way for a bot law to avoid a constitutional challenge would be to specifically target otherwise unlawful speech that courts have recognized as regulatable, such as “true threats”[95] or fraudulent speech.[96] Of the general ways mentioned above in which S.B. 1001 may chill free speech,[97] the regulation of deceptive influence on commercial transactions likely falls into a categorical exclusion from free speech coverage. For example, S.B. 1001 uses “intent to mislead”[98] as a qualifier for the targeted bot speech; this both combats possible overbreadth arguments[99] and attempts to frame the speech as unlawful and unprotected in the first place. Though the inclusion of an intent element does not magically repel First Amendment scrutiny,[100] it may persuade a reviewing court that the regulated behavior is not worthy of protection.[101] Even if a court considers a regulation of deceptive commercial speech, for example, ripe for First Amendment analysis, the targeting of such unlawful speech will help prove a government’s interest in promulgating the law.[102] Unless a state seeks to regulate only consumer deception[103] or hate speech,[104] the inclusion of political purposes will certainly sink any claim of categorical exclusion for otherwise unlawful speech. For this reason, S.B. 1001, which includes a political prong, will not avoid a free speech challenge by categorical exclusion alone.
Functionality. Within the scope of symbolic conduct, the Supreme Court explicitly distinguishes between expressive communication and functional communication.[105] In the extreme, conduct that is all function and no expression will not receive First Amendment coverage,[106] but where should the functionality line be drawn, and can an argument be made that programming a bot is more functional than expressive?[107] Some commentators suggest that certain types of functionality may be dispositive in triggering First Amendment analysis.[108] For example, Tim Wu offers two ways to analyze the relationship between programmers and machine speech as functional expression: conduits/carriers and communication tools.[109]
First, conduits/carriers “handle or transform” information without directly controlling the content of the expression.[110] For example, television broadcast networks exercise “editorial discretion” over the content of their broadcasts, which triggers First Amendment coverage,[111] while cellular networks merely provide a conduit for expression to move within the service, which does not.[112] Depending on how a bot is programmed and to what extent the bot operates autonomously, an argument that a bot is simply broadcasting with “editorial discretion” may be credible. For example, if a social media bot violates S.B. 1001 but expresses itself only through content generated by social media users other than the bot’s programmer/user, that programmer/user could claim that the bot is merely broadcasting the views of others. Furthermore, any control that the programmer/user has to shape the tone of the broadcast content is within the “editorial discretion” of a network operator. On the other hand, a “noneditorial” common carrier argument would only be possible for bots with either complete autonomy or a large number of programmers/users.
The second way to analyze functionality in bot expression is to characterize it as a mere communicative tool.[113] The Ninth Circuit has declined to apply First Amendment analysis to maps and navigational charts,[114] and at least one other court has avoided the constitutional question of whether Google Maps directions could be considered protected speech.[115] Bots, in their most common form, are essentially tools for extending human access to retrievable information,[116] so it is plausible to frame the programming of a bot, not as expression, but as the design of a tool that functionally provides information to users.[117] For example, if a bot user were to unlawfully use a telemedicine helper bot to obtain medical data from human users, California could argue that the intentional deception falls under S.B. 1001, while the bot expression was intended to be functional in its capacity to elicit responses from the human user. However, because S.B. 1001 was written in such a way to avoid sweeping up bot users who have no intention to deceive, California would only be able to use a functionality argument against a small sliver of bots—those that are both functional and misleadingly human.
Other commentators suggest that because software source/object code “yields both a highly functional result and some measure of fixed expression,”[118] similar to the burning of a draft card,[119] the existence of functional elements should not preclude the finding of a speech interest. After all, the presence of a “language” within a bot’s programming ensures that a court will consider its final expression to be at least partially based on fixed speech[120] and minimally expressive.[121] If S.B. 1001 is challenged by the user of a bot that in some way aggregates human expression or collects data under the guise of a functional service, California could be successful in avoiding First Amendment coverage.
B. Does S.B. 1001 Touch Protected Speech?
Assuming California is unsuccessful at classifying the conduct regulated by S.B. 1001 as outside of First Amendment coverage, the next question is whether S.B. 1001 touches protected speech—the answer will determine the level of scrutiny the law should receive.[122] Beginning with United States v. Carolene Products Co.,[123] “strict scrutiny,”[124] the Court’s highest level of scrutiny, has come to symbolize the “jurisprudential distinction between ordinary rights” and those “liberties entitled to more stringent judicial protection.”[125] Thus, it is rare that a law restricting free speech, a fundamental American right, survives this scrutiny.[126] To survive strict scrutiny, a state must both carefully draft its legislation and make strategic arguments before the court.[127] In our hypothetical challenge, the level of scrutiny applied by the court may be based on whether S.B. 1001 is found to regulate content-based speech, political speech, or compelled speech.
1. Content-Based Speech
S.B. 1001 provides boundaries for how bots interact with humans, but some may argue that it actually controls the content of bot expression. When legislation touches speech, courts will “distinguish between content-based and content-neutral regulations of speech.”[128] For example, there is a difference between regulating what words can be printed on a sign and where that sign can be located; both regulations touch speech, but only one directly controls the content. On this issue, the U.S. Supreme Court’s First Amendment jurisprudence is clear: content-based regulation of speech is “presumptively invalid.”[129] Because courts judge content-based laws under strict scrutiny[130] but content-neutral laws under a lesser standard,[131] the ultimate fate of legislation can hinge on the ability of a state to portray its legislation as content-neutral.[132] Drawing the distinction between content-based and content-neutral is not always straightforward,[133] but the essential question is “whether the government has adopted a regulation of speech because of [agreement or] disagreement with the message it conveys.”[134]
California’s strongest argument is that S.B. 1001 targets a particular modality rather than a particular viewpoint.[135] Restricting the ability of bots to pose as humans online, like restricting the use of shooting-target effigies, may incidentally regulate certain content commonly associated with the modality, but it arguably does not target the content itself.[136] And unlike in Carey v. Brown, where an Illinois law prohibited residential picketing unless it involved a labor dispute at a place of employment,[137] there is no specific viewpoint that would allow a bot user to circumvent regulation—only a factual disclosure by the bot or bot user. Opponents may argue that requiring disclosure of an automated account effectively creates a content-based restriction because S.B. 1001 discriminates in favor of bot-human interaction that includes speech about the nature of the bot. But, unlike a nondisclosure rule, a disclosure requirement does not suppress a bot user’s viewpoint.[138]
Using the comparisons to broadcasters discussed in the previous Section, California may also be able to compare S.B. 1001 to legislation requiring cable operators to carry local stations. In Turner Broadcasting System v. FCC, the Supreme Court held that the “must-carry” provisions of the Cable Television Consumer Protection and Competition Act of 1992, which required cable operators to carry local broadcast stations, were content-neutral and deserving of intermediate scrutiny.[139] On its face, a law that has the effect of interfering with the editorial discretion of a content provider can be considered content-neutral if the interference “does not depend upon the content.”[140] The requirement of S.B. 1001, that a bot not interact with a human with intent to mislead about its artificial nature without disclosure, may interfere with the bot user’s presentation of the bot’s output, but it does not directly discriminate against the viewpoints of the bot user.
The question of whether a regulation is content-neutral is also related to governmental motive. In R.A.V. v. City of St. Paul, Justice Scalia stated that “[t]he government may not regulate [speech] based on hostility—or favoritism—towards the underlying message expressed.”[141] This statement gives credence to the idea that a governmental motive analysis is a question of both coverage and protection,[142] embracing the “negative view” of the First Amendment, “the need to constrain the government’s potentially dangerous exercise of control over expression.”[143] In a 1996 article, then-Professor Elena Kagan suggested a helpful definition of impermissible motive: “[T]he government may not restrict expressive activities because it disagrees with or disapproves of the ideas espoused by the speaker; it may not act on the basis of a view of what is a true (or false) belief or a right (or wrong) opinion.”[144]
California may argue that requiring bot users to disclose the use of bots is still content-neutral because it is motivated by a content-neutral purpose.[145] In legislative reports concerning S.B. 1001, lawmakers made a point to distinguish between two possible motives for regulating bots: instead of being motivated by the assumption that “the artificial identity of a bot makes the speech inherently less trustworthy,” California lawmakers believe that “certain communications made by persons through bots are intended to mislead and disclosures of such bot-made or bot-facilitated speech are intended to alert and inform users so that the users can more readily filter out those misleading communications from the nonmisleading variety.”[146] In this way, S.B. 1001 is concerned with the “how” of online communication, not the “what.” Under the law, bots and, by extension, programmers/users may freely express any viewpoint as long as it is clear that a bot is speaking. Yet, the burden of a compelled disclosure is not imposed on bots that express messages unrelated to commercial transactions or politics.
Even if S.B. 1001 may be characterized as discriminatory against certain content, at least some of that content (deceptive commercial speech) has traditionally been considered unprotected under the First Amendment.[147] California has a strong argument that a regulation of deceptive bots, though perhaps a “novel restriction on content,” is part of a “tradition of proscription.”[148] If opponents of the law are persuasive that the disclosure requirement indirectly causes viewpoint discrimination, the state may still have a compelling interest in regulating such content.
2. Political Speech
The next major distinction that could influence whether S.B. 1001 receives strict or lesser scrutiny is the political nature of the restriction.[149] While the commercial harm to be addressed by S.B. 1001—deception as to whether a human is engaging in a commercial transaction—is arguably produced by speech in pursuit of solely unlawful ends,[150] the democratic/political harm—bots posing as humans to influence votes—is produced by speech in pursuit of both lawful and unlawful ends, thus touching at least some protected speech.[151] However, S.B. 1001’s political prong may have a chance of avoiding strict scrutiny since disclosure requirements, even in the context of political speech, have traditionally been viewed as an acceptable way to mitigate the risk of deception in political campaigns.[152]
While opponents of S.B. 1001 may have a powerful argument that restricting political expression through the use of bots will have a chilling effect on opportunities for political advocacy, the Supreme Court, in both United States v. Harriss[153] and Citizens United v. FEC,[154] has upheld regulations requiring disclosure, even if the regulation concerns direct or indirect political advocacy.[155] Requiring bots to disclose themselves is similar to requiring a political action committee to disclose itself as the creator of a television ad.[156] Proponents of S.B. 1001 may argue that disclosure of a bot’s autonomous nature is less offensive to First Amendment principles than the disclosure of political donors; S.B. 1001 only requires the puppet to acknowledge its strings, not point at the puppet master. If such a light disclosure is the only thing required of bot users, California may be able to argue that political speech will only be incidentally chilled by requiring disclosure.
Another way of softening the harshness of First Amendment jurisprudence concerning restriction of political speech is to analogize S.B. 1001’s political prong to political action committee (PAC) contributions. In particular, a strong argument can be made that bot programs, with their “black box” qualities,[157] are more akin to financial contribution speech[158] or “speech by proxy,”[159] and less like political expenditures.[160] In past political spending First Amendment cases, the government has attempted to distinguish between limitations on direct spending (direct “speech-by-proxy”), unconstitutional under Buckley,[161] and “limitations on contributions to permanent committees,” proxy speech made to an autonomous committee.[162] The Supreme Court has not provided a clear standard for determining when contribution to an autonomous political group becomes the speech of the group and not the financial donor.[163] Using the view of the Court expressed in Buckley, that political contribution is protectable speech, a programmer’s code is like a financial contribution and the bot’s end expression is like a PAC’s support of a candidate.[164] Similar to how cash, a neutral vehicle for the expression of the spender, can filter through an agent to create speech, a programmer’s ones and zeroes are inherently void of expressive content until the bot acts as the agent.[165] The programming of a bot is more comparable to a political contributor giving funds to a multi-candidate committee, from which the output may be less predictable or controllable by the individual contributor.[166]
3. The Right Not to Speak
To convince the court to use strict scrutiny, especially with respect to the disclosure requirement of S.B. 1001, opponents will likely point to precedent holding that individuals have a right not to speak. For example, in Riley v. National Federation of the Blind,[167] a North Carolina law required professional fundraisers to disclose to potential donors, before an appeal for funds, the percentage of previously collected charitable contributions actually turned over to charity, “essentially mandat[ing] speech by private persons that they would not otherwise make, thereby altering the content of that speech.”[168] The Supreme Court decided that even though the compelled speech touched commercial speech, it still should receive strict scrutiny.[169] This case in particular gives opponents an effective argument against S.B. 1001’s disclosure requirement, but it is unclear whether courts will accept comparisons between the Bill’s disclosure requirement and the kind of required disclosure in Riley;[170] disclosing fundraising percentages to potential donors is both mixed-purpose speech (not purely commercial) and more onerous than disclosing the artificial identity of a bot. Proponents of S.B. 1001 could easily argue that the disclosure of bot-ness is not content-based, but rather identifying.
C. Would S.B. 1001 Survive Strict Scrutiny?
In our hypothetical challenge, if S.B. 1001 is deemed either a content-based speech restriction or, more specifically, a restriction on protected political speech, the court will apply a strict scrutiny analysis.[171] Under strict scrutiny, only laws narrowly tailored to achieve a compelling governmental interest will be upheld.[172] In other words, a government facing strict scrutiny must be able to show that the targeted harm is serious and that the chosen solution is the least restrictive way to regulate.[173]
1. Compelling Interest
The strength of California’s interest in bot regulation depends on the specificity with which a court chooses to frame the interest. If a court approaches the interest analysis in a broad, surface-level way,[174] California may be able to characterize its interest as generally concerning the integrity of elections and democratic processes.[175] If a court chooses to define the governmental interest more specifically as an interest in preventing bot influence on the democratic process, it will be more important for California to provide evidence of how bots can be used to manufacture consensus and spread false information within the state itself. To California’s credit, the legislative history of S.B. 1001 shows significant consideration by lawmakers to address the real harm of malicious bots.[176] But convincing the judiciary that unregulated automation technology poses a serious, concrete harm may prove difficult before a court that is already persuaded to find bot disclosure an infringement of protected speech. This Author believes that the evidence weighs in California’s favor. After all, those who favor a more hands-on approach to emerging technology law need not solve the digital competency problem in our legal institutions to prove that California has a compelling interest in preventing bot harm.
2. Narrowly Tailored
To meet the narrowly tailored prong of the strict scrutiny analysis, California’s path of least resistance is to draw strong comparisons between the speech regulated by S.B. 1001 and other permissible forms of required disclosure. As mentioned above, the Supreme Court has previously recognized, in both Buckley[177] and Citizens United,[178] that campaign finance disclosure is a relatively “less restrictive”[179] option for preventing the harm of deception and corruption in the political process. S.B. 1001’s aim (prevention of deception) and method (disclosure) have already been conceptually approved at the highest levels of the judiciary.
Changes made before the final passage of S.B. 1001 alleviated the risk of an overbreadth challenge by narrowing the law to regulate only bots being used with intent to cause the harm California seeks to prevent.[180] A law can be deemed overbroad if it regulates more speech than the Constitution allows;[181] furthermore, a law risks invalidation “if the overbreadth is ‘substantial.’”[182] California lawmakers effectively limited overbreadth concerns in two distinct ways. First, committee-level changes to the mens rea terms ensure that the law targets only those with “intent to mislead” about the artificial personhood of a bot,[183] so an innocent bot user who unintentionally causes confusion about the nature of a bot faces no liability. Second, other committee changes helped to target content that specifically falls under otherwise unlawful behavior.[184]
Still, lingering questions persist as to whether the specific mechanisms of S.B. 1001 are properly tailored to address the risks associated with unregulated bots. For example, could California act to prevent bot harm in ways that do not impinge on speech? Will transparency by bots even prevent human users from being influenced by bot accounts? The fact that these questions are not easily answered may cause problems for California in a strict scrutiny challenge. One thing is certain: outside of regulating online platforms themselves, compelling bot accounts to disclose their artificial nature is the most direct way to address the problem.
IV. Recommendation
Ultimately, this Author believes that the type of expression regulated by S.B. 1001 will receive lesser scrutiny and that the law will be declared constitutional on its face. Like the categories of defamation and obscenity, automated speech as a category does not contribute such a significant net positive to the marketplace of ideas that it should be shielded from regulation by the Constitution.[185] However, because the application of bot technology, and thus the speech interests of bot users and programmers, will vary depending on the level of bot autonomy, there is a chance that S.B. 1001 may be found unconstitutional as applied to a particular kind of bot use. This final Part discusses some strategies for improving future bot regulation.
A. Expanding S.B. 1001’s Commercial Prong
As this Comment has shown, while California may effectively argue that S.B. 1001 merely regulates the method of political communication rather than its content, the plain language of the Bill seems to target political speech. Without the political prong, S.B. 1001 would not be nearly as constitutionally vulnerable. If it disappeared, would we miss it? Maybe not. S.B. 1001 could be effective at combating the democratic harms of bots without explicitly targeting expression with political purposes. S.B. 1001’s commercial prong targets the deceptive use of automated profiles to “incentivize a purchase or sale of goods or services in a commercial transaction,” but this language does not expressly include a significant way that bot users may deceptively profit from human users: clicks. As addressed in the shocking Wired article about Macedonian fake-news farms,[186] one of the biggest challenges to combating fake news and misinformation is the profitability of using incendiary or false headlines and social posts to drive web traffic and increase ad revenue.[187]
It may be impossible to measure how much of the disinformation economy is motivated by the desire to make a quick buck and how much is created to intentionally damage democratic processes, but it may be almost as difficult to prove that a bot user has violated S.B. 1001’s mens rea requirements. By removing the political prong from the Bill, California could target the deceptive use of bots to generate ad dollars (an undeniably content-neutral aim) and consequently prevent bot users from profiting from divisive political content without the need to prove an intent to deceive for the purpose of influencing a vote. Under the current law, what would stop a potentially unlawful bot user from hiding behind an intention merely to get clicks? Expanding the commercial prong to include web traffic within the definition of a transaction could both allow lawmakers to drop the political prong and close a loophole.
B. Moving Beyond Transparency
While S.B. 1001’s success will be judged more accurately once the law has been implemented and other bot laws have been enacted, this Author is not convinced that, in the long term, transparency will be enough to avoid the potential harm of bots, especially without corresponding education about the power and scope of AI-driven technologies. Social networks are already transparent about how they make their money—they sell our online decisions, preferences, and identities to marketers—yet we continue to make choices that have serious privacy-destroying consequences. Transparency, while elegant and useful for lawmakers seeking to make constitutionally rubberstamped disclosure arguments, puts responsibility on individuals to control aspects of the digital environment that are beyond individual control and can only be changed through concerted network-level action.
While S.B. 1001 may not totally solve the problem of unregulated online bots, it is necessary for governments to at least begin acting, if only to build public confidence that the issue is not out of control. If left to the social networks and internet service providers, the regulation of bots will not happen until it is directly profitable or until bots’ ubiquity poses an existential threat—a time that may never come for some of the larger tech firms. Because S.B. 1001 only touches the tech companies if they use their own bots to intentionally mislead California citizens, the law (disappointingly) creates no accountability on the part of the networks that facilitate the use of bots. In order to more effectively prevent the types of harm at the core of S.B. 1001, a future bot law must not only go after users but also hold social networks accountable for failing to take steps to prevent the behavior themselves. This would certainly be difficult to achieve on a state-by-state basis.
S.B. 1001 may serve as a limited first strike in a nationwide political battle over the unchecked power of technology. If the mood in Washington concerning social networks sours during the 2020 campaign,[188] and if S.B. 1001 can be shown to have some impact, it would not be surprising to see something like the Bot Disclosure and Accountability Act, introduced in the Senate in 2018, become law. On a more ambitious level, the federal government could create an agency to oversee internet communities and their impact on our social wellbeing, giving it the power to promulgate rules for websites that allow the use of bots. If a federal agency were given a broad mandate by Congress to regulate issues of informational legitimacy and privacy among the major online platforms, that agency could reasonably conclude that the marketplace of ideas does not benefit from a share-based media system that favors parties with the resources to manipulate it.[189] The harms associated with eroding internet privacy and increasing corporate control of our personal data are intertwined with the issue of computers being used to manipulate public opinion; ultimately, a comprehensive federal agency would have the competence and nimbleness to handle smaller policy issues as they arise across the country.
Matthew Hines
John Mulaney: Kid Gorgeous at Radio City (Netflix 2018) (“Prove to me you’re not a robot! Look at these curvy letters. Much curvier than most letters, wouldn’t you say? No robot could ever read these.”).
See, e.g., Jeffrey Gottfried & Elisa Shearer, Americans’ Online News Use Is Closing in on TV News Use, Pew Res. Ctr. (Sept. 7, 2017), https://www.pewresearch.org/fact-tank/2017/09/07/americans-online-news-use-vs-tv-news-use/ [https://perma.cc/6CPG-C2MG] (stating that, as of August 2017, “43% of Americans report often getting news online, just 7 percentage points lower than the 50% who often get news on television”); Mirae Yang, The Collision of Social Media and Social Unrest: Why Shutting Down Social Media Is the Wrong Response, 11 Nw. J. Tech. & Intell. Prop. 707, 708 (2013) (“[S]ocial media has transformed the traditional relationship between government authority and its citizens by providing the people with an innovative and powerful means to harmonize their efforts in expressing their political and social concerns.”). The judicial concept of a “free trade in ideas” originates in Justice Holmes’s dissent in Abrams v. United States. Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting).
See Elisa Shearer & Katerina Eva Matsa, Pew Research Ctr., News Use Across Social Media Platforms 2018, at 2 (2018), https://www.journalism.org/wp-content/uploads/sites/8/2018/09/PJ_2018.09.10_social-media-news_FINAL.pdf [https://perma.cc/AW58-DJ9V] (stating that about two-thirds of Americans get some news from social media, though many are skeptical of its accuracy); Paul Vigna & Alexander Osipovich, Bots Are Manipulating Price of Bitcoin in ‘Wild West of Crypto,’ Wall St. J. (Oct. 2, 2018, 8:00 AM), https://www.wsj.com/articles/the-bots-manipulating-bitcoins-price-1538481600; Samuel C. Woolley & Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online 2, 21 (Computational Propaganda Research Project, Working Paper No. 2017.5, 2017) (comparing the botnet size of both Clinton and Trump during the 2016 election).
S. 1001, 2017–2018 Leg., Reg. Sess. (Cal. 2018) (codified in Cal. Bus. & Prof. Code § 17941(a) (West 2018)) (“It shall be unlawful for any person to use a bot to communicate or interact with another person in California online. . . . A person using a bot shall not be liable under this section if the person discloses that it is a bot.”).
The term bot, meaning “a computer program that performs automatic repetitive tasks,” is shortened from the word robot. Bot, Merriam-Webster, https://www.merriam-webster.com/dictionary/bot [https://perma.cc/4PP2-CXLJ].
See Ava Chisling, Bots vs Chatbots vs Robots vs AI, Ross Intelligence (Nov. 6, 2016), https://rossintelligence.com/bots-vs-chatbots-vs-robots-vs-ai/ [https://perma.cc/3JT8-AM5J]. Ironically, while pop culture stokes fears of computers masquerading as humans in the physical world, we increasingly choose to live our lives online where computers can more easily mimic humans.
Vimal Maheedharan, A Detailed Overview of Web Crawlers, Cabot (Nov. 11, 2016), https://www.cabotsolutions.com/2016/11/a-detailed-overview-of-web-crawlers [https://perma.cc/H67H-SMBD].
See Shannon Liao, Most Americans Say They Can’t Tell the Difference Between a Social Media Bot and a Human, Verge (Oct. 15, 2018, 4:32 PM), https://www.theverge.com/2018/10/15/17980026/social-media-bot-human-difference-ai-study [https://perma.cc/8XE4-89NG].
See Brian Higgins, Thanks to Bots, Transparency Emerges as Lawmakers’ Choice for Regulating Algorithmic Harm, Artificial Intelligence Tech. & L. (Oct. 21, 2018), http://aitechnologylaw.com/2018/10/thanks-to-bots-transparency-emerges-as-lawmakers-choice-for-regulating-algorithmic-harm/ [https://perma.cc/B7F9-6KHB].
Id.
Id.
Lauren Kunze, On Chatbots, TechCrunch (Feb. 16, 2016, 3:00 PM), https://techcrunch.com/2016/02/16/on-chatbots/ [https://perma.cc/MJ5U-EG44].
Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation 2–3 (1976).
Kyle Vanhemert, The Future of UI Design? Old-School Text Messages, Wired (June 26, 2015, 7:00 AM), https://www.wired.com/2015/06/future-ui-design-old-school-text-messages/ [https://perma.cc/J4BE-LWWV].
Cal. Bus. & Prof. Code § 17940(a) (West 2018).
See Understanding Your Online Identity: An Overview of Identity, Internet Soc’y, https://www.internetsociety.org/wp-content/uploads/2017/11/Understanding-your-Online-Identity-An-Overview-of-Identity.pdf [https://perma.cc/8M9B-PTPY] (last visited Oct. 31, 2019).
See Danah M. Boyd & Nicole B. Ellison, Social Network Sites: Definition, History, and Scholarship, 13 J. Computer-Mediated Comm. 210, 211–13 (2007) (“Profiles are unique pages where one can ‘type oneself into being.’”).
Cal. Bus. & Prof. Code § 17940(a).
Tobias Knecht, A Brief History of Bots and How They’ve Shaped the Internet Today, Abusix (Sept. 26, 2016), https://www.abusix.com/blog/a-brief-history-of-bots-and-how-theyve-shaped-the-internet-today [https://perma.cc/EL9W-JD9P] (“The first bots used on IRC were Jyrki Alakuijala’s Puppe, Greg Lindahl’s Game Manager (for the Hunt the Wumpus game) and Bill Wisner’s Bartender.”).
Jon Swaine, Twitter Admits Far More Russian Bots Posted on Election Than It Had Disclosed, Guardian (Jan. 19, 2018, 7:46 PM), https://www.theguardian.com/technology/2018/jan/19/twitter-admits-far-more-russian-bots-posted-on-election-than-it-had-disclosed [https://perma.cc/P3BL-G3CG].
Tim Wu, Please Prove You’re Not a Robot, N.Y. Times (July 15, 2017), https://www.nytimes.com/2017/07/15/opinion/sunday/please-prove-youre-not-a-robot.html [https://perma.cc/357D-3FJG].
Elisabeth Eaves, The California Lawmaker Who Wants to Call a Bot a Bot, Bull. Atomic Scientists (Aug. 23, 2018), https://thebulletin.org/2018/08/the-california-lawmaker-who-wants-to-call-a-bot-a-bot/ [https://perma.cc/54QU-FALN] (“The difference between a single individual attempting to stir controversy and what a bot can accomplish is one of scale . . . .”).
Thomas E. Beach, Computer Concepts and Terminology, U.N.M. Los Alamos, https://www.unm.edu/~tbeach/terms/inputoutput.html (last updated Aug. 29, 2016) [https://perma.cc/59KJ-M6EX].
See Higgins, supra note 9.
Some in cryptocurrency markets believe that the only way to combat bots is to use your own. See Vigna & Osipovich, supra note 3.
See Higgins, supra note 9 (“Bots that use complex human behavioral data to identify and influence or manipulate people’s attitudes or behavior (such as clicking on advertisements) often use the latest AI tech.”).
Hyun Soon Park, Case Study: Public Consensus Building on the Internet, 5 CyberPsychol. & Behav. 233, 237 (2002) (arguing that the instantaneousness of electronic messaging is useful to “galvanize interest, and motivate and retain participation among potential adherents”).
See, e.g., Sharad Goel et al., The Structural Virality of Online Diffusion, 62 Mgmt. Sci. 180, 180 (2016).
Bot Disclosure and Accountability Act of 2018, S. 3127, 115th Cong. §§ 2–3 (2018).
Wu, supra note 21.
Id.
Id. In 2017, an online group claimed to have used bots to influence the fan rating for Star Wars: The Last Jedi, though Rotten Tomatoes has pushed back against the claims. Todd Spangler, Rotten Tomatoes Dismisses Claim ‘Star Wars: The Last Jedi’ User Ratings Were Skewed by Bots, Variety (Dec. 21, 2017), https://variety.com/2017/digital/news/star-wars-last-jedi-rotten-tomatoes-user-ratings-bots-skewed-1202647473/ [https://perma.cc/XP25-BJSL].
See Wu, supra note 21 (“In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as ‘small’ donors. And actual voting is another obvious target—perhaps the ultimate target.”).
See Eaves, supra note 22.
Id.
Id.; see also Nicholas Confessore et al., The Follower Factory, N.Y. Times (Jan. 27, 2018), https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html [https://perma.cc/R2LC-6NS9] (“Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, [Devumi, a follower-selling company] has provided customers with more than 200 million Twitter followers, a New York Times investigation found.”).
See Kurt Wagner, Instagram Is Cracking Down on Services That Sell ‘Likes’ and Followers, Vox (Nov. 19, 2018), https://www.vox.com/2018/11/19/18102841/instagram-buy-likes-followers-crack-down [https://perma.cc/E438-LLRD].
See id. This could also be a problem for companies that are already public. For example, a 2017 study found that up to 15% of Twitter’s user base, a number that may affect its stock price, are bots. Michael Newberg, As Many as 48 Million Twitter Accounts Aren’t People, Says Study, CNBC (Mar. 10, 2017), https://www.cnbc.com/2017/03/10/nearly-48-million-twitter-accounts-could-be-bots-says-study.html [https://perma.cc/TZK9-6N45].
Wagner, supra note 37. Fake news stories, once propagated into an environment like Facebook, can make $15 per 1,000 impressions. See Samanth Subramanian, Inside the Macedonian Fake-News Complex, Wired (Feb. 15, 2017), https://www.wired.com/2017/02/veles-macedonia-fake-news/ [https://perma.cc/64RW-6MB7].
Consider this analogy: A vehicle (technology) driven at high speeds (amplified human power) can injure pedestrians, and young children walking home from school may not recognize the danger of a busy street (external factor); so, we create school zones and crosswalks to mitigate the danger.
According to Pew Research Center, 68% of American adults reported using Facebook in 2018. Aaron Smith & Monica Anderson, Pew Research Ctr., Social Media Use in 2018, at 2 (2018), https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2018/02/PI_2018.03.01_Social-Media_FINAL.pdf [https://perma.cc/P99J-3PEQ].
The emergence of the “cookie” changed the way advertisers track users. See, e.g., Joseph Turow, The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth 75, 80 (2011).
See David Auerbach, You Are What You Click: On Microtargeting, Nation (Feb. 13, 2013), https://www.thenation.com/article/you-are-what-you-click-microtargeting/ [https://perma.cc/984J-Y4VC].
See Mark Warner, Potential Policy Proposals for Regulation of Social Media and Technology Firms (Draft White Paper), https://graphics.axios.com/pdf/PlatformPolicyPaper.pdf [https://perma.cc/CK7K-DNDR] (“User data is increasingly the single most important economic input in information markets, allowing for more targeted and relevant advertisements, facilitating refinement of services to make them more engaging and efficient, and providing the basis for any machine-learning algorithms . . . .”).
S. 1001, 2017–2018 Leg., Reg. Sess. (Cal. 2018) (codified at Cal. Bus. & Prof. Code §§ 17940–17943 (West 2018)).
Noam Cohen, Will California’s New Bot Law Strengthen Democracy?, New Yorker (July 2, 2019), https://www.newyorker.com/tech/annals-of-technology/will-californias-new-bot-law-strengthen-democracy [https://perma.cc/U32K-A2M2].
Cal. Bus. & Prof. Code § 17941(a) (West 2018).
The Bill’s author, California State Senator Robert Hertzberg, says “[t]he jurisdiction of this [B]ill only extends to the borders of California,” though he also acknowledges the power that California has to influence tech companies within its jurisdiction as well as other states that may seek to follow California’s lead. Eaves, supra note 22.
Am. Libraries Ass’n v. Pataki, 969 F. Supp. 160, 173 (S.D.N.Y. 1997).
U.S. Const. art. I, § 8, cl. 3.
See, e.g., Pataki, 969 F. Supp. at 163, 168 (concerning a law that regulates the distribution of sexually explicit material); Ferguson v. Friendfinders, Inc., 115 Cal. Rptr. 2d 258 (Cal. Ct. App. 2002) (concerning an antispam law).
James Pethokoukis, Should Big Tech Be Held More Liable for the Content on Their Platforms? An AEIdeas Online Symposium, Am. Enter. Inst. (Mar. 20, 2018), http://www.aei.org/publication/should-big-tech-be-held-more-liable-an-aeideas-online-symposium/. Also, due to provisions within the Communications Decency Act, social networks and other online service providers have traditionally been shielded from the types of liability that arise when users participate in unlawful conduct against other users. Communications Decency Act, 47 U.S.C. § 230(c)(1) (2012) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”); see Herrick v. Grindr, LLC, 306 F. Supp. 3d 579, 588 (S.D.N.Y. 2018); FTC v. LeadClick Media, LLC, 838 F.3d 158, 173 (2d Cir. 2016) (quoting Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 19 (1st Cir. 2016)).
Julie L. Henn, Targeting Transnational Internet Content Regulation, 21 B.U. Int’l L.J. 157, 174–75 (2003) (espousing and building upon Professor Michael Geist’s principle that jurisdiction is proper when domestic users are targeted for e-commerce).
Cal. Bus. & Prof. Code § 17942(c) (West 2018) (stating “[t]his chapter does not impose a duty on service providers of online platforms, including, but not limited to, [w]eb hosting and [i]nternet service providers”).
Id.
See Higgins, supra note 9.
Id.
Id.
Paul Ohm, Forthright Code, 56 Hous. L. Rev. 471, 473 (2018) (discussing the large ecosystem of companies that deal in our personal data).
Id. at 480, 486 (“A forthrightness mandate requires companies to be honest, direct, and candid. . . . Forthrightness would impose an affirmative obligation to warn a consumer about conditions that likely matter to the consumer.”).
Id. at 480, 483 (citing Jack M. Balkin, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev. 1183, 1209 (2016)).
The First Amendment has been incorporated against the states under the Fourteenth Amendment’s Due Process Clause. U.S. Const. amends. I, XIV; see Gitlow v. New York, 268 U.S. 652, 666 (1925); Hearing on S. 1001 Before Assemb. Comm. on Arts, Entm’t, Sports, Tourism and Internet Media, 2017–2018 Leg., Reg. Sess. (Cal. 2018).
See Eaves, supra note 22.
See Cal. Bus. & Prof. Code § 17941(a) (West 2018).
Courts have considered issues related to bots, including ticket bots and gaming bots. See, e.g., MDY Indus., LLC v. Blizzard Entm’t, Inc., 629 F.3d 928, 935–41 (9th Cir. 2010); Ticketmaster L.L.C. v. Prestige Entm’t West, Inc., 315 F. Supp. 3d 1147, 1154–56 (C.D. Cal. 2018).
United States v. Stevens, 559 U.S. 460, 464, 467–73 (2010) (evaluating whether depictions of animal cruelty trigger a First Amendment analysis); see Tim Wu, Machine Speech, 161 U. Pa. L. Rev. 1495, 1500 (2013) (using the terms “covered” and “triggered” interchangeably).
Cal. Bus. & Prof. Code § 17940(a) (West 2018).
See, e.g., Brown v. Entm’t Merchs. Ass’n, 564 U.S. 786, 790 (2011) (holding that video games are speech covered by the First Amendment).
Stuart Minor Benjamin, Algorithms and Speech, 161 U. Pa. L. Rev. 1445, 1457 (2013); see Brown, 564 U.S. at 790; Stevens, 559 U.S. at 468–69 (“Maybe there are some categories of speech that have been historically unprotected, but have not yet been specifically identified or discussed as such in our [caselaw]. But if so, there is no evidence that ‘depictions of animal cruelty’ is among them.”).
Brown, 564 U.S. at 790 (“Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world).”).
See Toni M. Massaro & Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Nw. U. L. Rev. 1169, 1172 (2016) [hereinafter Siri-ously 1.0] (“At some point, one might imagine such computer speakers may be disconnected enough and smart enough to say that the speech they produce is theirs, not ours . . . .”); Toni M. Massaro et al., Siri-ously 2.0: What Artificial Intelligence Reveals About the First Amendment, 101 Minn. L. Rev. 2481, 2516 (2017) (“Courts will likely bring the constitutional hammer down differently on some AI informational products than on others, based on the type of information product, context, and the nature of the harms at stake.”).
Wu, supra note 66.
Siri-ously 1.0, supra note 71, at 1179–80 (“Legal persons thus already include not only individuals, but also corporations, unions, municipalities, and even ships, though the law makes adjustments based on their material differences from humans.”).
Wu, supra note 66, at 1500–01 (citing Miles v. City Council, 710 F.2d 1542 (11th Cir. 1983)) (explaining that, when the court considered whether Blackie, a “speaking” cat, had a free speech right, it ruled that “although Blackie arguably possesses a very unusual ability, he cannot be considered a ‘person’ and is therefore not protected by the Bill of Rights”).
A number of authors have made strong arguments that the expressions of strong AI should receive First Amendment protection. See, e.g., Siri-ously 1.0, supra note 71, at 1178.
See, e.g., First Nat’l Bank of Boston v. Bellotti, 435 U.S. 765, 776 (1978) (stating that “the proper question . . . is not whether corporations ‘have’ First Amendment rights and, if so, whether they are coextensive with those of natural persons. Instead, the question must be whether [the law] abridges expression that the First Amendment was meant to protect”).
Brown v. Entm’t Merchs. Ass’n, 564 U.S. 786, 790 (2011) (stating that freedom of speech does not change when applied to new mediums).
Sarah Mitroff, What Is a Bot? Here’s Everything You Need to Know, CNET (May 5, 2016, 3:23 PM), https://www.cnet.com/how-to/what-is-a-bot/ [https://perma.cc/P9BG-37PL?type=image].
Wu, supra note 66, at 1504 (“Like a book, canvas, or pamphlet, the program is the medium the author uses to communicate his ideas to the world.”).
Though not technically the creator of the software code, a user may have an active role in determining the keywords and parameters for the bot program.
Texas v. Johnson, 491 U.S. 397, 403–04 (1989).
Wu, supra note 66, at 1508 (“A fully inclusive theory of the First Amendment would need to treat as speech forms of communication utterly devoid of ideas or content. Honking horns and shooting firecrackers communicate something, but what exactly?”).
See Spence v. Washington, 418 U.S. 405, 409–11 (1974) (per curiam) (“[T]he nature of appellant’s activity, combined with the factual context and environment in which it was undertaken, lead to the conclusion that he engaged in a form of protected expression”); City of Erie v. Pap’s A.M., 529 U.S. 277, 289 (2000) (“[N]ude dancing of the type at issue here is expressive conduct, although we think that it falls only within the outer ambit of the First Amendment’s protection.”).
See Brown v. Entm’t Merchs. Ass’n, 564 U.S. 786, 790 (2011).
Spence, 418 U.S. at 410–11.
Cal. Bus. & Prof. Code § 17941(a) (West 2018) (“It shall be unlawful for any person to use a bot to communicate or interact with another person in California online. . . .” (emphasis added)). The separate inclusion of the word “interact” seems to expand liability under the law to bot expression that does not fall strictly under “communication.”
Spence, 418 U.S. at 405.
See Steven E. Halpern, Harmonizing the Convergence of Medium, Expression, and Functionality: A Study of the Speech Interest in Computer Software, 14 Harv. J.L. & Tech. 139, 142 (2000) (discussing the relationship between source code, object code, and hardware in regard to speech).
See Junger v. Daley, 209 F.3d 481 (6th Cir. 2000) (holding that source code is protected speech under the First Amendment).
Halpern, supra note 88, at 150 (“[C]omputer object code harbors an inherent speech interest to the degree that it acts as a medium for fixed speech.”).
Id. at 146.
Cf. United States v. O’Brien, 391 U.S. 367, 381–82 (1968) (holding that, even if burning one’s draft card can be considered symbolic speech, the government has a strong interest in regulating the nonspeech elements of such an act).
Simon & Schuster, Inc. v. Members of the N.Y. State Crime Victims Bd., 502 U.S. 105, 127 (1991) (Kennedy, J., concurring) (“[T]he use of these traditional legal categories is preferable to the sort of ad hoc balancing that the Court henceforth must perform in every case if the analysis here used becomes our standard test.”).
See, e.g., Virginia v. Black, 538 U.S. 343, 362–63 (2003) (holding that Virginia’s ban on burning crosses with intent to intimidate did not violate the First Amendment).
Id. at 359.
United States v. Alvarez, 567 U.S. 709, 722–23 (2012) (“Some false speech may be prohibited even if analogous true speech could not be. . . . [That does not mean that] false speech should be in a general category that is presumptively unprotected.”).
See supra Section II.B.2.
Cal. Bus. & Prof. Code § 17941(a) (West 2018).
See infra Section III.C.2.
Ex parte Thompson, 442 S.W.3d 325, 337–38 (Tex. Crim. App. 2014) (citing Texas v. Johnson, 491 U.S. 397, 411 (1989)).
Id. at 338 (“When the intent is to do something that, if accomplished, would be unlawful and outside First Amendment protection . . . such an intent might help to eliminate First Amendment concerns.”).
See State v. Stubbs, 502 S.W.3d 218, 229 (Tex. App.—Houston [14th Dist.] 2016, pet. ref’d) (concluding that, because a Texas online impersonation law could not be described as “only [proscribing] harmful conduct that is unprotected,” a content-neutrality analysis is necessary).
See Jay Van Blaricum, Opinion: Impersonation Bots and Kansas Law, J. Kan. B. Ass’n, May 2018, at 20, 25.
Meg Leta Jones, Silencing Bad Bots: Global, Legal and Political Questions for Mean Machine Communication, 23 Comm. L. & Pol’y 159, 174 (2018).
See Junger v. Daley, 209 F.3d 481, 484 (6th Cir. 2000) (“The Supreme Court has recognized First Amendment protection for symbolic conduct . . . that has both functional and expressive features.” (citing United States v. O’Brien, 391 U.S. 367, 376 (1968))).
See Wu, supra note 66, at 1515 (citing United States v. Rainey, 362 F.3d 733, 734 (11th Cir. 2004)) (“[A] defense that states the arsonist is protected by the First Amendment because he was expressing his hatred for his rival would usually be thrown out.”).
Cf. Halpern, supra note 88, at 150 n.52 (“[T]he direct regulation of speech is treated differently than conduct regulations that have an indirect effect on speech.”).
Wu, supra note 66, at 1520.
Id. at 1520–21.
Id. at 1521.
Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 636 (1994).
The Message in the Medium: The First Amendment on the Information Superhighway, 107 Harv. L. Rev. 1062, 1092 (1994) (“[C]ommon carrier status insulates telephone companies from the expectation that they endorse all speech in phone conversations.”).
Wu, supra note 66, at 1522–23.
See, e.g., Brocklesby v. United States, 767 F.2d 1288, 1295 n.9 (9th Cir. 1985). The Supreme Court denied certiorari to this case, declining to extend the rule nationally. Jeppesen & Co. v. Brocklesby, 474 U.S. 1101 (1986).
Rosenberg v. Harwood, No. 100916536, 2011 WL 3153314, at *2 (Utah Dist. Ct. May 27, 2011).
See Higgins, supra note 9 (“Bots are software programmed to receive percepts from their environment, make decisions based on those percepts, and then take (preferably rational) action in their environment.”).
As a hypothetical, could the government regulate the design of an incredibly powerful megaphone? What about an advertising tool that inserted itself into homes?
See Halpern, supra note 88, at 149.
See United States v. O’Brien, 391 U.S. 367, 387 (1968).
See Halpern, supra note 88, at 147 (“There is a speech interest in object code to the degree that the object code acts as a medium for fixed speech.”).
See Junger v. Daley, 209 F.3d 481, 484 (6th Cir. 2000) (“[C]omputer source code, though unintelligible to many, is the preferred method of communication among computer programmers.”).
See Jorge R. Roig, Decoding First Amendment Coverage of Computer Source Code in the Age of YouTube, Facebook, and the Arab Spring, 68 N.Y.U. Ann. Surv. Am. L. 319, 328 (2012) (“If the First Amendment ‘covers’ certain conduct that the government seeks to regulate, ‘the constitutionality of the conduct’s regulation must be determined by reference to First Amendment doctrine and analysis.’”).
United States v. Carolene Prods. Co., 304 U.S. 144, 152 n.4 (1938).
A law that regulates a fundamental right is unconstitutional unless it is narrowly tailored to achieve a compelling governmental interest. See Matthew D. Bunker et al., Strict in Theory, but Feeble in Fact? First Amendment Strict Scrutiny and the Protection of Speech, 16 Comm. L. & Pol’y 349, 353 (2011).
Richard H. Fallon, Jr., Strict Judicial Scrutiny, 54 UCLA L. Rev. 1267, 1285 (2007).
But see Williams-Yulee v. Fla. Bar, 135 S. Ct. 1656, 1665–66, 1673 (2015) (holding that a Florida judicial election ethics rule was narrowly tailored to achieve a compelling governmental interest).
See, e.g., id. at 1668, 1670–71.
Nat’l Inst. of Family & Life Advocates v. Becerra, 138 S. Ct. 2361, 2371 (2018).
Davenport v. Wash. Educ. Ass’n, 551 U.S. 177, 188 (2007); see also R.A.V. v. City of St. Paul, 505 U.S. 377, 382 (1992).
See, e.g., Reed v. Town of Gilbert, 135 S. Ct. 2218, 2227, 2232 (2015) (holding that a town’s sign ordinance violated free speech rights).
See, e.g., Ward v. Rock Against Racism, 491 U.S. 781, 798, 803 (1989) (holding that a municipal sound ordinance that required users of a band shell to use town-provided sound equipment and engineer was content neutral and not unconstitutional).
See, e.g., Reno v. ACLU, 521 U.S. 844, 879, 885 (1997) (holding that portions of the Communications Decency Act are content-based restrictions and violative of the First Amendment).
Reed, 135 S. Ct. at 2227 (acknowledging that regulating speech by its “function or purpose,” though subtle, is still a content-based restriction).
Rock Against Racism, 491 U.S. at 791; see State v. Stubbs, 502 S.W.3d 218, 224–25, 231 (Tex. App.—Houston [14th Dist.] 2016, pet. ref’d) (questioning whether regulated speech “can be justified without reference to its content”).
Cf. Gun Owners’ Action League, Inc. v. Swift, 284 F.3d 198, 211 (1st Cir. 2002) (holding that the regulation of shooting targets containing human forms was content-neutral).
See id.
Carey v. Brown, 447 U.S. 455, 457 (1980).
See Doe v. Gonzales, 386 F. Supp. 2d 66, 75 (D. Conn. 2005).
Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 652, 661–62 (1994).
Id. at 643–44.
R.A.V. v. City of St. Paul, 505 U.S. 377, 386 (1992).
Wu, supra note 66, at 1515.
Siri-ously 2.0, supra note 71, at 2491.
Elena Kagan, Private Speech, Public Purpose: The Role of Governmental Motive in First Amendment Doctrine, 63 U. Chi. L. Rev. 413, 428 (1996).
See City of Renton v. Playtime Theatres, Inc., 475 U.S. 41, 47 (1986).
Hearing on S. 1001 Before the Assemb. Comm. on Privacy and Consumer Prot., 2017–2018 Leg., Reg. Sess. 6 (Cal. 2018) (comments regarding a possible avenue to more narrowly tailor the Bill).
See Va. State Bd. of Pharmacy v. Va. Citizens Consumer Council, Inc., 425 U.S. 748, 771–72 (1976) (“Obviously, much commercial speech is not provably false, or even wholly false, but only deceptive or misleading. We foresee no obstacle to a State’s dealing effectively with this problem.”). In a particularly relevant footnote, the Court notes that the “hardiness” of commercial speech may “make it appropriate to require that a commercial message appear in such a form, or include such additional information, warnings, and disclaimers, as are necessary to prevent its being deceptive.” Id. at 771 n.24.
United States v. Alvarez, 567 U.S. 709, 722 (2012) (citing Brown v. Entm’t Merchs. Ass’n, 564 U.S. 786, 792 (2011)).
“Laws that burden political speech are ‘subject to strict scrutiny.’” Citizens United v. FEC, 558 U.S. 310, 340 (2010) (quoting FEC v. Wis. Right to Life, Inc., 551 U.S. 449, 464 (2007)).
Cf. Alvarez, 567 U.S. at 722 (noting that the regulation of a “false statement made at any time, in any place, to any person” is too broad).
Cf. State v. Stubbs, 502 S.W.3d 218, 229 (Tex. App.—Houston [14th Dist.] 2016, pet. ref’d) (concluding that, because a Texas online impersonation law could not be described as “only proscrib[ing] harmful conduct that is unprotected,” a content-neutrality analysis is necessary).
The requirement that political television ads include a disclosure has been applied to online media. See James E. Davis & Gardner Pate, Political Campaigns and the Internet: Best Practices for Compliance with Analog Regulations in a Digital World, Advoc., Fall 2010, at 63, 65 (reminding readers that political advertisement disclosure requirements also apply to online videos).
United States v. Harriss, 347 U.S. 612, 625 (1954) (holding that Congress is not “constitutionally forbidden to require the disclosure of lobbying activities”).
Citizens United, 558 U.S. at 369 (rejecting the “contention that the disclosure requirements must be limited to speech that is the functional equivalent of express advocacy”).
Id.
California already requires political television ads to include disclosures about who pays for the ad. Campaign Advertising - Requirements & Restrictions, Cal. Fair Pol. Pracs. Comm’n, http://www.fppc.ca.gov/learn/campaign-rules/campaign-advertising-requirements-restrictions.html [https://perma.cc/6CK9-B5UA] (last visited Nov. 14, 2019).
See Higgins, supra note 9 (“While intuition can be used to infer what happens, secrets inside a black box often remain secret.”).
See Nixon v. Shrink Mo. Gov’t PAC, 528 U.S. 377, 415 (2000) (“The decision of individuals to speak through contributions rather than through independent expenditures is entirely reasonable.”).
See Cal. Med. Ass’n v. FEC, 453 U.S. 182, 196–97 (1981) (distinguishing contributions to semi-autonomous political committees from the direct candidate contributions at issue in Buckley, stating that “[i]f the First Amendment rights of a contributor are not infringed by limitations on the amount he may contribute to a campaign organization which advocates . . . [for] a particular candidate, the rights of a contributor are similarly not impaired by limits on the amount he may give to a multicandidate political committee . . . which advocates the views and candidacies of a number of candidates”).
FEC v. Nat’l Conservative PAC, 470 U.S. 480, 501 (1985) (holding that the Presidential Election Campaign Fund Act unconstitutionally regulated expenditure limits by political action committees).
Buckley v. Valeo, 424 U.S. 1, 143–44 (1976).
John C. Eastman, Strictly Scrutinizing Campaign Finance Restrictions (and the Courts That Judge Them), 50 Cath. U. L. Rev. 13, 41 (2000) (citing Ky. Right to Life, Inc. v. Terry, 108 F.3d 637, 649 (6th Cir. 1997)).
Cf. Nixon, 528 U.S. at 398 (Stevens, J., concurring) (noting that, while “[t]he right to use one’s own money to hire gladiators, or to fund ‘speech by proxy,’ certainly merits significant constitutional protection . . . . property rights, however, are not entitled to the same protection as the right to say what one pleases”). Compare Cal. Med. Ass’n, 453 U.S. at 196 (plurality opinion) (classifying some proxy speech as unprotected speech), with Nat’l Conservative PAC, 470 U.S. at 494 (arguing that speech is not proxy speech just because contributors do not control the message of a PAC).
Monica Youn, First Amendment Fault Lines and the Citizens United Decision, 5 Harv. L. & Pol’y Rev. 135, 142–43 (2011).
See Higgins, supra note 9.
See Cal. Med. Ass’n, 453 U.S. at 196.
Riley v. Nat’l Fed’n of the Blind of N.C., Inc., 487 U.S. 781, 796 (1988).
R. George Wright, Free Speech and the Mandated Disclosure of Information, 25 U. Rich. L. Rev. 475, 477–78 (1991).
Riley, 487 U.S. at 796.
Id. at 795–96.
See United States v. Alvarez, 567 U.S. 709, 722, 724 (2012).
Adarand Constructors, Inc. v. Pena, 515 U.S. 200, 227 (1995); Wygant v. Jackson Bd. of Educ., 476 U.S. 267, 280 (1986) (“Under strict scrutiny the means chosen to accomplish the State’s asserted purpose must be specifically and narrowly framed to accomplish that purpose.”).
For example, in United States v. Alvarez, the Court held that a federal law criminalizing a false claim of receiving military honors, a content-based restriction of speech, did not meet strict scrutiny because the government could neither prove real harm from these false claims nor show that the law was more effective than less restrictive options. Alvarez, 567 U.S. at 722, 724–26, 729.
Cf. Stephen E. Gottlieb, Compelling Governmental Interests: An Essential but Unanalyzed Term in Constitutional Adjudication, 68 B.U. L. Rev. 917, 932, 935–37 (1988) (“[W]ith few exceptions, the Court has failed to explain the basis for finding and deferring to compelling governmental interests.”).
See Hearing on S. 1001 Before the Assemb. Comm. on Privacy and Consumer Prot., 2017–2018 Leg., Reg. Sess. 4 (Cal. 2018).
See id.
Buckley v. Valeo, 424 U.S. 1, 67 (1976) (“[D]isclosure requirements deter actual corruption . . . .”).
Citizens United v. FEC, 558 U.S. 310, 322 (2010).
Id. at 369.
Id. at 369–70.
See Schad v. Borough of Mt. Ephraim, 452 U.S. 61, 66–70, 76 (1981) (holding that a prohibition on live entertainment unconstitutionally outlawed more than just the targeted nude dancing establishments).
Bd. of Airport Comm’rs v. Jews for Jesus, Inc., 482 U.S. 569, 574 (1987).
Hearing on S. 1001 Before the Assemb. Comm. on Privacy and Consumer Prot., 2017–2018 Leg., Reg. Sess. 1 (Cal. 2018).
Id. at 1–2.
See Davenport v. Wash. Educ. Ass’n, 551 U.S. 177, 188 (2007).
See Subramanian, supra note 39.
Janna Anderson & Lee Rainie, Pew Research Ctr., The Future of Truth and Misinformation Online 11 (2017), https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2017/10/PI_2017.10.19_Future-of-Truth-and-Misinformation_FINAL.pdf [https://perma.cc/WAK7-JMXU] (“[On the subject of combating misinformation,] Alex ‘Sandy’ Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum, commented, ‘We know how to dramatically improve the situation, based on studies of political and similar predictions. What we don’t know is how to make it a thriving business. The current [information] models are driven by clickbait, and that is not the foundation of a sustainable economic model.’”).
Elizabeth Warren has made the break-up of large tech companies like Facebook, Amazon, and Google an issue in her 2020 presidential campaign. Jason Abbruzzese, Elizabeth Warren Calls to Break Up Facebook, Google and Amazon, NBC News (Mar. 8, 2019, 8:35 AM), https://www.nbcnews.com/tech/tech-news/elizabeth-warren-calls-break-facebook-google-amazon-n980911 [https://perma.cc/FU9G-5VUU].
Columbia Broad. Sys., Inc. v. Democratic Nat’l Comm., 412 U.S. 94, 123 (1973) (“The Commission was justified in concluding that the public interest in providing access to the marketplace of ‘ideas and experiences’ would scarcely be served by a system so heavily weighted in favor of the financially affluent, or those with access to wealth.”).