I. Introduction

Artificial Intelligence (AI) creeps into most aspects of our daily lives: from our phones to our homes, our cars, our workplaces, our hospitals, even our personal lives and our bodies. AI is hidden in plain sight.[1] Consider any of the actual scenarios in which an AI entity such as a self-driving car, a drone, a medical tricorder, a stock trading algorithm, or a hiring algorithm causes injury or even death to a human.[2] Who can be sued or prosecuted, or in other words, who can be held legally responsible? Today, we have no clear answer because we lack a comprehensive understanding of what AI entities are in the eyes of the law.[3] This Article is the first to supply such an understanding empirically, allowing us to situate AI entities within our legal system and to answer liability questions. I argue that, in order for legislators and courts to resolve how the law can address the challenge of legal responsibility for AI entities, they must first answer a more fundamental question with both ontological and epistemological angles: the question of legal personhood for AI entities.

Legal personhood attributes legal consequences to an entity’s actions.[4] Because AI entities are designed to operate at an increasing distance from their developers and owners, they seriously challenge traditional legal frameworks for attribution and liability, creating potential accountability gaps.[5] Courts have so far approached injuries caused by artificial entities by inquiring into the persons who could have reasonably foreseen the consequences of an act and were in a position to prevent it, or into the persons with nefarious intent.[6] So, when an AI entity causes injury to a human, the legal system’s first response may be to assign liability to software programmers, hardware manufacturers, and owners on the basis of some form of direct or indirect liability. But AI entities present a unique challenge to this process due to their increasing distance from these persons and their inherent characteristics of autonomy, ubiquity, and inexplicability. This means that AI entities can and will act in ways that are neither intended by nor foreseeable to designers or users. Their actions can also be the result of input from multiple independent developers, rendering particularly difficult both the identification of an entity on which to impose liability and the establishment of any causal relationship necessary for such liability to attach.[7]

This challenge is why a leading strand of scholarship in law, ethics, and computer science has considered as a solution the possibility of attaching legal responsibility to the direct source of the harm, the AI entity itself, by first granting it legal personhood.[8] At the same time, other scholars have raised compelling concerns about the potential for abuse that ascribing legal personhood to AI entities may bring: from AI legal personhood serving as a shield against corporate accountability,[9] to issues of standing and representation in court,[10] and questions of who may be subjected to legal punishment.[11] The common thread underlying these opposing views is legal personhood as a requirement for legal capacity and legal responsibility.[12] Courts today hardly ever discuss questions of legal personhood because its existence is most often presumed. But AI entities present a new challenge to this presumption, and liability for their actions will depend on whether they satisfy conditions of legal personhood.[13]

Can AI entities have legal personhood and the attendant rights and duties it establishes? This is the fundamental research question of this Article. U.S. law does not provide a cohesive answer as to which entities enjoy legal personhood and under what conditions. Indeed, different jurisdictions in the United States and internationally recognize and give effect to legal personhood for different entities on disparate legal grounds with more or less clear justifications. But “[t]o determine whether an [AI] entity is a legal person, one must look to the approach a given system takes toward it.”[14] This Article undertakes an empirical study to comprehensively identify and quantify the conditions upon which the U.S. legal system confers legal personhood on artificial entities, and it applies those conditions to AI entities to assess whether they can be legal persons.

In Part II, this Article reviews the literature concerning AI entities and identifies the inherent characteristics that render them challenging for legal liability frameworks. It then critically engages with the concept of legal personhood and the conditions scholars have posited as relevant in conferring legal personhood on AI entities, such as autonomy, intelligence, and awareness. Part III presents findings from my qualitative content analysis of U.S. courts’ caselaw from 1809 until the present with regard to the conditions courts have relied upon to confer legal personhood on artificial entities. The Article then demonstrates through statistical analysis the frequency with which courts have cited to these conditions. On the basis of these empirical findings, I make two claims about the problem of legal personhood for AI entities, one descriptive and one prescriptive. First, I argue that the courts’ overall approach to legal personhood has been disparate and does not support legal personhood for AI entities. Second, I argue that an empirical understanding of the legal landscape of legal personhood counsels courts against conferring legal personhood on AI entities and should give legislators pause before doing so.

II. Legal Personhood & Artificial Intelligence

A. Artificial Intelligence, Machine Learning, and the Law

AI lacks a uniform or universal definition. The very first definition was given to AI in the 1950s[15] and suggested that “the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.”[16] Since that time, the definitions used by practitioners and policy makers have varied in setting greater or fewer conditions that an entity must meet to be defined as artificially intelligent. Looser definitions include any computerized system that exhibits behavior simulating some level of human thinking we commonly understand as intelligent.[17] More stringent definitions associate AI with more complex manifestations of intelligence, such as solving specific problems or achieving direct goals in certain environments.[18] Nils Nilsson has provided a useful and quite inclusive definition, suggesting that “artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”[19] Stuart Russell and Peter Norvig distinguish AI systems along two dimensions, thought processes and behavior: AI that thinks like humans and AI that thinks rationally, on the one hand, and AI that acts like humans and AI that acts rationally, on the other.[20] Applications of the former category include neural networks, cognitive architectures, and logic solvers; applications of the latter include natural language processing, knowledge representation, and intelligent software agents that achieve goals through learning and decision-making.[21]

Across definitions, AI splits largely into two levels of intelligence: narrow AI, the level that AI has already reached, and general AI, the level that AI aspires to reach. Narrow AI comprises AI systems that can resolve a set of discrete problems through algorithms capable of optimizing solutions in specific application areas. These include AI that can successfully play games, drive a car, translate sentences, or recognize images. We encounter narrow AI daily in our interactions with commercial services, including online shopping, targeted advertising, spam filters, search engines, medical diagnosis tools, and the recommendation systems employed by online streaming platforms.[22] And while these “AI systems will eventually reach and then exceed human-level performance” within the confines of the task they are designed to perform, narrow AI may not generalize a solution to produce AI behavior of general application across different tasks.[23] That is the aspiration of general AI, which refers to an AI system capable of performing a full range of cognitive tasks at least at the level of a human.[24] General AI is not currently available, and there is no clear expectation as to whether and when AI systems may reach this level of capability. This Article is concerned with AI systems of all levels of intelligence but focuses on narrow AI, as this is the type of AI system whose actions courts are already adjudicating and will continue to adjudicate in the near future.

AI is distinct from conventional automated software and poses new challenges to the law due to its ability to self-learn: it accumulates its own experience and generates solutions to problems based on an independent analysis of various scenarios without the input of a developer. This is called self-training.[25] For AI, there are no rules of engagement and problem resolution preprogrammed by a human. Rather, AI adheres to instructions on how to learn from the data it encounters as it operates.[26] This process has begun to imitate the human experience and, in AI terms, is called machine learning.[27] Machine learning is based on statistical tools and processes that begin from a body of data and a set of algorithms that then devise a rule or procedure to make sense of this data or predict future data.[28] Machine learning algorithms largely use statistical inference tools to identify risk, predict error and minimize it, assign weight to variables, and ultimately optimize outputs.[29] For instance, one may give a machine learning algorithm data such as a person’s age, favorite music genres, and favorite artists, and task it with predicting a playlist of that person’s favorite, or soon-to-be favorite, songs. To do so, the algorithm will look through data on thousands of people with various similar and dissimilar characteristics, and their preferences, to devise a model.
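A minimal sketch of the playlist example above, using a nearest-neighbors model from scikit-learn. The data, feature encoding, and library choice are illustrative assumptions, not anything the Article prescribes:

```python
# Hypothetical playlist prediction: recommend what similar listeners like.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row encodes one known listener: [age, likes_rock, likes_jazz, likes_pop].
listeners = np.array([
    [19, 1, 0, 1],
    [23, 1, 0, 1],
    [47, 0, 1, 0],
    [52, 0, 1, 0],
    [31, 1, 1, 0],
])
# The playlist each known listener already favors (invented labels).
playlists = ["rock/pop mix", "rock/pop mix", "jazz standards",
             "jazz standards", "fusion"]

model = NearestNeighbors(n_neighbors=2).fit(listeners)

# For a new 21-year-old rock/pop fan, the model looks through the other
# listeners and borrows the preferences of the most similar ones, rather
# than following any preprogrammed rule.
_, idx = model.kneighbors([[21, 1, 0, 1]])
print([playlists[i] for i in idx[0]])   # e.g., ['rock/pop mix', 'rock/pop mix']
```

The point of the sketch is only that the “rule” emerges from the data the algorithm encounters, not from a human enumerating cases in advance.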

There are different types of machine learning, the particularities of which fall beyond the scope of this Article. The biggest distinction is between supervised and unsupervised machine learning. Much of machine learning is supervised: algorithms are first given examples of inputs paired with the desired outputs for a problem and are then left alone to find the best way to get from one to the other.[30] There is, however, also unsupervised machine learning, in which algorithms infer patterns from datasets without any direction or reference to goals or outcomes, uncovering hidden structure in the data.[31]
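A minimal sketch of that distinction on toy data (the models and numbers are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [0.9, 1.0], [4.0, 4.2], [4.1, 3.9]])

# Supervised: the algorithm is shown inputs paired with correct outputs
# (labels) and learns a rule mapping one to the other.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.0, 0.9]]))         # -> [0]

# Unsupervised: the same inputs, no labels; the algorithm must uncover
# structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                         # e.g., [1 1 0 0]
```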

The machine learning process generally looks like this: developers give a learning algorithm a data set on which to train.[32] The developer chooses a model, usually represented by a mathematical structure, that allows for “a range of possible decision-making rules with adjustable parameters.”[33] Developers will often define outcomes by their desirability based on the choice of a certain parameter and attach a reward for when the algorithm correctly identifies the parameter that yields the optimal solution for the model. This teaches the algorithm to adjust the parameters that maximize its objective function.[34] This type of reinforcement learning turns the process into “experience-driven sequential decision-making.”[35] Once the algorithm has been trained, the goal is for it to generalize this model beyond the training data set to new cases where the model has application but that the algorithm has never seen before. This is much akin to how learning for a human being starts and develops: first with no real knowledge and then by accumulating experience on the basis of positive or negative feedback and adjusting preferences, choices, and values—what for the algorithm are parameters—accordingly.[36]
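A minimal sketch of that reward-driven loop, written as a simple bandit-style learner. The payoff probabilities, exploration rate, and step count are invented for illustration:

```python
# The learner starts with no knowledge, tries actions, and shifts its
# adjustable parameters (the estimates) toward whatever earns reward.
import random

random.seed(0)
true_payoff = [0.2, 0.5, 0.8]      # hidden quality of each action
estimates = [0.0, 0.0, 0.0]        # the learner's adjustable parameters
counts = [0, 0, 0]

for step in range(1000):
    if random.random() < 0.1:                     # explore occasionally
        action = random.randrange(3)
    else:                                         # otherwise exploit
        action = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    counts[action] += 1
    # Incrementally adjust the estimate toward observed experience:
    # "experience-driven sequential decision-making" in miniature.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # the best action (index 2) comes to dominate play
```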

Machine learning has advanced significantly thanks to the technological successes of artificial neural networks. These networks, in turn, are possible largely thanks to increases in large-scale computing capacity and the availability of big data.[37] Neural networks mimic the human brain and are composed of many “neurons,” numerical states connected through “links” that serve as communication channels among neurons.[38] Layers upon layers of these neurons form a web that creates “deep” neural networks.[39] This approach to machine learning has become known as “deep learning.”[40] What is particularly interesting about these networks is not only that they represent a complex system for processing and transmitting information but also that they are adaptive systems with the capacity to change their internal structure on the basis of new information, establishing their ability to self-learn through experience.[41]
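A minimal sketch of the “neurons and links” structure described above: a tiny two-layer network in NumPy that adapts its internal weights (the links) from feedback, here learning the XOR function. Layer sizes, learning rate, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden links
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output links
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)        # hidden layer of "neurons"
    out = sigmoid(h @ W2 + b2)      # output neuron
    # Propagate the error backwards and adjust every link: the network
    # changes its internal structure in response to new information.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # approaches [[0], [1], [1], [0]]
```

Stacking many such layers is what produces the “deep” networks the text describes; nothing in the learned weights was written by the developer directly.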

Imagine an artificial neural network algorithm intended to identify credit fraud. The algorithm pulls in data from financial institutions’ records, the web, and social media; builds a network of information containing users, retailers, senders, credit scores, and IP addresses for many thousands of clients; and draws connections among these data, a process that would take a human years to complete. On this information, the algorithm can build an added layer for identifying when a pattern in the data warrants investigation for fraud or even the freezing of a user’s account. In a different example, neural networks can identify, among billions of photographs, the face of a specific person sought by law enforcement after learning, through training and internal adjustments, to correlate a specific input, the characteristics of a person, with an output, the image of that person’s face.[42] Deep learning is used extensively in commercial and noncommercial AI technology and has particularly facilitated, among other applications, image recognition and labeling as well as “audio, speech, and natural language processing.”[43]

This ability of AI to self-learn from its own experience and undertake independent decision-making has led a large contingent of scholars to criticize the state of the law, which treats such AI entities as objects. Instead, they raise the possibility of, and at times call for, such AI systems becoming subjects of the law through the conferral of legal personhood.[44] While the ultimate goal of this Article is to answer the question of whether such AI systems can, in fact, be granted legal personhood, it is important to view these scholars’ argument within the context of the particular challenges AI presents for the law that attach to the question of legal personhood. Deep machine learning algorithms are challenging for the law for three related reasons: they are unpredictable, they are opaque, and they are increasingly autonomous. These embedded characteristics of AI clash with legal reasoning and, more particularly, with notions of causation, fault, intent, and, eventually, liability.[45]

First, the very intelligence of these AI systems depends on their ability to self-adjust and create new behaviors on the basis of their own experience without relying on explicit programming.[46] The consequence is that their behavior is, to a high degree, unpredictable even to their original developers. This means that the AI may engage in activities that were unforeseen even by those who created it.[47] For instance, the AI system may reach a decision that is counterintuitive to humans by finding an obscure pattern in its data and thus engage in conduct in which a human would not have engaged, such as discriminating against a certain population, speeding in a car, or selecting a market-manipulative investment trading strategy.[48] If a legally significant error occurs, it cannot necessarily be traced back to the intent or fault of the developer if there has been no malice or fault in the original programming. Second, AI has the potential to act unexplainably; that is, the algorithms’ paths to a decision are often either undiscoverable or hidden behind trade secrets, effectively instituting a “black box.”[49] This means that a developer or owner cannot review the process the algorithm followed to reach its decision, and courts cannot easily look to the intent or fault of the developer or owner to assign responsibility to the human behind the algorithm, as one would with an automated program that operates deterministically.[50] Even in instances where one may identify the processes that the AI followed, the algorithm is not able to articulate in terms understandable to us why it reached a certain outcome.[51] For instance, a trading algorithm can tell us whether it has been able to maximize profit but not whether it succeeded in doing so by manipulating the market.[52]

Finally, these qualities become more problematic when these algorithms, due to their increased autonomy, are not controllable even by their own developers.[53] Consider instances in which an AI causes a transgression without direct human control, thus acting autonomously: an algorithm, for instance, built to commit identity fraud that is traceable to a single developer but continues to pursue criminal activity even after the individual responsible for its original creation is removed.[54] Such autonomy is given to many AI systems by design.[55] And while conventional automata may exhibit some levels of these qualities, the scale and degree to which these qualities appear in AI systems make AI a distinctive phenomenon for the law that requires, if not different, certainly particularized legal responses.[56]

This brings us back to the main question that currently preoccupies the scholarship on the intersection of law and AI: when an AI system causes legal transgressions, who is legally responsible?[57] An intuitive answer would be to trace legal responsibility to owners and developers through some existing version of tort or criminal liability, as with automated entities that rely on deterministic programming. But understanding how AI systems function illuminates the deeper challenge: If AI entities have the capacity to learn as they operate and are autonomous, inexplicable, and unpredictable, can they also be individually legally responsible?[58] Since increased autonomy renders causation links between the AI and its developers harder to establish for outcomes the developers could neither control nor predict, do AI entities—apart from their developers—have the capacity to be held legally responsible for violating the law?[59] This has become known as the responsibility gap of AI.[60] I argue that this responsibility gap depends upon the question of legal personhood, which I address in the next Parts of this Article.

B. The Concept of Legal Personhood

Western legal traditions have developed the concept of legal personhood to more easily taxonomize the entities that can act in law.[61] Being human is not a necessary condition of having legal personhood.[62] Entities that enjoy legal personhood have long included not only humans but also artificial entities such as corporations, trusts, and associations, which the law treats as though each were one single entity, one single person.[63] In U.S. scholarship, John Chipman Gray put forth what has become the classical discussion of legal personhood by arguing that, within the law, the concept of a “person” deviates from the folk understanding of a human and instead describes a “subject of legal rights and duties.”[64] These legal rights can be both substantive and procedural and generally span from constitutional rights and liberties to more reducible rights and duties such as the ability to sue and be sued or the right to own property.[65] An examination into whether a particular entity can be considered a legal person often carries with it the normative question of whether this entity should be subject to these legal rights and duties, as well as the pragmatic question of which of these rights and duties ought to be conferred on this entity to advance the purposes of the legal system.[66]

Although the concept of legal personhood is almost as old as the legal system itself, its meaning is far from settled. What determines whether an entity has legal personhood? While most human beings will almost intuitively pass the test of being legal persons, courts still disagree on what factors yield legal personhood and how legal personhood is acquired.[67] And while the concept remains controversial even as regards its subject par excellence, the biological person, as we move further away from the narrow sphere of the adult human and into the periphery of the concept, we encounter less and less coherence at both the jurisprudential and philosophical levels.[68]

What, then, are the status and bundle of rights enjoyed by nonbiological entities, also known as artificial entities? The term “artificial person” was first defined in federal statute in the Federal Dictionary Act of 1871, which gave rules of construction stating that “the word ‘person’ may extend and be applied to bodies politic and corporate . . . unless the context shows that such words were intended to be used in a more limited sense.”[69] The Federal Dictionary Act later became Title 1 of the United States Code, now entitled the Dictionary Act, and courts’ treatment of this Act has ranged from using it as a “tool of last resort to a presumptive guide.”[70] The Act’s legislative history further suggests that its purpose was “to avoid prolixity and tautology in drawing statutes and to prevent doubt and embarrassment in their construction.”[71] The sparing use of the Dictionary Act is one reason why most scholarship and judicial decisions on the issue of legal personhood have resorted either to intuition or to theory building in answering questions of legal personhood, without providing a legally coherent and replicable framework of what the concept entails.[72]

An added challenge to this convoluted landscape comes from the new disruptive technology surrounding AI.[73] The law deals with new societal developments either through novelty or by analogy.[74] The pertinent question here is whether AI entities can be legal persons for the purposes of their confrontations with the legal system. To shed some light on the hazy concept of legal personhood, this Article will combine legal theory accounts of legal personhood with empirical data from U.S. caselaw in an effort to identify a set of defensible conditions that define legal persons. Then this Article will consider these conditions and characteristics with respect to AI entities to assess the feasibility and desirability of granting legal personhood to these entities.

To deal with the puzzle of conferring legal personhood on different entities, courts and scholars have long debated the underlying theory. Legislators and courts rarely provide reasons for conferring legal personhood on a particular entity and will even do so on an ad hoc basis.[75] What becomes clear, however, is that legal personhood is a divisible aggregate of rights and duties.[76] Because it is reducible to bundles of rights and duties, the exact number and kind of rights and duties an entity with legal personhood may enjoy can vary. Even among established legal persons such as human beings, legal systems have created categories of humans with more or fewer rights and different sets of obligations.[77] Compare, for instance, the rights enjoyed by an adult human with those enjoyed by a child. By analogy, artificial entities also fall on this spectrum and have often been conferred legal personhood with more or less restricted bundles of rights and obligations.[78]

The debate regarding legal personhood of artificial entities among legal scholars and ethicists has largely philosophical roots. Typically, it involves sketching out a set of qualities or conditions that an entity must possess to be recognized as a legal person and a rationale for how these new legal entities compare to those on which the law has thus far conferred legal personhood.[79] Naturally, this question also tends to concern the quantity and quality of rights and duties that these legal entities will enjoy compared to the generally available universe of rights and duties.[80]

The primary example of an artificial entity that enjoys legal personhood with an increasing set of rights and duties is that of corporations.[81] With corporations gaining legal personhood, the debate shifted from its original concern with the human-based nature of legal personhood to questions of which artificial entities satisfy necessary conditions to enjoy legal personhood.[82] The attributes that satisfy these conditions for artificial entities are also a significant part of the debate surrounding AI entities gaining legal personhood.[83]

This Article will begin to tackle the question of legal personhood for AI by reviewing theory regarding the conceptual grounds and processes that yield legal personhood for artificial entities. AI brings a major transformation to the category of artificial entities: AI entities are capable of making decisions independently of humans; in other words, they are capable of “artificial actions” with “artificial consequences” that can be positive or negative, lawful or unlawful.[84] This is a development the law cannot ignore. By analyzing the legal landscape of artificial entities, this Article will build a foundation from which to assess legal personhood for AI entities.

C. Theories of Legal Personhood

There are three main distinguishable theories that aim to explain how the legal system approaches artificial entities in conferring legal personhood on them. As corporations have been the first and most prominent paradigm of this status conferral, these theories of personhood often use corporations as their main point of reference.[85] The first theory is known as the “fiction theory.”[86] On this view, legal personhood for artificial entities is a positive law construct that the law attributes to certain entities.[87] It is different from reality: it is a fictitious way of saying that an artificial entity is not a person, but the law approaches it as if it were, allowing these entities to act within the confines of this legal fiction.[88] As the U.S. Supreme Court wrote, “[T]he corporate personality is a fiction, although a fiction intended to be acted upon as though it were a fact.”[89] While some courts[90] and scholars[91] take this legal fiction at face value, others argue that its establishment is of a consequentialist nature: courts confer legal status on artificial entities such as corporations as an easy way of conferring on them legal rights and duties that address certain needs of the legal system.[92] Following this approach, the question of extending legal personhood to other artificial entities beyond corporations would then turn on the pragmatic question of whether those entities’ enjoying legal personhood would further the purposes of the legal system.[93]

Related to this line of argument is the symbolist or aggregate theory.[94] The law gives artificial entities legal personhood as a shorthand for representing and conceptualizing the relations between the natural persons who are members of the artificial entity and the entity itself, as well as the relations between the entity and the world.[95] For instance, instead of having to contract with separate persons who are all members of the same corporation with equal shares, one can say that one contracted with “AllCorp,” thereby gaining legal rights and obligations towards the corporation.[96] In other words, a legal person is the sum of the natural persons who are its members.[97] This theory is, on its face, incompatible with the way we currently understand AI entities as individual unitary entities.

The realist theory stands in opposition to both of these theories. It rejects the idea that legal entities are fictions or symbols and instead perceives them as objective entities that exist beyond the law, which the law takes account of and personalizes.[98] The realist theory is based on the premise that artificial entities that are independent and autonomous and act with real effects in the legal realm, such as owning property or performing transactions, have long existed.[99] These have included churches, trade unions, and private corporations.[100] To treat their existence as a legal fiction entails the internal fallacy that, if legal entities are products of the law, they may not exist before the law produces them.[101] Instead, proponents of the realist theory posit that artificial entities exist before the law grants them legal personhood and continue to exist as legal persons upon its conferral.[102] Thus, they are social entities that exist as a whole in addition to, and irrespective of, the existence of their individual members.[103] Finally, Peter French has advocated for a fourth theory based on the idea that certain artificial entities, such as corporations, may even be treated as moral persons and have natural rights because they can act intentionally.[104]

Courts have engaged in discourse that fits each of these theories at different times.[105] As is evident from this theoretical fragmentation, there is no one generally accepted theory of legal personhood, nor a settled basis for identifying when an entity can be considered a legal person.[106] Instead, courts unsystematically assert various conditions that may result in the emergence of legal personhood for an entity,[107] or rely on circular analysis, asserting the legal rights of an entity because it is a legal person without first establishing what makes it a legal person.[108] I address this issue of circularity in more detail in the analysis of my empirical findings.

D. The Corporations Paradigm

Federal law recognizes in its definition of “person” artificial entities that take the form of “corporations, companies, associations, firms, partnerships, societies, and joint stock companies.”[109] This is the product of a long theoretical discourse that puzzled legal scholars for years as to whether a corporation could and should be regarded as a legal person separate from its shareholders and managers.[110] This distinct legal personhood makes it possible for the legal system to provide rights and duties directly to the corporation and to hold it accountable for its actions irrespective of the actions of individual members and without necessarily holding individual members accountable.[111] “Corporations can own property, sign contracts, and be held liable for” breaches of the law, be found criminally responsible for several offenses,[112] and enjoy constitutional guarantees.[113]

The existence of corporate personhood has prompted many scholars to draw comparisons with regard to the assignment of legal personhood to other artificial entities such as AI entities.[114] Indeed, an analogy between an artificial person such as a corporation and an AI entity is more congruous and easier to conceptualize than a direct analogy to a biological person.[115] What is more, the framework of corporations’ legal personhood establishes an entity that is capable of holding a limited bundle of rights and duties compared to biological persons, which sets out a useful paradigm for analogies drawn with AI entities, given the elastic nature of partial or ad hoc legal personhood.[116] But, unlike corporations, AI entities are neither “‘fictional’ entities” nor associations of natural persons, and the potential application of legal personhood to them makes it important to consider more systematically the ontological and normative justifications of legal personhood.[117]

The law resorts to analogy when there are no better ways to interpret or resolve a new legal phenomenon. I propose that the epistemologically preferable way to begin resolving this legal personhood puzzle for AI entities is, in fact, to resist analogy and resort to empirical analysis instead. Approaching legal personhood from a conditions-based perspective, as it has emerged through legal doctrine, can reveal the sets of factors that courts seek to identify in conferring legal personhood.[118] The rationales may range from inherent identity distinctions to public interest reasons and economic or social pragmatism; identifying them will be helpful in establishing a defensible answer as to whether AI entities can have legal personhood when courts are presented with an AI entity in litigation.

E. Legal Personhood for AI Entities

Before engaging further with the question of legal personhood for artificial entities, and specifically AI entities, it is important to consider whether there is such a thing as legal personhood and whether it is a necessary condition for these entities to exist in the eyes of the law.[119] Indeed, unlike the concept of a “natural person,” there are few parameters in either statute or doctrine as to what constitutes a legal person or what being a legal person means. States are given broad authority and discretion to decide the entities upon which to confer legal personhood and to define the legal consequences of this act in terms of the rights and duties these entities get to enjoy.[120] The U.S. Constitution, though it utilizes the term “person,” provides no definition for it.[121] The U.S. Supreme Court has dealt with questions that implicate legal personhood to some degree but without systematically addressing either the definition of the term or the factors necessary for qualifying an entity as a legal person, particularly with respect to artificial entities.[122] Federal and state statutes add to this lack of clarity by presenting a fragmented landscape regarding the notion of legal personhood, extending it haphazardly and without coherent rationale.[123]

AI entities have already triggered questions of liability and responsibility and have forced the legal system to grapple with issues that test the boundaries of what the law covers in ways that would have been inconceivable a few years ago.[124] In turn, the more popularity and accessibility AI entities gain within society, the more legal issues presenting questions of liability will arise. This means that legislatures and courts will continue to be faced with the challenge of rethinking the nature and legal basis of liability for AI entities.

From a formalistic standpoint, it “may seem nonsensical” to even engage in the conversation of placing AI entities such as algorithms, devices, or robots directly under our laws, given that they are not only nonbiological entities but also inanimate objects.[125] And it may well be true that they do not fit easily into existing paradigms. However, we will never have an authoritative answer as to whether AI entities can be subjected to our laws as individual entities unless we examine and evaluate in depth the notion of legal personhood and how legislatures and courts have interpreted and applied it in the past. Moreover, formalistic responses are often overcome upon further reflection: consider, for instance, how unconvincing the idea that slaves could be legal persons would have sounded to someone living in ancient Greece or the segregated South.[126] That is why the formalistic response should not end the inquiry.

We already face legally difficult situations caused by AI entities and the liability questions that follow: An AI shopping bot that was part of an art installation and was given $100 a week to spend decided to buy MDMA pills and a passport on the dark web.[127] Naturally, authorities had a unique decision to make in charging for this transgression.[128] Tesla cars operating under the AI-based, semi-autonomous Autopilot system have crashed into trucks, resulting in deaths.[129] An Uber self-driving car crashed into a pedestrian, killing her, after erroneously classifying her as a bicycle and deciding not to react immediately to avoid a collision.[130] The Arizona prosecutor in charge of the case decided not to press criminal charges against Uber for the death of the pedestrian due to an insufficient basis for corporate criminal liability under existing criminal statutes, and civil liability was “resolved” outside of court.[131] Similarly, developments in online sales and purchases of goods, which increasingly happen through AI bots executing “smart” contracts, have brought the legal personhood of AI entities to the forefront in private law: If AI entities can contract in their own name, can they also be sued for breach of contract or in tort without having to impose liability on a natural person behind them?[132]

Instances like these have prompted more proposals for extending legal personhood to AI entities. The most significant was the European Parliament’s request that the European Commission draft legislation addressing the forthcoming legal challenges of AI entities, in light of their increasing sophistication and autonomy, by establishing a new status of legal personhood: electronic personhood.[133] The suggestion was intended to facilitate the ascription of civil liability for instances in which AI entities make sufficiently autonomous decisions independently.[134] It would prompt new civil liability rules for electronic agents in which liability would be shared along a continuum by all parties involved in the AI entity, such as the AI entity itself, the engineers, and the manufacturers.[135] The level of autonomy the AI had exercised in the particular wrongful act would dictate the levels of liability to be allocated among the various parties.[136] Naturally, such a sensitive proposal was met with controversy. In response, a significant number of AI experts sent an open letter to the European Commission cautioning that “[f]rom an ethical and legal perspective, creating a legal personhood for a robot is inappropriate whatever the legal status model.”[137]

The unsystematic development of the notion of legal personhood also means that there are no set criteria or conditions for its conferral on certain entities.[138] The wide discretion legislatures and courts enjoy in determining who is given legal personhood, and on what grounds, allows scholars to contemplate potential criteria.[139] The downside of such a lax approach is the inability to clearly assert whether a novel entity meets the criteria to qualify for legal personhood. In this state of flux, scholars have considered various conditions as relevant for legal personhood. These conditions are largely intuitive, nebulous, and, at times, overlapping. In the debate about the personhood status of AI entities, an aggregate list of such conditions in legal scholarship consistently includes concepts of intelligence, autonomy, and awareness. These concepts focus on the AI entities’ ability to learn through experience and adapt to the environment without the help of third parties.[140] Predictably, scholars disagree not only on which conditions should make the list but also on what the concepts that represent these conditions really mean.[141] What is more, arguments for or against a certain condition are often premised on either philosophical or pragmatic grounds. Each approach yields different conclusions in theorizing the nature of legal personhood and its general or specific application to entities that are currently in personhood limbo.[142]

This debate relates at its core to questions about how the law is made and whom it intends to serve. Legislatures function with a targeted audience in mind, and the law largely employs the folk psychology model of human action in its effort to regulate the behavior or mental state of its subjects.[143] The most basic folk psychology argument regarding AI entities is based on the existence of a certain intangible “something” that is essential for personhood: be it mind, soul, feelings, intentionality, consciousness, or free will.[144] If this intangible parameter is what makes a “person” in the eyes of another, then, when it is missing, it is difficult for the commonsense human to conceptualize personhood. This is why scholars’ often-drawn analogies for AI legal personhood sometimes border on anthropomorphism.[145]

Yet analogies become problematic upon further reflection. Legal personhood conditions based on anthropomorphic intuitions would suggest that if AI entities look and act in ways that resemble the human way, then we would be more likely to extend legal personhood to them.[146] While AI entities may not currently have the levels of intellectual or emotional capacity of humans, they often exhibit human-like behaviors. Consider, for instance, the chatbots that operate on websites or phone lines: for all intents and purposes of their given task, they are practically indistinguishable from the human clerks they are meant to replace.[147] Yet AI entities currently do not explicitly enjoy legal personhood. To escape this circular reasoning, we need to empirically identify the conditions on the basis of which artificial entities have been granted legal personhood and relate them to the existing theoretical list of conditions advocated in legal scholarship and policy: autonomy, intelligence, and awareness.[148] I start with the first.

1. Autonomy

Autonomy is a condition that many scholars present as integral to the conferral of legal personhood on artificial entities, and it is integral to AI in its own right.[149] Autonomy is also a concept that has been the source of significant misunderstanding among legal scholars and policymakers, particularly with regard to its relationship to the concept of automation.[150] The shift from automation to autonomy in technology peaks with AI entities, which are fundamentally different from other digital software and ordinary computer algorithms due to their ability to learn independently, compile experience through learning, and produce outcomes separate from the intention or will of their developers.[151] AI entities can receive input, set goals, assess possible outcomes, and calculate the possibility of success without any human control.[152] In other words, autonomous AI entities “‘sense-think-act’ without human involvement.”[153]

This aspect of autonomy represents the ability of an entity to establish and modify inner states without external stimuli such as human intervention.[154] The ability of AI entities to respond to environmental stimuli they perceive through sensory inputs and either change their own inner states or alter and improve the rules on which those inner states are based allows AI entities to perform autonomous decision-making, which is often argued to be a condition for legal personhood.[155] This is because we often associate autonomy closely with responsibility, and responsibility with free will.[156] When most people are asked to point out the difference between humans and animals, the notion of “free will” comes up very frequently: human actions are not necessarily predetermined; rather, we are able to exercise control over them and affect their course.[157] Incompatibilist moral philosophers stress that, in order for people to be held responsible for their actions, they must have freedom of choice between alternative options, as one may not exercise free will without the presence of alternative possibilities.[158] For the purposes of the legal system, this aspect of autonomy suggests that if an AI engages in an act, this act may not be reducible to a person for liability purposes, leaving attribution and punishment up in the air.[159]

The second aspect of autonomy is associated with agency and the idea that an autonomous entity is capable of understanding higher-level intent and direction and can “shift from low-level control towards higher order functions.”[160] In this sense, autonomy represents a continuum of less or more autonomous entities, the levels of which are based on how successfully the entity can represent this shift from automation (think of a parking assistant camera) to full autonomy (akin to the decision-making capacity of a human).[161] Regulatory agencies tasked with qualifying autonomy in AI entities have already recognized this continuum. For instance, the U.S. Department of Transportation, in the Federal Automated Vehicles Policy, distinguishes six levels of autonomy based on what tasks the AI performs in driving. These levels range from “Level 0,” at which the human driver performs all driving tasks, to “Level 5,” at which “the automated system can perform all driving tasks, under all conditions that a human driver could perform them.”[162]
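A minimal sketch encoding that continuum as discrete levels. The two endpoint descriptions track the DOT policy quoted above; the intermediate labels follow the SAE taxonomy on which that policy drew, and the helper function is a hypothetical illustration, not a rule drawn from the policy:

```python
from enum import IntEnum

class DrivingAutomation(IntEnum):
    NO_AUTOMATION = 0            # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5          # system performs all tasks, all conditions

def human_controls_driving(level: DrivingAutomation) -> bool:
    """Illustrative rule only: below Level 3, the human monitors and drives."""
    return level < DrivingAutomation.CONDITIONAL_AUTOMATION

print(human_controls_driving(DrivingAutomation.PARTIAL_AUTOMATION))  # True
```

The ordering of the enum captures the point in the text: autonomy is a spectrum, and where an entity falls on it can be made legally operative.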

For these scholars, autonomy matters for legal personhood because of its connection to legal responsibility and liability.[163] Consider the case of autonomous driving. Many accidents leading to injury or death to third parties happen due to vehicular negligence on the part of the driver.[164] Now imagine that the AI is able to perform the driving tasks at the level of a human driver (Level 5) and an incident that leads to third-party injury or death takes place. The level of autonomy of the AI is particularly instructive for the distribution of legal responsibility and liability.[165] This does not apply only to autonomous vehicles. Think of AI social bots, algorithms capable of communicating and interacting with humans socially.[166] These social bots have the autonomy to make decisions about how to approach a situation, including, oftentimes, instances in which legal agreements are made between the bot and the human, such as a sales contract. Similarly, liability arising out of the nonexecution of this contract, or out of potentially abusive terms within it, turns on the level of autonomy of the AI involved.[167] Ultimately, the autonomy of AI entities raises questions about their legal nature under both epistemological and pragmatic aspects of legal personhood.[168]

2. Intelligence

Autonomy is often conflated with intelligence; however, the two concepts are distinct and not contingent upon each other, even as proposed conditions for legal personhood. The reason for this conflation is that autonomy is necessary for intelligence to manifest,[169] and intelligence represents a manifestation of an entity’s capacity for learning.[170] The contemporary debate on AI intelligence is still premised, in part, on Alan Turing’s famous test and the counter-tests that followed.[171] Turing tested the possibility of a computer that behaves so intelligently that a person cannot tell it apart from another human.[172] Turing would place the computer in question in an imitation game with a human opponent. A third person poses questions to both the computer and the human on any subject the person chooses without being able to see either.[173] Both opponents are tasked with convincing the third person that they are human and that their opponent is not.[174] At the end of the game, the third person must make an educated guess, on the basis of the line of questioning, as to which of the two players is human and which is not.[175] Turing argued that insofar as a machine is able to “fool” a human about its status at least half of the time, the machine would have to pass as intelligent, without discussing any conceptual qualifications of intelligence.[176]

John Searle based his own test on this analytical weakness of the Turing test.[177] Imagine a person locked in a room who receives pieces of paper with Chinese writing scribbled on them. This person has a rulebook with rules on how to identify Chinese characters and how to respond to them with other Chinese characters, and with it is able to send appropriate responses in Chinese back out of the room.[178] Whoever sits outside of that room receiving the responses will assume that the person in the room understands Chinese. However, the person in the room is merely processing instructions on identifying and manipulating Chinese symbols.[179] With this test, Searle argues that simple information processing and true understanding are distinct because only true understanding can attribute meaning, and meaning is necessary for intelligence, as opposed to a mere input-output process that only imitates understanding.[180] The struggle with the concept of intelligence in AI entities is not dissimilar to the difficulty we have with the concept of personhood. Again, as Searle’s counter-test tried to prove, humans are prone to looking for some greater intangible property, be it the mind, intentionality, or consciousness, that has meaning outside of and beyond the operational process of sensory input and output.[181] On this basis, we tend to tie our notions of personhood to this intangible property that we associate with intelligence.

Legg and Hutter emphasized the importance of defining intelligence for AI entities if we are to have a comprehensive understanding of intelligence. Recognizing the absence of consensus as to what intelligence means, they performed a quantitative analysis of informal definitions of human intelligence across scientific fields and concluded that, throughout the cognitive sciences, “[i]ntelligence measures an agent’s ability to achieve goals in a wide range of environments.”[182] Just as with autonomy, it is helpful to view intelligence as a continuum instead of a binary.[183] What we consider “intelligent” or “smart” in artificial entities varies. Consider two entities with the ability to achieve goals in a given environment. One of them is able to achieve more goals than the other, to achieve the same number of goals more expediently, or to draw insights from multiple other environments in achieving the same number of goals.[184] We would naturally qualify this entity as more intelligent than the other.
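Though the Article relies only on the informal definition quoted above, it may help, as an illustrative aside, to note that Legg and Hutter also gave that definition a formal counterpart, a “universal intelligence” measure that makes the “wide range of environments” idea concrete. In simplified form:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$

Here, $\pi$ is the agent being measured, $E$ is a set of computable environments, $V_\mu^\pi$ is the agent’s expected cumulative reward in environment $\mu$, and $2^{-K(\mu)}$ weights each environment by its simplicity, with $K$ denoting Kolmogorov complexity. An agent scores as more intelligent the better it performs across many environments, which tracks the continuum view described in the text.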

AI theory represents this through the distinction between “weak” and “strong” intelligence, but the distinction can be given more nuance by bringing in the notion of autonomy.[185] First, consider an entity that is intelligent in that it is able to pursue and achieve, in a particular predetermined way, any goals that the programmer has anticipated for it.[186] Now consider another entity that has the ability to pursue general goals semi-autonomously: instead of a programmer having determined in advance a distinct path to success or sets of subgoals for it, the entity, by mining new data, is able to learn how to set and achieve certain goals through supervised training.[187] Finally, consider an entity that is capable of doing all of this fully autonomously by training itself without any human supervision. These three scenarios represent three points on the continuum of intelligence that an AI entity can reach on the basis of how autonomous it is.[188]

For the purposes of legal personhood, intelligence understood as a continuum often correlates with an entity’s capacity to understand and exercise certain legal rights and duties.[189] Consider this among biological persons. The law confers limited personhood on children below a certain age due to their limited developmental capacity.[190] The law equally confers limited legal personhood on biological persons with severe intellectual disabilities.[191] As with the entirety of the notion of legal personhood, the law implicitly infers patterns and conditions for legal personhood on the basis of desired outcomes, without contemplating ex ante what these patterns and conditions are or what they mean for the notion of legal personhood.

3. Awareness

Intelligence and awareness, or consciousness, are concepts often intimately connected in scholarly debates over attributing legal personhood to new types of entities.[192] Scholars often use the terms “awareness” and “consciousness” interchangeably and without much definitional certainty.[193] However, the legal system has not traditionally treated consciousness as a condition for legal personhood but rather as a circumstance that may affect, in certain instances, liability.[194] Natural persons who are asleep, in a coma, or experiencing temporary loss of consciousness are not deprived of their legal personhood on the basis of lacking consciousness.[195] Instead, these factors may result in a determination that they are not liable or are less culpable.

Scholars also sometimes link the notions of awareness and consciousness with the notion of intentionality.[196] Searle, for instance, writes, “The ascription of an unconscious intentional phenomenon to a system implies that the phenomenon is in principle accessible to consciousness.”[197] I wish to push back on this linkage, both to facilitate a conceptual understanding of legal personhood for artificial entities and to remain consistent with prior legal understandings. Notions of awareness or even consciousness are distinct from notions of intentionality. Richard Posner explains why by arguing that an overlapping understanding of awareness and intentionality would lead us to conclude that, for example, railroad managers “are murderers because they” have a high degree of certainty, which includes awareness, “that their trains will run down a certain number of people” at crossings each year.[198] While they may be aware of this problematic outcome, “they derive no benefit” from it, nor do they invest any resources in bringing it about.[199] Intentionality, understood through this example, is the desire to bring about an outcome by investing certain resources in its pursuit.[200] Awareness, on the other hand, requires no such investment or pursuit.

Though awareness is distinct from intentionality, it is still a component of it in the sense that one must be aware of an act one is performing. One’s state of awareness is knowledge of what one is doing or is capable of doing.[201] In turn, awareness makes agents capable of intentional action.[202] Joel Feinberg and Bonnie Steinbock extend the notion of intentional action to the ability to have “interests,” that is, the capacity to have a stake in things, which belongs only to entities that are consciously aware.[203] Legal personhood then becomes the means through which one may claim or protect these interests but also be held accountable for violating the interests of others.[204] This notion of awareness becomes the basis for accountability, since our legal system is premised on the idea that entities aware of their actions may be held to account and may hold other aware entities similarly accountable.[205]

4. Moral Personhood

Legal personhood arguably represents “the widest class of persons,” encompassing both natural and artificial entities and granting them the ability to act under the law, such as to contract, sue for damages, or be subjected to certain coercive measures.[206] There is, however, another class of persons, the moral person, whose status in relation to legal personhood remains unclear. Interestingly, the term “legal person” historically developed in juxtaposition to the term “moral person.”[207] Though it is not the purpose of this Article to delve into ontological questions of moral personhood, aspects of moral personhood often come up as a precondition to an entity’s capacity for legal personhood, particularly within the realm of criminal accountability and punishment. Some authors argue that legal accountability and, particularly, criminal accountability raise moral issues and require agents capable of culpable behavior.[208] This is because, at least largely in the context of desert as a distributive principle of criminal liability, the notion of accountability is entangled with that of moral responsibility and the idea of individual blameworthiness.[209] The argument then follows that moral personhood is not just a subclass of legal personhood that can be found in distinct agents but a precondition to legal personhood, insofar as we accept legal personhood to be the basis for legal accountability.

Though this Article’s empirical analysis will assess whether courts actually look to moral personhood as a condition in assigning legal personhood, I wish to push back on the suggestion that moral personhood is a precondition to legal personhood on two interrelated grounds. First, moral personhood relates to moral status and not necessarily to the ability to be held legally accountable.[210] Second, our legal system includes many examples in which questions of moral responsibility do not hinder the capacity of entities for legal personhood and in which legal personhood is not always entwined with legal accountability. Francis Kamm has proposed that an agent has moral status because this agent “‘count[s]’ morally in [its] own right” and has permission to conduct itself “for its own sake.”[211] This is what, as Nick Bostrom suggests, distinguishes a human from, for instance, a rock.[212] The rock does not have moral status, as is evident from the fact that we may treat it any way we like.[213] We can choose to throw it, crush it, put it in our pocket, and “subject it to any treatment . . . without any [specific] concern for the rock itself.”[214] Conversely, entities with moral status carry legitimate interests that society has to take into account when interacting with them; moral status may also involve a set of constraints on the sphere of possible actions that other entities may perform against those who hold it.[215] For example, entities with moral status may have an inherent right to their life, their property, their bodily integrity, and so on.[216] This moral status effectively dictates whether a behavior undertaken against an entity is morally good or bad and thus allows entities to distinguish right from wrong.[217]

To establish moral personhood as a precondition to legal accountability presumes that the only entities susceptible to punishment under the law are entities with moral status. Consequently, only entities with moral status could enjoy legal personhood, as legal personhood is generally accepted as a precondition to legal accountability. However, our legal system is filled with instances in which entities with legal personhood do not require moral status or any sense of moral responsibility.[218] For instance, corporations can be held liable and criminally responsible even though most agree that they do not have moral personhood so as to deserve punishment.[219] On the flip side, infants, who have very narrow legal rights and responsibilities, nonetheless have legal personhood.[220] Yet intelligent animals that have the capacity to learn right from wrong in the context of their relationship with their masters, and can thus be punished for disobedience, have not been accorded legal personhood.[221] Finally, strict liability regimes establish accountability absent questions of fault entirely.[222] There is thus good reason to retain, at least conceptually, skepticism about equating moral personhood with legal personhood.[223]

III. Empirical Analysis

A. Introduction

Robert Geraci, based on Lawrence Solum and Woodrow Barfield’s discussion, has suggested that as people increasingly interact with AI in their daily life through often anthropomorphic conceptions, they will be tempted to grant it legal rights and duties.[224] As I discussed in the first part of this Article, lead scholars argue that since AI is intelligent, has sufficient levels of autonomy in making decisions, and is sufficiently aware to learn from its own experience and to interact with other legal subjects, it may be granted legal personhood.[225] But legal personhood is a legal and not factual or normative status, and to fully answer the question of whether AI can have legal personhood we need to shift our attention to the law. Doing so will not only provide us with a clearer legal standard but will also allow us to assess the degree to which the conditions that scholarship focuses on for acquisition of legal personhood track the ones courts have considered in attributing legal personhood to artificial entities.

This Part complements the discussion of legal personhood in Part II with an empirical legal analysis. In doing so, it goes beyond the main arguments of the scholarship that assess legal personhood for AI on the basis of “missing-something” arguments relating to theoretical concepts of personhood, such as those described above of awareness, intentionality, and autonomy.[226] Empirically assessing whether there is consonance or dissonance between theory and practice on the notion of legal personhood has the potential to enrich debates with tangible data reflecting the current state of the law. A clearer understanding of the legal framework will facilitate movement in theory, policy, and litigation in a direction that is more compatible with legal expectation.

The goal of this Article also informs the methodological approach it undertakes. This Article positions itself between metaphysical and conditions-based approaches to inquiry. Whereas a metaphysical inquiry theoretically discusses the possible attributes that an artificial entity ought to possess to qualify as a legal person,[227] a conditions-based approach looks for conditions under which an entity is positively treated as a legal person by the law.[228] The first approach is interested in answering why some entities have legal personhood over others, while the second acknowledges that legal personhood is a legal status and looks for the common denominators that entities with legal personhood share to establish a standard. This Article is premised on the idea that the best way to truly conceptualize legal personhood in U.S. law is to find the area of overlap between these two approaches.[229] By undertaking both a theoretical and an empirical quest, this Article merges the two accounts by placing them face to face and identifying whether there is commonality between them to better understand legal personhood regarding AI entities.

This methodology, however, carries an important limitation. This Article does not undertake a pragmatic, consequence-based approach, one seeking to answer questions such as what consequences of legal personhood are desirable for its conferral as a legal status on an entity.[230] This is because the lens of this Article is more backward-looking, shedding light on normative debates and assessing their overlap with existing legal doctrine based on empirical findings. A consequence-based approach is largely forward-looking and policy-shaping, and therefore intended for a goal different from that of this Article. But even though this Article does not undertake this approach stricto sensu in its methodology, the discussion of the data will carry, at times, pragmatic undertones given the socio-legal challenges that AI entities bring to the legal personhood debate.

This Article aims to identify the factors that courts consider when deciding whether an entity is a legal person. The data collected reflect U.S. caselaw from the U.S. Supreme Court, federal courts of appeals, federal district courts, and state courts that have considered what makes an artificial entity a legal person. I use Qualitative Content Analysis (QCA) to code and analyze these data with the intention of revealing patterns in what conditions courts have considered most instrumental in resolving questions of legal personhood and identifying the frequency with which they consider certain factors.

B. Data Collection

The caselaw search into what makes an entity a legal person began broadly and narrowed progressively. I first performed a search using the search term “legal person,” specifically looking for cases that included mention of corporations and companies as artificial entities. I then searched the terms “legal person” and “artificial entities” together and excluded terms relating to fetuses and abortions so as to exclude biological entities. I excluded these terms because the conditions applied in biological persons cases involved a different set of factors, terminology, and considerations than those concerning artificial persons. I then narrowed the search of “legal person” and “legal person and artificial entities” down to before the year 1947 and the passage of 1 U.S.C. § 1. As stated by 1 U.S.C. § 1, “In determining the meaning of any Act of Congress the words ‘person’ and ‘whoever’ include corporations, companies, associations, firms, partnerships, societies, and joint stock companies, as well as individuals.”[231] By narrowing my search to before this date after the original wider search, I was able to focus on what courts determined was important in granting legal personhood to artificial entities before the passage of the statute that defined a legal person as including corporations and other similar types of artificial entities.

After performing an exhaustive caselaw search regarding legal persons and artificial entities, I began searching for other potentially relevant terms. The next search terms were “juridical entities” and “juridical persons.” By looking at these terms, I identified more caselaw on what conditions the courts considered when determining whether a juridical person is subject to and recognized by the law. I searched the terms “juridical entities” and “juridical persons,” first looking before 1947 and then expanding the search to include all years. When the searches stopped returning new caselaw, I recognized that I had completed the search and had an exhaustive list of the available caselaw.

C. Qualitative Content Analysis

QCA is a research method “for making replicable and valid inferences” from data to their context, with the purpose of providing knowledge and new insights.[232] By analyzing texts, one can develop categories that describe the phenomenon of legal personhood across courts and use these categories to identify legal personhood conditions. Using a deductive approach, I developed a structured categorization matrix to code the data according to categories, reviewed the data by applying this Article’s research question to it, and extracted relevant data for QCA.[233] I ultimately arrived at a final list of fifty-three cases narrowed from the initial search, spanning the U.S. Supreme Court, federal appellate and district courts, and state supreme and lower courts. I then coded the data for thematic content to interrogate how U.S. courts approach questions of legal personhood.

Through QCA, I identified common legal grounds courts relied on to resolve questions of legal personhood and described them using code words compiled in a codebook. For this type of analysis, I focused on the subject matter of the cases and not the style, syntax, or other structure of the judicial opinions. I scrutinized the portions of the opinions that discussed the legal basis for finding an entity is a legal person when coding. This process enabled me to begin observing themes and patterns in the legal grounds that courts have used to define legal persons or resolve issues of legal personhood.

After the initial round of coding based on a careful distillation of the themes of the courts’ discussion of legal personhood, I went through several rounds of feedback loops to ensure the codes were truly being applied to the identical legal basis for decisions across different cases. In this process, I also refined the codebook so that it accurately represented the courts’ discussion of legal personhood while remaining succinct: where multiple code words were identified as so similar as to be redundant, I collapsed them into a single code word. Several code words were used in only a single case; these were still useful, however, as they identified a basis for legal personhood that a court utilized that was not present in the other cases. After the final round of edits, I organized the final thirty-two code words into frequency distribution tables to assess the frequency with which each jurisdictional level had used a certain code word in adjudicating cases.
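
For concreteness, the sketch below illustrates in Python one way a frequency distribution table of this kind can be tabulated once cases have been coded. It is a minimal illustration under assumed inputs: the case names and code assignments are hypothetical placeholders, not the study’s data.

    from collections import Counter

    # Hypothetical coded data: each case is mapped to the set of codebook
    # terms its legal personhood discussion was coded under (placeholders).
    coded_cases = {
        "Case A": {"statute based", "right to sue and be sued"},
        "Case B": {"statute based", "implicit in statute"},
        "Case C": {"right to sue and be sued", "right to contract"},
    }

    # Frequency distribution: the number and share of cases in which each
    # code word appears at least once.
    counts = Counter(code for codes in coded_cases.values() for code in codes)
    total = len(coded_cases)
    for code, n in counts.most_common():
        print(f"{code}: {n} of {total} cases ({n / total:.0%})")

Per-jurisdiction tables would follow the same pattern, with the cases first grouped by court level before counting.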

The codes used in this study that reflect conditions used by courts to determine legal personhood appear below along with an operational definition of each concept:[234]

  1. Right to property: whether the artificial entity has the right to own property.

  2. Right to transact: whether the artificial entity is able to engage in transactions in its own name.

  3. Context specific: whether the artificial entity enjoys legal personhood on the basis of some contextual element (this is the term used verbatim by courts).

  4. Analogous to natural person: whether the artificial entity has sufficiently similar characteristics to be considered akin to a natural person.

  5. Implicit in statute: whether the type of artificial entity in question, though not explicitly included in statute, could be inferred to be covered by statute.

  6. Legal accountability: whether the artificial entity can be held legally accountable.

  7. Capacity for debt: whether the artificial entity has the capacity to incur debt.

  8. Capacity for recovery: whether the artificial entity has the capacity to recover debt owed to it.

  9. Citizenship: whether the artificial entity is a citizen of a country.

  10. Constitutional rights: whether the artificial entity can enjoy constitutional rights.

  11. Fiat of the state: whether the artificial entity exists by fiat of the state (this is the term used verbatim by courts).

  12. Independent unit: whether the artificial entity forms a unit independent from a larger unit—usually in relation to governmental entities.

  13. Irrelevance of creator: the creator of an artificial entity is irrelevant for the determination of its legal personhood.

  14. Irrelevance of label: the label under which the artificial entity operates is irrelevant for the determination of its legal personhood.

  15. Irrelevance of shape: the shape of an artificial entity is irrelevant for the determination of its legal personhood.

  16. Irrelevance of size: the size of an entity is irrelevant for the determination of its legal personhood.

  17. Legal chart: whether the artificial entity has a legal chart that stipulates its regulation.

  18. Legal standing: whether the artificial entity has legal standing before courts.

  19. Made up of individuals: whether the artificial entity is an aggregation of natural persons.

  20. No autonomy: that the artificial entity’s lack of autonomy weighs against the determination of its legal personhood.

  21. No self-determination: that the artificial entity’s lack of self-determination weighs against the determination of its legal personhood.

  22. No self-representation rights: that the artificial entity’s lack of self-representation rights weighs against the determination of its legal personhood.

  23. Perpetuity: whether the artificial entity can perpetually exist.

  24. Rights and duties: whether the artificial entity has general rights and duties (this is the term used verbatim by courts).

  25. Right to contract: whether the artificial entity has the right to contract with other entities.

  26. Right to counsel: whether the artificial entity has the right to counsel.

  27. Right to sue and be sued: whether the artificial entity has the right to sue and be sued.

  28. Self-representation: whether the artificial entity can represent itself in a court of law.

  29. Societal responsibilities: whether the artificial entity has responsibilities towards society.

  30. Spirit and purpose of statute: that legal personhood is attributed to an artificial entity by the spirit and purpose of a statute.

  31. Statute based: whether legal personhood for an artificial entity is based on a specific statute.

  32. 1 U.S.C. § 1: whether legal personhood for an artificial entity is based on 1 U.S.C. § 1.

D. Quantitative Data Analysis & Findings


The second part of this Article’s empirical analysis is descriptive and is based on several frequency analyses. In statistics, frequency represents the number of times an event occurs; frequency analysis examines, among other measures, the share of observations in which each event appears. Across the total of fifty-three cases from both federal and state courts at the trial and appellate levels, the courts considered thirty-two characteristics to determine whether artificial entities are legal persons. As a longitudinal matter, the majority of these cases are recent and, where they build on precedent, represent the latest iteration by courts of the same jurisdictional level.

Out of these thirty-two characteristics, the three most frequently considered among all fifty-three cases are that legal personhood is “statute-based,”[235] that it is reflected in an entity’s “right to sue and be sued,”[236] and that, when a statute does not explicitly include a specific entity as a legal person, the entity can otherwise be read to be “implicit in statute.”[237] These terms appeared in 47%, 40%, and 15% of all cases respectively. Although the condition “implicit in statute” falls within the top three characteristics bearing on whether an artificial entity is a legal person, it is important to note that it appears in twenty-five percentage points fewer of the total cases than the second most frequent condition, the “right to sue and be sued.”
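
These percentages can be sanity-checked against the sample size: a condition appearing in roughly twenty-five of the fifty-three cases yields about 47%. The short sketch below performs this arithmetic; the underlying case counts are back-derived from the reported percentages and are assumptions, not figures reported in this Article.

    TOTAL_CASES = 53
    # Hypothetical counts inferred from the reported shares of 47%, 40%, and 15%.
    top_conditions = {
        "statute-based": 25,
        "right to sue and be sued": 21,
        "implicit in statute": 8,
    }
    for condition, n in top_conditions.items():
        print(f"{condition}: {n}/{TOTAL_CASES} = {n / TOTAL_CASES:.0%}")

    # Gap between the second and third conditions, in percentage points:
    print(f"gap: {(21 - 8) / TOTAL_CASES:.0%}")  # prints roughly 25%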


To determine whether the characteristics that are most important in determining whether an artificial entity is a legal person are consistent across various courts and federal versus state jurisdictions, I next categorized the fifty-three cases by court. There are five categories of case decisions: those decided by the U.S. Supreme Court, circuit courts, district courts, state supreme courts, and lower-level state courts.


Out of the fifty-three cases analyzed, ten were U.S. Supreme Court cases. Based solely on the ten Supreme Court cases determining whether an artificial entity is a legal person, the two most frequently considered conditions in making such a determination are “statute-based”[238] and “right to sue and be sued,”[239] which appear in 50% and 40% of U.S. Supreme Court cases respectively. Two conditions tied for third place: that the entity enjoys “citizenship”[240] and “constitutional rights,”[241] each of which is discussed in 20% of the U.S. Supreme Court cases. Unlike in the general pool of cases, where the condition “implicit in statute” was among the top three conditions in determining whether an artificial entity is a legal person, this condition was not considered once in any of the Supreme Court cases. The eight other conditions, considered in only one case each, are an entity’s “right to property,”[242] its “right to transact,”[243] that an entity be “analogous to a natural person,”[244] that an entity can bear “legal accountability,”[245] that an entity is based on a “legal chart,”[246] that an entity is “made up of individuals,”[247] that an entity enjoys “perpetuity,”[248] and its “right to contract.”[249]


Eight of the fifty-three cases analyzed were decided by a U.S. circuit court. In cases decided by the circuit courts, the two most frequently considered conditions in determining whether an artificial entity is a legal person are “statute-based”[250] and “implicit in statute,”[251] which appear in 50% and 38% of circuit court decisions respectively. Two conditions, the “right to sue and be sued”[252] and an entity falling under “1 U.S.C. § 1,”[253] tie for third among circuit court determinations of legal personhood, appearing in 25% of the cases analyzed. The two other characteristics, mentioned in only one case each, are “citizenship”[254] and that an entity represents an “independent unit” separate from a larger entity that may encompass it.[255] It is interesting to note that circuit court decisions are the only subcategory of cases that includes the condition “implicit in statute” among its top three characteristics, mirroring the trend reflected by the full pool of cases.


Fourteen of the fifty-three cases analyzed were U.S. district court decisions. Among the fourteen cases, a total of seventeen characteristics were considered to determine whether an artificial entity is a legal person. The three most frequently considered characteristics among these cases are the “right to sue and be sued,”[256] “statute-based,”[257] and the “right to contract.”[258] These characteristics were discussed in 50%, 36%, and 21% of these cases respectively.


Nine of the fifty-three cases analyzed were decided by state supreme courts. The nine cases discuss sixteen conditions to determine whether an artificial entity has legal personhood. The “right to sue and be sued”[259] and “statute-based”[260] are the two most frequently discussed conditions in state supreme court cases: both are discussed in 33% of the cases analyzed. There is not a clear second or third most frequent condition among these cases because five conditions tie for the next rank, each discussed in 22% of the cases. These five conditions are “right to transact,”[261] “right to property,”[262] “analogous to natural person,”[263] “right to contract,”[264] and “constitutional rights.”[265]


Finally, the remaining twelve of the fifty-three cases analyzed were decided by lower state courts. This category includes intermediate appellate courts and trial courts. These twelve cases address eighteen conditions to determine whether an artificial entity has legal personhood. The two conditions most frequently considered among this category of cases are “statute-based”[266] and “right to sue and be sued,”[267] which appear in 67% and 33% of cases respectively. The three conditions tied for the third most frequently discussed among lower state court cases are “made up of individuals,”[268] “rights and duties,”[269] and “independent unit,”[270] each discussed in 25% of cases.

Overall, the top conditions that courts take into account when determining whether an artificial entity is a legal person are: whether legal personhood is conferred on the entity directly by statute; whether, if not conferred directly, legal personhood can be read implicitly into existing statutes covering other entities; whether the artificial entity can sue and be sued; and finally whether the entity is an aggregate of natural persons. As expected, a statutory basis for legal personhood is one of the two clearest ways to successfully argue that an artificial entity is a legal person. The other strongest way to argue in favor of legal personhood is by showing that the artificial entity can sue and be sued. Among all cases evaluated, there are thirty-two distinct conditions considered, some of which are discussed in only a single case. Therefore, despite some top conditions arising out of the quantitative analysis, there are many disparate conditions that courts look to when determining legal personhood.


Fortunately, there are observable trends in the data that provide insights into the formation and determination of legal personhood. The condition of legal personhood conferral by statute appeared in about half of all cases and in half of the U.S. Supreme Court cases. Similarly, the condition of capacity to sue and be sued was discussed in 40% of all cases analyzed and in 40% of U.S. Supreme Court cases. In some cases, this right to sue and be sued was identified as a right of legal personhood that was created by statute. These results show that legal personhood in the majority of cases lies in the hands of legislators. This is an important result given the relative scarcity of clear statutory provisions regarding legal personhood. The conditions this research identifies may provide a framework for the considerations a legislature ought to take into account before conferring legal personhood on entities other than corporations in the future.

My findings also warrant an approach to the question of conferring legal personhood on AI entities that is more skeptical and cautious than the proposals of certain lead voices in the scholarship for AI personhood and liability.[271] This more cautious approach is critical given the evident gap between scholarship, on the one hand, and practice, on the other, as my data reflect. A trend identified across all cases, and largely in state courts and federal district courts, is the condition that an entity be an aggregate of individuals to have legal personhood.[272] This condition is particularly instructive in the case of AI entities given that, unlike other artificial entities such as corporations, AI entities are not the sum of other legal persons. Consider how, as Part II of this Article discusses, theory focuses on the conditions of autonomy, intelligence, and awareness as key for AI legal personhood.[273] Perhaps with the exception of the condition that an entity be “analogous to natural persons,” which appears in 6% of all cases and notably in 25% of cases in the United States courts of appeals, the data confirm that, for courts, legal responsibility for artificial persons remains reducible to the natural persons that comprise them as the main source of their action.[274] What is more, conditions relating to autonomy, intelligence, and awareness are almost absent from the courts’ consideration of legal personhood for artificial entities. The only exception is autonomy, which is considered as a condition for legal personhood in 2% of all cases.

Another interesting pattern the data reveal is what I will call the circularity problem of legal personhood. Throughout the two Parts of this Article, I have argued that there is not only significant theoretical division on what the constituent elements of legal personhood are but also significant dissonance between theory and practice and within practice itself. Beyond identifying and compiling the conditions for legal personhood through the QCA, it is important to also address these conditions critically. Consider, for instance, some of the conditions that courts have looked for in deciding whether an entity enjoys legal personhood, such as whether the entity has the “right to sue and be sued,” the “right to contract,” or “constitutional rights.” While this may reflect the realist theory of legal personhood, these are conditions that an entity already enjoying legal personhood possesses; yet courts use them to answer affirmatively the question of legal personhood in an unresolved case. This is the issue of circularity in legal personhood.

The circularity problem may be, in part, explained if one looks at the legal development of artificial entities, and particularly corporations, as an example of increased pragmatism on the part of the courts. That is, corporations already presented a socio-legal phenomenon with existing legal effects. With few and incoherent statutory mandates on how to treat these entities, courts have had to find a way to normalize corporations as a legal phenomenon. To do so, courts have looked for conditions that these entities already de facto enjoyed and legitimized them. Consider also some of the theories of legal personhood. They, too, reflect, likely unintentionally, this circularity. The fiction theory of legal personhood[275] and the realist theory,[276] while diametrically opposed to one another, share a very important common foundation: the idea that artificial entities exist and act in certain ways, ways that are commonly covered by the law, before the law actually covers them. The slower and more backward-looking quality of the law, as well as of judicial decision making, has further reinforced the circularity problem. It is, however, an important flaw to consider when deciding where to place a new category of artificial entities, AI entities, within the legal system.

In the absence of legislation, there is a larger pool of conditions that courts consider in conferring legal personhood on an artificial entity. This variability of conditions raises issues of legal indeterminacy, uncertainty, and potential arbitrariness, as courts have often looked at many conditions at once without contextual coherence across cases in the conditions they consider. For instance, in cases similar in context, courts at the same jurisdictional level have looked for the conditions of “ability to own property,” “right to contract,” “right to sue and be sued,” and “constitutional rights” in an earlier case,[277] and “ability to own property,” “ability to transact,” “right to sue and be sued,” and “perpetuity” in a later case,[278] without explanation as to why certain conditions controlled in one case but not the other. And while there is some overlap, as the data show, there is also significant variation, which in the case of assessing legal personhood for AI entities can prove problematic due to the novel and singular challenges these entities pose for the law.

The indeterminacy of the caselaw reflects the complexity of the legal personhood concept and its many potential dimensions. However, the conditions my data have identified may provide a foundation for understanding the factors behind the presence or absence of legal personhood where statutes are silent. Given the absence of AI entities from any existing statutory provisions and their interpretations, it is important to seriously consider what this empirical analysis reveals about legal personhood before conferring it on entities other than corporations, particularly in light of future litigation stemming from potential actions undertaken by AI entities.

IV. Conclusion

As AI entities continue to operate at an increasing distance from their developers and owners, they will continue to bring new challenges to legal frameworks for attribution and liability for potential civil or criminal transgressions. The current legal landscape is not prepared to deal with the question of accountability for AI entities until it answers the question of whether these entities can enjoy legal personhood so as to also bear legal accountability. At the same time, yielding to popular calls that advocate giving legal personhood to AI entities for reasons that may seem normatively attractive but do not have a clear legal foundation should give us pause. This Article assesses the scope of legal personhood as it relates to AI entities on the basis of a theoretical survey and an empirical study that result in two claims: one descriptive and one prescriptive. The descriptive claim is that the courts’ overall approach to legal personhood has been more disparate than many have assumed, and it does not support legal personhood for AI entities. The prescriptive claim is that the legal basis for personhood that the empirical findings of this Article present prevents courts from conferring legal personhood on AI entities and should give legislators pause before doing so. If legislators and courts consider the conditions for legal personhood this Article identifies as applied to AI entities, they will recognize the incompatibility between legal personhood and these entities. Without an understanding of this incompatibility, theory, policy, and litigation surrounding AI entities could move in a direction that undermines legal certainty and upsets legal expectation.


  1. See Evan J. Zimmerman, Machine Minds: Frontiers in Legal Personhood 8 (Aug. 29, 2017) (unpublished manuscript), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2563965 [https://perma.cc/PU5F-YTMP].

  2. See, e.g., Alex Davies, Tesla’s Latest Autopilot Death Looks Just Like a Prior Crash, WIRED (May 16, 2019, 5:46 PM), https://www.wired.com/story/teslas-latest-autopilot-death-looks-like-prior-crash/ [https://perma.cc/C4PX-GLUD]; Paul Scharre, Counter-Swarm: A Guide to Defeating Robotic Swarms, War on Rocks (Mar. 31, 2015), https://warontherocks.com/2015/03/counter-swarm-a-guide-to-defeating-robotic-swarms/ [https://perma.cc/M7VH-WYYG]; Bree Burkitt, Self-Driving Uber Fatal Crash: Prosecution May Be Precedent Setting, azcentral (June 22, 2018, 7:43 PM), https://www.azcentral.com/story/news/local/tempe/2018/06/22/self-driving-uber-fatal-crash-prosecution-may-precedent-setting/726652002/ [https://perma.cc/WJ3U-BBFC].

  3. See J.K.C. Kingston, Artificial Intelligence and Legal Liability, in Research and Development in Intelligent Systems XXXIII: Incorporating Applications and Innovations in Intelligent Systems XXIV, at 269, 274 (Max Bramer & Miltos Petridis eds., 2016).

  4. Bert-Jaap Koops et al., Bridging the Accountability Gap: Rights for New Entities in the Information Society?, 11 Minn. J.L. Sci. & Tech. 497, 514 (2010).

  5. Id. at 517.

  6. Curtis E.A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Tech. L.J. 147, 153–54, 178 (1996).

  7. See Ryan Calo, Robots in American Law 23 (Feb. 24, 2016) (unpublished manuscript), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2737598 [https://perma.cc/V3YP-FET8]; Karnow, supra note 6, at 182.

  8. Fahad Alaieri & André Vellino, Ethical Decision Making in Robots: Autonomy, Trust and Responsibility, in Social Robotics 159, 159–60 (Arvin Agah et al. eds., 2016) (“[N]on-predictability and autonomy may confer a greater degree of responsibility to the machine . . . .”); Peter M. Asaro, A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics, in Robot Ethics: The Ethical and Social Implications of Robotics 169, 169, 179–81, 183 (Patrick Lin et al. eds., 2012); Samir Chopra & Laurence F. White, A Legal Theory for Autonomous Artificial Agents 26–27, 154–56 (2011) (arguing that AI could be given legal personality but that the final decision is a pragmatic one); Luciano Floridi & J.W. Sanders, On the Morality of Artificial Agents, 14 Minds & Machs. 349, 349, 357, 364 (2004); Steven J. Frank, Tort Adjudication and the Emergence of Artificial Intelligence Software, 21 Suffolk U. L. Rev. 623, 626, 628–29, 666 (1987); Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence Systems 21–22 (2015) [hereinafter Hallevy (2015)]; Gabriel Hallevy, Unmanned Vehicles: Subordination to Criminal Law Under the Modern Concept of Criminal Liability, 21 J.L. Info. & Sci., no. 2, 2012, at 200, 207–08 [hereinafter Hallevy (2012)]; see also Christina Mulligan, Revenge Against Robots, 69 S.C. L. Rev. 579, 585–86, 589–90 (2018); Migle Laukyte, Artificial and Autonomous: A Person?, in AISB/IACAP World Congress 2012: Social Computing, Social Cognition, Social Networks and Multiagent Systems 66, 69 (Gordana Dodig-Crnkovic et al. eds., 2012); S.M. Solaiman, Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy, 25 A.I. & L. 155, 165 (2017); Amanda Wurah, We Hold These Truths to Be Self-Evident, That All Robots Are Created Equal, 22 J. Futures Stud., Dec. 2017, at 61, 63, 65, 67–69. See generally Lawrence B. Solum, Essay, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992).

  9. Alaieri & Vellino, supra note 8, at 166; see also Joanna J. Bryson et al., Of, for, and by the People: The Legal Lacuna of Synthetic Persons, 25 A.I. & L. 273, 277–78 (2017); Arthur Kuflik, Computers in Control: Rational Transfer of Authority or Irresponsible Abdication of Autonomy?, 1 Ethics & Info. Tech. 173, 180 (1999).

  10. See Bryson et al., supra note 9, at 288.

  11. See Mihailis E. Diamantis, The Extended Corporate Mind: When Corporations Use AI to Break the Law, 98 N.C. L. Rev. 893, 906, 925 (2020); Bryson et al., supra note 9, at 283.

  12. See Mark A. Geistfeld, A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation, 105 Calif. L. Rev. 1611, 1629 (2017).

  13. Alaieri & Vellino, supra note 8, at 163; Bernd Carsten Stahl, Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency, 8 Ethics & Info. Tech. 205, 208–10 (2006).

  14. See Bryson et al., supra note 9, at 278.

  15. See Comm. on Tech., Exec. Off. of the President, Preparing for the Future of Artificial Intelligence 5 (2016).

  16. John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, Stan. (Apr. 3, 1996, 7:48 PM), http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html [https://perma.cc/A5HE-HJFC].

  17. See Comm. on Tech., supra note 15, at 5–6; Paulius Čerka et al., Liability for Damages Caused by Artificial Intelligence, 31 Comput. L. & Sec. Rev. 376, 380 (2015).

  18. See Comm. on Tech., supra note 15, at 6.

  19. Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, at xiii (2010).

  20. See Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 1–2 (3d ed. 2010).

  21. Id. at 1–2, 155, 336, 727–28, 1048. Others have proposed taxonomies that are based on a similar categories approach to motivations and behavioral tasks. See Frank Chen, AI, Deep Learning, and Machine Learning: A Primer, Andreessen Horowitz (June 10, 2016), http://a16z.com/2016/06/10/ai-deep-learning-machines [https://perma.cc/Y76M-JJZ8]. Frank Chen categorizes “the problem space of AI into five general categories: logical reasoning, knowledge representation, planning and navigation, natural language processing, and perception.” Comm. on Tech., supra note 15, at 7 (citing Chen, supra). Finally, Pedro Domingos associated AI researchers into “five ‘tribes’ based on the methods they use: ‘symbolists’ use logical reasoning based on abstract symbols, ‘connectionists’ build structures inspired by the human brain; ‘evolutionaries’ use methods inspired by Darwinian evolution; ‘Bayesians’ use probabilistic inference; and ‘analogizers’ extrapolate from similar cases seen previously.” Id. (citing Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World 51–53 (2015)).

  22. See Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation 9 (2018); Comm. on Tech., supra note 15, at 7.

  23. See Brundage et al., supra note 22, at 16.

  24. See Nick Bostrom & Eliezer Yudkowsky, The Ethics of Artificial Intelligence, in The Cambridge Handbook of Artificial Intelligence 315, 318 (Keith Frankish & William M. Ramsey eds., 2014); Comm. on Tech., supra note 15, at 7.

  25. Čerka et al., supra note 17, at 378; see also Phil Simon, Too Big to Ignore: The Business Case for Big Data 89 (2013); Zimmerman, supra note 1, at 7.

  26. Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 898 (2018); see also Mikella Hurley & Julius Adebayo, Credit Scoring in the Era of Big Data, 18 Yale J.L. & Tech. 148, 181 (2016) (“ZestFinance may rely on statistical algorithms to automatically identify the most significant metavariables.”); Harry Surden, Machine Learning and Law, 89 Wash. L. Rev. 87, 93 (2014) (“[M]achine learning algorithms are able to automatically build accurate models of some phenomenon . . . without being explicitly programmed.”).

  27. “Early AI was focused on solving problems with static rules, which were in most cases mathematically defined.” Bathaee, supra note 26, at 898 n.34; see also Ian Goodfellow et al., Deep Learning 1–3 (2016) (“Several artificial intelligence projects have sought to hard-code knowledge about the world in formal languages. A computer can reason automatically about statements in these formal languages using logical inference rules. This is known as the knowledge base approach to artificial intelligence.”).

  28. See Comm. on Tech., supra note 15, at 8.

  29. See Goodfellow et al., supra note 27, at 272–73.

  30. Zimmerman, supra note 1, at 8.

  31. Id. at 9.

  32. Bathaee, supra note 26, at 900; see also Matthew Adam Bruckner, The Promise and Perils of Algorithmic Lenders’ Use of Big Data, 93 Chi.-Kent L. Rev. 3, 16 (2018).

  33. See Comm. on Tech., supra note 15, at 9.

  34. Id.

  35. Peter Stone et al., Artificial Intelligence and Life in 2030, at 9 (2016).

  36. Zimmerman, supra note 1, at 8.

  37. Nadia Banteka, A Network Theory Approach to Global Legislative Action, 50 Seton Hall L. Rev. 339, 371 (2019); Stone et al., supra note 35, at 8–9.

  38. Čerka et al., supra note 17, at 689 & n.33; Zimmerman, supra note 1, at 10.

  39. Zimmerman, supra note 1, at 10.

  40. Stone et al., supra note 35, at 8–9.

  41. Čerka et al., supra note 17, at 689.

  42. Stone et al., supra note 35, at 8–9; see also Lucas Introna & Helen Nissenbaum, Facial Recognition Technology: A Survey of Policy and Implementation Issues 10 (Lancaster Univ. Mgmt. Sch., Working Paper No. 2010/030, 2010).

  43. Stone et al., supra note 35, at 8–9.

  44. Alaieri & Vellino, supra note 8, at 163–65; Asaro, supra note 8, at 170, 179; Paulius Čerka et al., Is It Possible to Grant Legal Personhood to Artificial Intelligence Software Systems?, 33 Comput. L. & Sec. Rev. 685, 686 (2017); see also Samir Chopra & Laurence White, Artificial Agents - Personhood in Law and Philosophy, in 110 ECAI 2004: Proceedings of the 16th European Conference on Artificial Intelligence 635, 635, 637 (Ramon López de Mántaras & Lorenza Saitta eds., 2004); Hallevy (2015), supra note 8, at 12–13, 21–22, 28, 39; Hallevy (2012), supra note 8, at 207–08, 210; Laukyte, supra note 8; Solum, supra note 8, at 1260–64.

  45. See Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts 53 (2013).

  46. Ryan Abbott, Everything Is Obvious, 66 UCLA L. Rev. 2, 25 (2019).

  47. Mireille Hildebrandt, Criminal Liability and ‘Smart’ Environments, in Philosophical Foundations of Criminal Law 507, 514–15 (R.A. Duff & Stuart P. Green eds., 2011); Karnow, supra note 6, at 154.

  48. Bathaee, supra note 26, at 924; Bostrom & Yudkowsky, supra note 24, at 1.

  49. Bathaee, supra note 26, at 891; see also W. Nicholson Price II, Big Data, Patents, and the Future of Medicine, 37 Cardozo L. Rev. 1401, 1404 (2016) (describing the algorithms that analyze health information as “‘black-box’ precisely because the relationships at [their] heart are opaque–not because their developers deliberately hide them, but because either they are too complex to understand, or they are the product of non-transparent algorithms that never tell the scientists, ‘this is what we found.’ Opacity is not desirable, but is rather a necessary byproduct of the development process.” (footnote omitted)); Will Knight, The Dark Secret at the Heart of AI, MIT Tech. Rev. (Apr. 11, 2017), https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [https://perma.cc/42X2-W6VF].

  50. Bruckner, supra note 32, at 16.

  51. Ryan Abbott & Alex Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 53 U.C. Davis L. Rev. 323, 331 (2019).

  52. Bathaee, supra note 26, at 907.

  53. See Hannah R. Sullivan & Scott J. Schweikart, Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?, 21 AMA J. Ethics 160, 160–61 (2019).

  54. See Abbott & Sarch, supra note 51.

  55. Id.

  56. Id. at 332; Bostrom & Yudkowsky, supra note 24, at 2.

  57. See Asaro, supra note 8, at 175; Diamantis, supra note 11, at 925; Geistfeld, supra note 12, at 1628; Hallevy (2015), supra note 8, at 21; Hallevy (2012), supra note 8, at 201; Bertram F. Malle, Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots, 18 Ethics & Info. Tech. 243, 252 (2016); Solum, supra note 8, at 1244.

  58. See Asaro, supra note 8; Malle, supra note 57; Diamantis, supra note 11; Geistfeld, supra note 12; Deborah G. Johnson, Technology with No Human Responsibility?, 127 J. Bus. Ethics 707, 708 (2015).

  59. See Johnson, supra note 58; Solum, supra note 8, at 1244–45.

  60. See Johnson, supra note 58, at 708, 713.

  61. See Hildebrandt, supra note 47, at 510.

  62. Chopra & White, supra note 8, at 27.

  63. See Koops et al., supra note 4, at 516.

  64. See Lawrence B. Solum, Legal Theory Lexicon 027: Persons and Personhood, Legal Theory Lexicon (Oct. 6, 2019), https://lsolum.typepad.com/legal_theory_lexicon/2004/03/legal_theory_le_2.html [https://perma.cc/YFV8-QHEU]; John Chipman Gray, The Nature and Sources of the Law 27 (1909); Bryant Smith, Legal Personality, 37 Yale L.J. 283, 283 (1928).

  65. Hildebrandt, supra note 47, at 510; Solum, supra note 64.

  66. Solum, supra note 64.

  67. See, e.g., Planned Parenthood of Se. Pa. v. Casey, 505 U.S. 833, 874–79 (1992) (discussing a pregnant woman’s legal interests are controlling until a fetus is viable outside the womb, indicating before a fetus is viable outside the womb it is not treated as legal person); Morgan v. Kroupa, 702 A.2d 630, 633 (Vt. 1997) (stating that pets occupy “a special place somewhere in between a person and a piece of personal property” (quoting Corso v. Crawford Dog & Cat Hosp., Inc., 415 N.Y.S.2d 182, 183 (1979)); Nonhuman Rts. Project ex rel. Tommy v. Lavery, 54 N.Y.S.3d 392, 395 (N.Y. App. Div. 2017) (explaining that, among other factors, because chimpanzees cannot be held legally accountable for their actions, it would be inappropriate to confer legal rights upon chimpanzees); Stephan C. Hicks, Law, Policy and Personhood in the Context of t