I. Introduction and Roadmap
On April 10, 2023, a New York Times headline read: “A.I. Is Coming for Lawyers, Again.”[1] In the article, a Times technology reporter described how Artificial Intelligence (AI) has made lawyers’ jobs more efficient and productive.[2] However, with the creation of ChatGPT and other systems built on large language models, the legal profession faces a new challenge.[3] This is in large part due to the new technology’s ability to predict, analyze, and produce writing, the “bread and butter” of a lawyer’s work.[4] Everyone from paralegals to highly paid partners will have to consider how to remain competitive and stay ahead of the advances in technology.[5] In other words, instead of focusing on work that can be done by AI, a lawyer’s job will center on industry expertise, client relationships, and insightful guidance.[6]
As AI has advanced, lawyers and legal professionals have demonstrated both its applications and its challenges. For example, the systems can advance clients’ interests in a more cost-effective and efficient manner.[7] However, current cases and scholarship involving AI show that there is a human element that AI cannot replace.[8] State bars are advising lawyers to proceed with caution because “a lawyer’s ethical obligations have not changed.”[9] Practitioners must uphold the requirements of ethical rules and responsibilities while using AI systems to augment their work. In doing so, considering the ramifications and consequences of using these systems is paramount.
This Comment begins by providing a working definition of AI. It briefly considers the capabilities of the technology in general before discussing the roles and implications it has in the legal field. Accordingly, Part III provides an understanding of the current ways the legal field is using AI, and Part IV provides a larger discussion of the Model Rules of Professional Conduct (Rules). Part V discusses the ethical implications of relying on AI to augment work by applying the current Rules to a hypothetical. Part VI introduces the current international trends in AI regulations, describes how other sectors are overcoming ethical hurdles, and provides a roadmap to regulation that is meant to serve as a guide moving forward.
II. Defining Artificial Intelligence
The number of systems that use AI has greatly increased in the past decade.[10] People use it to simplify common tasks, for example, through voice search or text.[11] Nonetheless, the majority of consumers worry about businesses using AI to augment work.[12] Within the legal community, proponents of using AI argue that it can generate initial drafts referencing pertinent case law, present arguments, and even predict opposing counsel’s arguments.[13] This innovation will lead to new legal tech startups using the systems to answer questions typically addressed by a junior associate.[14] Critics, however, question what this will mean for Big Law and other established law firms.[15]
Before these issues can be discussed, it is important to understand what AI is and why it can replace jobs that rely on written product and academically trained lawyers. This part provides an overview of AI by discussing what constitutes an AI system and introduces relevant technical terms. It ends by discussing generative AI and its implications.
AI is a broad term that encompasses various pieces of technology that mimic humanlike behavior.[16] According to the National Institute of Standards and Technology (NIST), systems that use AI are adaptive.[17] They can solve problems, make predictions, and complete tasks that require cognition and other humanlike senses.[18] AI systems work with machine learning (ML) and data analytics to make intelligent decisions and form conclusions using real-time data.[19] To make intelligent decisions, however, computer programmers must build intelligent algorithms.[20]
ML and natural language processing (NLP) are subfields of AI.[21] ML systems teach algorithms to interpret new data without relying on rules-based programming, and are broken down into deep learning, supervised learning, and unsupervised learning.[22] NLP systems can extract content, classify information, machine translate, answer questions, and generate text.[23]
GPT is an acronym for “generative pre-training transformer.”[24] The GPT system is an autoregressive language processing model.[25] Like NLP systems that generate text, autoregressive language models read the data that precedes a given point to predict what will come next.[26] According to Sam Altman, CEO of OpenAI, AI models are trained using data that is publicly available, licensed content, and content generated by human reviewers.[27]
There are two types of generative AI models: models that can generate images and text, and models that generate only text.[28] The models that generate text “can be used to organize, summarize, or generate new text.”[29] These models “‘understand’ user queries and instructions, then generate plausible responses based on those queries.”[30] Altman went on to explain that “[t]he models generate responses by predicting the next likely word in response to the user’s request, and then continuing to predict each subsequent word after that.”[31] Here lies the threat for the legal profession. Given the accessibility of the technology, lawyers rely on it for document drafting, research assistance, contract review, and legal writing assistance.[32]
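Altman’s description of next-word prediction can be made concrete with a deliberately tiny sketch. The probability table below is invented for illustration; a real model predicts over an enormous vocabulary using learned parameters rather than a hand-built lookup table.

```python
# A toy illustration of autoregressive generation: pick the most likely
# next word given the words so far, append it, and repeat. The bigram
# table below is hypothetical; it stands in for the learned weights of
# a large language model.

BIGRAM_PROBS = {
    "the":   {"court": 0.6, "client": 0.4},
    "court": {"held": 0.7, "found": 0.3},
    "held":  {"that": 1.0},
}

def predict_next(word):
    """Return the most probable next word, or None if the word is unknown."""
    candidates = BIGRAM_PROBS.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(prompt, max_words=5):
    """Greedily extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the court held that"
```

Note that the sketch always takes the single most probable word; production systems typically sample among likely words, which is one source of the variation, and the occasional fabrication, in generated text.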
In March of 2023, OpenAI released GPT-4, which is “more creative, more collaborative, and more accurate” than any other OpenAI system.[33] When compared to previous models, GPT-4’s ability to follow user intent greatly improved.[34] The responses from GPT-4 were preferred on 70.2% of prompts when compared to GPT-3.5.[35] The capabilities of GPT-4 were tested by having the model take exams that were originally designed for humans.[36] For example, GPT-4 completed the Uniform Bar Exam, LSAT, GRE, and various AP tests.[37] GPT-4 exhibited “human-level performance” on these exams and scored in the top 10% of Uniform Bar Exam test takers.[38]
In his article, David Becerra offers the following working definition of AI in the legal field: “[T]he theory and development of processes performed by software instead of a legal practitioner, whose outcome is the same as if a legal practitioner had done the work.”[39] This definition considers both the technology and the legal task AI is meant to perform.[40] The comprehensive scope of this definition, therefore, includes the various different pieces of AI and generative AI technology used in the legal field.[41] It will be the definition that underscores the ideas discussed in this Comment.
III. Artificial Intelligence in the Legal Field
Although the use of AI is not new to the legal field, we are living in a time of remarkable growth and development of generative AI systems, for example, ChatGPT.[42] Global economic analysts assert that the rise of AI will have large effects on the legal field.[43] This part analyzes how the legal profession has used AI to supplement its work product.
A. Reliance on Artificial Intelligence
The benefits of using AI in the legal profession are clear.[44] Law firms have used systems backed by AI because they help lawyers save time.[45] Instead of working on routine and mundane tasks, lawyers direct their attention to work that requires legal analysis.[46] The systems quickly produce accurate and high-quality work that can be organized in a logical, coherent document.[47]
According to a survey conducted by LexisNexis, 47% of respondents consisting of lawyers, law students, and consumers from the United States, United Kingdom, Canada, and France believe that generative AI will have a “significant or transformative” impact in the legal field.[48] The potential for AI to transform the legal field, particularly by cutting costs, comes at a time when clients are continuing to demand more for less.[49]
Judges are noticing the potential of technology assisted review (TAR), which relies on predictive coding, to reduce court costs and allow for a more “efficient and superior [alternative] to keyword searching.”[50] However, data protection and client confidentiality remain at the forefront of many legal professionals’ minds. Among those surveyed, 88% have concerns about the ethical implications of using generative AI to augment work.[51]
As outlined below, legal research and outcome prediction, document review and generation, and e-discovery are the AI-backed tools favored by the legal profession.[52]
1. Legal Research
Legal research engines, like Westlaw and LexisNexis, incorporate elements of AI for legal research.[53] Although it might seem like a recent development, Westlaw has been using components of AI for two decades.[54] Starting in 2003, Westlaw used AI to better understand what researchers were looking for and provide more targeted search results.[55] Similarly, Lexis uses AI to give researchers more accurate answers and analyze briefs.[56] The advances in AI allow lawyers to quickly access cases that will be most helpful to their arguments.[57]
2. Document Review and Generation
Young associates no longer need to spend hours reviewing large amounts of data and documents.[58] AI can quickly “aggregate data and match a finite set of outcomes to the answers to questions.”[59] For example, ML software systems, such as Kira, are able to identify and analyze information in contracts and documents.[60] Law firms use Kira to assist with due diligence because it can pull clauses and provisions found in contracts and documents and then organize the information in different graphs, tables, and reports.[61] In addition to Kira, systems like Litera Check use ML to proofread multiple documents and streamline the document review process.[62]
In addition to document review, young associates can use generative AI systems, such as ChatGPT, to compose initial drafts of contracts and motions.[63] Experts believe that using generative AI systems will reduce the amount of time worked from ten hours to fifteen minutes and increase overall productivity.[64]
3. E-Discovery
E-discovery is “[t]he process of identifying, preserving, collecting, preparing, reviewing, and producing electronically stored information.”[65] E-discovery can also be referred to as TAR.[66] AI technology has allowed e-discovery systems, which historically focused on searching a database using keywords, to move to predictive coding.[67] The switch to predictive coding provides more accurate results as to whether a document will be relevant.[68] The legal community has welcomed the use of this tool.[69] However, some legal commentators have warned about the costs that come with its benefits, and ethical questions regarding e-discovery’s reliance on predictive coding remain unanswered.[70]
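The shift from keyword searching to predictive coding can be sketched in miniature. Everything below is invented for illustration: the documents are hypothetical, and the scorer is a simple word-overlap measure standing in for the statistical classifiers that real predictive-coding systems train on reviewer-labeled samples.

```python
# Toy contrast (purely illustrative, not any real e-discovery product)
# between keyword search and a predictive-coding-style relevance score.

def keyword_search(docs, keyword):
    """Classic approach: return every document containing the keyword."""
    return [d for d in docs if keyword in d.lower()]

def predictive_score(doc, labeled_relevant):
    """Score a document by its word overlap with reviewer-labeled
    relevant examples -- a stand-in for a trained classifier."""
    doc_words = set(doc.lower().split())
    seed_words = set()
    for example in labeled_relevant:
        seed_words |= set(example.lower().split())
    return len(doc_words & seed_words) / max(len(doc_words), 1)

docs = [
    "Merger agreement signed by both parties",
    "Lunch menu for the office party",
]
# Documents a human reviewer has already marked relevant.
relevant_examples = ["Draft merger agreement terms"]

# Keyword search returns only literal matches; the predictive score
# ranks every document by estimated relevance.
print(keyword_search(docs, "merger"))
print([round(predictive_score(d, relevant_examples), 2) for d in docs])
```

The design difference is the point: keyword search is binary and literal, while predictive coding produces a ranking that reviewers can validate and refine, which is why courts have described it as a more efficient alternative to keyword searching.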
B. Recent Legal Developments
Recent legal developments are evidence of the risks and ethical implications of using AI in legal practice. Steven A. Schwartz, an attorney representing a client in a federal court in New York, used a ChatGPT search to find six similar cases with rulings that favored his client’s argument.[71] ChatGPT assured Schwartz that the cases were authentic, and Schwartz then used them in the brief filed with the court.[72] However, after the opposing side informed the judge that they were unable to find the cases, the court sanctioned Schwartz.[73] In response to lawyers like Schwartz using AI to create court documents, some federal judges have begun requiring lawyers to disclose their use of AI.[74]
DoNotPay provides legal chatbot services by using ChatGPT and DaVinci programming, creating a “robot lawyer” tool.[75] The company offered to pay any lawyer with a case in front of the U.S. Supreme Court $1 million to allow the robot to provide AI-generated arguments for the case.[76] DoNotPay is testing its technology by assisting people dealing with expensive medical bills, unwanted subscriptions, and issues with credit reporting agencies.[77] However, the question of ethical implications is already causing controversy.[78]
Current applications of AI show that the systems are shaping the landscape of legal work product with innovative applications and promising advancements. The next part explores the rules that are applicable to the usage of the systems.
IV. Model Rules of Professional Conduct
The current ABA Model Rules of Professional Conduct impose obligations on lawyers who use AI in their practice.[79] The Rules lay a foundation that encompasses the valuable notions of what it means to be an ethical practitioner, and they were intended to adapt to the changing legal profession.[80] Therefore, they can serve as a signal of how to govern the novel uses of AI in the legal field. This part summarizes the relevant Rules in relation to (1) competency; (2) confidentiality; (3) supervisory roles; and (4) unauthorized practice of law before exploring how they can work to mandate ethical use of AI by applying them to a hypothetical in the next part.
A. Model Rule 1.1: Competence
“Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”[81] The commentary to Rule 1.1 stresses that lawyers must use thoroughness and preparation that are proportional to the task at hand.[82] Comment 5 states:
Competent handling of a particular matter includes inquiry into and analysis of the factual and legal elements of the problem, and use of methods and procedures meeting the standards of competent practitioners. . . . The required attention and preparation are determined in part by what is at stake; major litigation and complex transactions ordinarily require more extensive treatment than matters of lesser complexity and consequence.[83]
Comment 8 expands on competence and technological advancements by describing what practicing attorneys must do to stay current with the changes: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law . . . including the benefits and risks associated with relevant technology, [and] engage in continuing study and education . . . .”[84]
Staying competent means staying current with the benefits and risks of the AI software and services that lawyers are using.[85] Thus, if lawyers do not feel that they can competently handle a matter that requires using AI to augment their work, they need to: “(1) turn down the matter; (2) spend whatever time it takes to acquire the necessary legal knowledge and technological skill; or (3) associate with a different lawyer . . . who already has the necessary technological knowledge and skill.”[86]
B. Model Rule 1.6: Confidentiality
Model Rule 1.6 titled “Confidentiality of Information” requires attorneys to keep client information confidential unless given explicit permission to release it.[87] Additionally, lawyers must ensure that confidential client information will not mistakenly be released or obtained by a third party.[88] Rule 1.6 mandates that “[a] lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent [or] the disclosure is impliedly authorized in order to carry out the representation.”[89] It further states that “[a] lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[90]
C. Model Rule 5.1: Supervision
Model Rule 5.1 titled “Responsibilities of Partners, Managers, and Supervisory Lawyers” establishes that partners of a firm or lawyers who hold direct supervisory authority over other lawyers have a duty to ensure that the other lawyers uphold the Rules.[91] Additionally, law firm partners and individuals who hold managerial authority in the firm must make reasonable efforts to guarantee that there are systems in place that ensure that lawyers are complying with the Rules.[92] This includes designing internal policies and procedures that ensure lawyers are supervised and trained to remain competent about ethical problems in the legal field.[93]
D. Model Rule 5.5(b): Unauthorized Practice of Law
Model Rule 5.5 titled “Unauthorized Practice of Law” mandates that a lawyer be trained and licensed to practice law.[94] The rule states that “[a] lawyer who is not admitted to practice in this jurisdiction shall not: (1) . . . establish an office . . . in this jurisdiction for the practice of law; or (2) hold out to the public . . . that the lawyer is admitted to practice law in this jurisdiction.”[95]
Lawyers or firms who use AI-powered systems must ensure that licensed attorneys, and not the human-created AI systems, are giving advice and counseling clients.[96] This suggests, at the very least, that it is imperative that lawyers review the work generated by AI prior to relying on it.
V. Artificial Intelligence Hypothetical
The following part explores how the current Rules apply to real world situations. It starts with a hypothetical and then walks through the relevant rules and applies them to the scenario.
A first-year associate recently started working with a law firm to help with a multimillion-dollar transaction. The client told the firm to keep costs to a minimum and eliminate any unnecessary billable hours. The firm received the client’s document production, consisting of over 30,000 documents for review, including e-mails, text messages, contracts, and handwritten letters. The supervising attorney who just received the documents goes to the first-year associate’s office and suggests that they use a new system to expedite review. The first-year associate recently heard some other attorneys talking about the system and learned that it uses ML and algorithms to analyze and pull out relevant documents, clauses, and red flags. The system is licensed to the firm, but since starting in early August, the first-year associate has not attended training to learn how to use it proficiently. However, using the system will cut the associate’s due diligence time down by at least 30%. The associate decides to try it. Once the documents are uploaded to the database, the system generates multiple reports that highlight its findings. While using the system, the associate learns that it will also help write initial drafts and due diligence memos. After briefly skimming the generated reports, the associate uses the system to draft a memo, which is then turned in to the supervising attorney.
A. Competency
Model Rule 1.1 suggests that attorneys should use only technology that they are competent to use.[97] Prior to implementing the above technology within the firm, the supervising attorney and first-year associate are required to take steps to learn how the platform operates. It is unlikely that overhearing a conversation between attorneys about the system constitutes steps taken to ensure competency.
In 2015, the Standing Committee on Professional Responsibility and Conduct of the State Bar of California issued an opinion relating to the ethical duties that an attorney must meet when handling discovery of electronically stored information.[98] The Committee considered whether an attorney would violate the duty of competence by failing to consult an e-discovery expert when it was obvious that the trial the attorney was working on would use the technology.[99] The Committee opined that:
[T]he duty of competence requires an attorney to assess his or her own e-discovery skills and resources as part of the attorney’s duty to provide the client with competent representation. If an attorney lacks such skills and/or resources, the attorney must try to acquire sufficient learning and skill, or associate or consult with someone with expertise to assist.[100]
Similarly, here, it would be beneficial for the firm to consult with an AI expert to fully understand how the system operates and how this impacts the final work product. It is a lawyer’s responsibility to keep “abreast” of the “benefits and risks” associated with using it.[101] The attorney, at the very least, should understand how the system is reviewing the documents. More likely, however, the attorney should also look over the work and double-check the product.
In addition, the first-year associate only skimmed the reports generated by the system before using the system to draft the initial memo. Comment 5 to Rule 1.1 requires lawyers using AI to draft a document or run reports to use preparation and thoroughness that match the complexity and consequence of the matter at hand.[102] Instead of placing all of their trust in the algorithms used to review documents associated with a multimillion-dollar transaction, the associate should have reviewed the reports to look for any missing information or other red flags.
B. Supervision
In the same opinion, the State Bar of California also considered if the attorney violated the duty of competence because they failed to supervise a third party.[103] According to the Committee, there are certain nondelegable duties that belong to attorneys.[104] Although it is within an attorney’s right to consult another lawyer or expert, the attorney is still obligated to oversee the work.[105] It is the attorney “who remains the one primarily answerable to the court.”[106]
Rule 5.1 suggests that supervising attorneys must ensure proper supervision of work completed by the attorneys they supervise.[107] When using AI systems, the crucial point is that it is the supervising attorney’s job and ethical responsibility to review the work of the AI system for accuracy.[108] Here, as discussed above, the supervising attorney is required to ensure that the first-year associate’s work both complies with the Rules and that the firm has systems in place to ensure that the Rules are being followed.[109]
C. Confidentiality and Communication
Model Rule 1.6 states that a lawyer cannot reveal confidential information unless they have informed consent.[110] It further states that lawyers must “make reasonable efforts to prevent” mistakenly disclosing information of a client without proper authorization.[111] Here, the law firm would need to consider where the documents are uploaded and whether that location is secure. Further, Model Rule 1.6 suggests that prior to using AI platforms, firms and attorneys should understand the risks posed by uploading documents to a system and whether confidentiality is at risk.[112] Therefore, it would be beneficial to both parties if the client understood that an AI system is being used and the measures that are being taken to ensure that confidentiality is not compromised.
D. Unauthorized Practice of Law
Model Rule 5.5(b) suggests that a person must be trained and licensed to practice law.[113] Michael Simon defines the practice of law “as providing advice and counsel regarding legal matters, providing legal representation, and drafting legal documents.”[114] When considering whether the increasing use of automated systems constitutes the practice of law, Simon argues that recent legal decisions point towards automated machines not being governed or threatened by Unauthorized Practice of Law arguments.[115] In the hypothetical above, it is uncertain whether the AI platform would be practicing law by reviewing the documents, and if these platforms are practicing law, it is unclear how well this argument will hold up against AI systems that are able to generate text.[116]
VI. Model Guidelines and Roadmap
This part discusses how other countries regulate AI and how state and federal courts are addressing the need for AI regulation. It ends by offering best practices the legal field could implement by incorporating guidance from the AI Risk Management Playbook from the Department of Energy.
A. International Regulations
Among international players, (1) transparency; (2) traceability and explainability; (3) data protection; and (4) challenges and redress of AI decisions are identified as some of the top principles when it comes to regulating AI.[117]
The European Union is seen as one of the “furthest along in developing a comprehensive legislative response to governing AI.”[118] In an effort to regulate AI, the European Commission proposed a regulatory framework for AI, the Artificial Intelligence Act (AI Act).[119] The AI Act classifies AI systems based on the risk presented to users and uses that risk level to determine the amount of regulation.[120] According to the framework, “AI systems that negatively affect safety or fundamental rights will be considered high risk.”[121] AI systems that assist lawyers “in legal interpretation and application of the law” fall into the high-risk category.[122] If an AI system is marked as high risk, it must be registered in an EU database and will be assessed before entering the market.[123] If it makes it to the market, it will be reassessed “throughout [its] lifecycle.”[124] By assessing and reassessing AI systems that are marked as “high risk,” the AI Act ensures that the systems are consistently meeting quality management and process requirements.[125]
Additionally, generative AI would have to adhere to specific transparency requirements.[126] Relevant here are: (1) “[d]isclosing that the content was generated by AI”; and (2) “[p]ublishing summaries of copyrighted data used for training.”[127]
Like the EU, the United Kingdom is taking a risk-based approach to the regulation of AI systems.[128] The United Kingdom regulation focuses on context and proportionality.[129] A focus on context and proportionality allows regulators to analyze the risk in the context and environment the AI system is used.[130]
China recently enacted the Interim Measures for the Management of Generative Artificial Intelligence Services.[131] The Measures hold providers of generative AI services legally responsible for the sources and pre-training data that their generative AI systems use.[132] Providers are also liable as the producers of the content the system generates.[133]
B. Artificial Intelligence Regulation in the United States
Unlike the European Union, United Kingdom, or China, the United States does not have a national law to regulate AI.[134] In lieu of national regulation, state courts have started to address the issue of AI in the courtroom.[135] Courts are issuing opinions about the use of AI in the form of Rule 11(b)(2) sanctions, court orders, and local rules.[136]
In Texas, Judge Brantley Starr adopted a policy on the use of generative AI.[137] Judge Starr adopted this policy because:
[T]hese platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth).[138]
Similar to the Texas standing order, an Illinois judge released an order that requires parties who use generative AI to help with legal research or draft documents filed with the court to disclose the tool they used and how they used it.[139] The U.S. Court of International Trade requires disclosure of any generative AI program used and all portions of the text drafted with the help of generative AI; parties must also certify that using the system did not result in disclosing confidential or proprietary information without the proper authorization.[140]
Some courts, however, have broader orders that are not limited to generative AI. For example, a Pennsylvania judge released an order stating that attorneys must “disclose [whether] AI has been used in any way.”[141] The order requires parties to disclose the use of platforms that help with contract analytics, research tools, and e-discovery.[142]
C. Task Forces Created by State Bars and ABA
Texas, California, and New York are not waiting for official laws to take a stance on the use of AI in the legal field. Instead, they have created task forces that are responsible for helping the legal professionals within their state answer the tricky questions around the ethics of using AI.[143] The American Bar Association (ABA) also created a task force on law and AI.[144]
Prior to creating the task force, the ABA drafted two resolutions that address the use, development, and oversight of AI in the legal sector. Resolution 604, passed in February 2023, focuses on oversight and accountability for consequences caused by AI products.[145] The resolution urges that humans control and oversee the AI systems.[146] If steps are not taken to mitigate legally cognizable injury or harm caused by AI systems, the operator of the system is responsible for the consequences.[147] Finally, operators should “ensure the transparency and traceability of their AI . . . services . . . while protecting associated intellectual property, by documenting key decisions made with regard to the design and risk of the data sets, procedures, and outcomes underlying their AI . . . services.”[148]
Resolution 112 recommends that courts and lawyers recognize the ethical implications of AI including: “(1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”[149] This suggests that lawyers have a duty to be aware of AI and ensure that they are legally advising their clients to the best of their ability while using AI to augment their work.[150]
D. Roadmap
The AI Risk Management Playbook (AI RMP) from the Department of Energy serves as a comprehensive resource for identifying AI-related risks and provides actionable pathways for responsible and trustworthy AI utilization and development.[151]
AI RMP suggests that if ethics governance is lacking, an AI Ethics Review Board, or a similar mechanism, should be formed to discuss overall accountability and ethics practices, as well as other undefined areas.[152] In addition to forming a review board, AI RMP suggests developing external guidance or third-party auditing that could oversee the ethical concerns and accountability measures.[153] AI ethics councils should be put in place as a preventative measure and as a way to create transparency between the systems’ algorithms and the end work product.[154] Establishing an interdisciplinary or transdisciplinary board, drawing advisors from areas such as civil rights, ethics, law, and science, provides independent advice and guidance on ethical considerations in AI development and a second line of defense against ethical problems around AI.[155]
In addition to ethics governance, AI RMP offers suggestions about data privacy risks.[156] Proposed risk mitigations should help individuals remain in control of their personal data.[157] This would include privacy impact assessments that ensure proper evaluation of the potential privacy risks associated with using AI.[158]
AI RMP also offers a list of suggestions and questions that prompt users of AI to consider privacy, confidentiality, accountability, and supervision. The following questions are adapted from the list to be relevant to the legal industry:
Privacy and Confidentiality
a. Did the client give informed consent if their confidential information is used to develop AI systems?[159]
b. Does the AI system have practices that address security risks, such as those related to cybersecurity controls, and third-party data or AI systems that use client data?[160]
c. Does the AI system have technology to detect and notify clients of attacks or potential breaches?[161]
d. Is a program in place that allows friendly hackers to assess the AI system for vulnerabilities?[162]
- Accountability and Supervision
a. Are there training and resources available to help lawyers develop the skills they need to effectively oversee the AI system and understand the potential biases that exist?[163]
b. Are specific lawyers meaningfully accountable for the AI system?[164]
c. Is the output of AI systems well documented, traceable, and used in a way that is consistent with informed consent?[165]
d. Can key information about the AI system and its underlying data be traced and understood?[166]
As national regulation is being considered, the legal field needs to develop a framework that captures the benefits of AI, such as helping firms maintain an economic advantage, while still upholding the ethical requirements the profession demands. The questions presented by AI RMP provide a starting point for the legal field as it develops regulations that protect the ethics of the practice. The pillars of privacy, confidentiality, accountability, and supervision should serve as a guide.
Regulations in the European Union and China can also serve as guidance for the legal field. Like the providers regulated under China’s Measures for the Management of Generative Artificial Intelligence Services, law firms and attorneys should be treated as providers that use AI to deliver legal services.[167] Increased legal liability could lead to more effective data-protection tools.[168] However, it could also narrow innovation.[169] The legal field could likewise benefit from a risk-based approach similar to the EU and UK models. Finally, a requirement to disclose to the client any confidential data used for training would put the client on notice and give the client an informed understanding of how that data was used.
VII. Conclusion
The use of AI in the legal field has clear benefits. However, it is also clear that the use of AI raises important ethical questions. This Comment provided an overview of AI in the legal field, the current Rules, and the ethical implications posed by innovative AI systems. It ended by considering how privacy, confidentiality, accountability, and supervision can be integrated into regulations. Those regulations must balance the requirement that lawyers, rather than AI, give the final stamp of approval on work product against the benefits the technology can bring to the profession.
Andrea Bucher
Steve Lohr, A.I. Is Coming for Lawyers, Again, N.Y. Times (Apr. 10, 2023), https://www.nytimes.com/2023/04/10/technology/ai-is-coming-for-lawyers-again.html [https://perma.cc/3FDN-H5PY].
Id.
Andrew Perlman, The Implications of ChatGPT for Legal Services and Society, Ctr. on Legal Pro.: Harv. L. Sch. (Mar./Apr. 2023), https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/ [https://perma.cc/GN82-DT7D] (“The disruptions from AI’s rapid development are no longer in the distant future.”).
Id.
Id.
Id.; see also Sergio David Becerra, Comment, The Rise of Artificial Intelligence in the Legal Field: Where We Are and Where We Are Going, 11 J. Bus., Entrepreneurship & L. 27, 49 (2018).
7 Ways Artificial Intelligence Can Benefit Your Law Firm, Am. Bar Ass’n (Sept. 2017), https://www.americanbar.org/news/abanews/publications/youraba/2017/september-2017/7-ways-artificial-intelligence-can-benefit-your-law-firm [https://perma.cc/K8CT-HCTK].
Becerra, supra note 6, at 49 (“AI is not likely to replace lawyers in procedural aspects of legal practice or in legal research. Instead, AI will be used to complete remedial tasks via automation—allowing lawyers to focus on the more detailed and high-level work of analysis.”); see also infra Section IV.B.
Jonathan Grabb, Lawyers and AI: How Lawyers’ Use of Artificial Intelligence Could Implicate the Rules of Professional Conduct, Fla. Bar (Mar. 13, 2023), https://www.floridabar.org/the-florida-bar-news/lawyers-and-ai-how-lawyers-use-of-artificial-intelligence-could-implicate-the-rules-of-professional-conduct/ [https://perma.cc/KW2H-UNS2]; see also Att’y Professionalism F., Using AI in Your Practice? Proceed With Caution, N.Y. State Bar Ass’n (Aug. 7, 2023), https://nysba.org/using-ai-in-your-practice-proceed-with-caution/ [https://perma.cc/4EL4-F4TN].
Katherine Haan, 22 Top AI Statistics and Trends, Forbes (Oct. 16, 2024, 10:40 AM), https://www.forbes.com/advisor/business/ai-statistics/ [https://perma.cc/Q9YM-9LD3].
Id.
Id.
John Villasenor, How AI Will Revolutionize the Practice of Law, Brookings (Mar. 20, 2023), https://www.brookings.edu/articles/how-ai-will-revolutionize-the-practice-of-law/ [https://perma.cc/4TR8-JWSD].
Id.
Becerra, supra note 6, at 32.
AI for Legal Professionals, Bloomberg L., https://pro.bloomberglaw.com/brief/ai-in-legal-practice-explained/ [https://perma.cc/JGU6-3Q6C] (last visited Mar. 3, 2025).
Nat’l Inst. of Standards & Tech., U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools 7–8 (2019), https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf [https://perma.cc/AK8W-BWBM].
Id. at 7–8.
Id.
Id. at 8; see also Pa. Bar Ass’n & Phila. Bar Ass’n, Ethical Issues Regarding the Use of Artificial Intelligence 5 (2024), https://www.pabar.org/Members/catalogs/Ethics Opinions/Formal/Joint Formal Opinion 2024-200.pdf [https://perma.cc/WJK2-SA7Z] (“An example of AI bias in legal applications can be found in the predictive algorithms for risk assessment in criminal justice systems. If the algorithm disproportionately flags individuals from marginalized communities as high-risk, it could lead to unjust outcomes such as harsher sentences, perpetuating systemic biases within the legal system.”).
Becerra, supra note 6, at 33.
Laurie A. Harris, Cong. Rsch. Serv., R46795, Artificial Intelligence: Background, Selected Issues, and Policy Considerations 3–4 (2021).
Becerra, supra note 6, at 33 fig.1.
Amy B. Cyphert, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law, 55 U.C. Davis L. Rev. 401, 406 (2021).
Id.
Viktor Mehandzhiyski, What Is an Autoregressive Model?, 365 Data Sci. (Apr. 27, 2023), https://365datascience.com/tutorials/time-series-analysis-tutorials/autoregressive-model/ [https://perma.cc/PSA8-D5TP].
Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Priv., Tech. & the L. of the S. Comm. on the Judiciary, 118th Cong. 6 (2023) [hereinafter Oversight of AI Hearing] (written testimony of Samuel Altman, CEO, OpenAI).
Id.
Id.
Id.
Id.
See infra Section III.A.
Oversight of AI Hearing, supra note 27.
OpenAI et al., GPT-4 Technical Report 7 (2024), https://arxiv.org/pdf/2303.08774 [https://perma.cc/S5CW-Y6ZQ].
Id.
Id. at 4.
Id. at 5 tbl.1.
Id. at 6.
Becerra, supra note 6, at 38.
Id.
Id.
Oversight of AI Hearing, supra note 27.
Jan Hatzius et al., Goldman Sachs, The Potentially Large Effects of Artificial Intelligence on Economic Growth (Briggs/Kodnani) 6 (2023), https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.pdf [https://perma.cc/F99S-VVDG] (“[W]e estimate that one-fourth of current work tasks could be automated by AI in the US, with particularly high exposures in administrative (46%) and legal (44%) professions and low exposures in physically-intensive professions such as construction (6%) and maintenance (4%).” (citation omitted)); see also Michael Simon et al., Lola v. Skadden and the Automation of the Legal Profession, 20 Yale J.L. & Tech. 234, 286 (2018) (“What Lola foretells is that the future for those performing low-level legal tasks is likely to be short-lived. As the advancement of AI brings greater efficiency to the legal profession—and greater rewards to those at the top—those performing those low-level tasks are, simply put, costs to be eliminated.”).
7 Ways Artificial Intelligence Can Benefit Your Law Firm, supra note 7.
Id.
Id.
Id.
LexisNexis International Legal Generative AI Survey Shows Nearly Half of the Legal Profession Believe Generative AI Will Transform the Practice of Law, LexisNexis (Aug. 22, 2023), https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-international-legal-generative-ai-survey-shows-nearly-half-of-the-legal-profession-believe-generative-ai-will-transform-the-practice-of-law [https://perma.cc/5PG5-Q6YK].
Irina Anghel, Law Firms Are Recruiting More AI Experts as Clients Demand ‘More for Less,’ Ins. J. (July 5, 2023), https://www.insurancejournal.com/news/national/2023/07/05/728715.htm [https://perma.cc/TB63-29U2]; see also Daniel Martin Katz, Quantitative Legal Prediction—or—How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry, 62 Emory L.J. 909, 910, 928 (2013) (“Faced with cost pressures, clients and law firms are leveraging legal information technology to either automate or semi-automate tasks previously performed by teams of lawyers.”).
Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114, at *2 (S.D.N.Y. Aug. 1, 2016).
LexisNexis International Legal Generative AI Survey, supra note 48.
Drew Simshaw, Ethical Issues in Robo-Lawyering: The Need for Guidance on Developing and Using Artificial Intelligence in the Practice of Law, 70 Hastings L.J. 173, 192 (2019).
Westlaw Precision with CoCounsel, Thomson Reuters, https://legal.thomsonreuters.com/en/c/westlaw/westlaw-precision-generative-ai [https://perma.cc/TVK8-9982] (last visited Mar. 5, 2025); The Power of Artificial Intelligence in Legal Research, LexisNexis, https://www.lexisnexis.com/community/insights/legal/b/thought-leadership/posts/the-power-of-artificial-intelligence-in-legal-research [https://perma.cc/NT4R-CA3N] (last updated Nov. 6, 2023).
Our AI Timeline, Thomson Reuters, https://www.thomsonreuters.com/en/artificial-intelligence/ai-timeline.html [https://perma.cc/777C-ZEXA] (last visited Jan. 2, 2025).
Id.
The Power of Artificial Intelligence in Legal Research, supra note 53; see also Spot Unfavorable Authority: An Attorney’s Guide to AI-Powered Legal Document Review, LexisNexis (Mar. 16, 2023), https://www.lexisnexis.com/community/insights/legal/b/product-features/posts/an-attorneys-guide-to-ai-powered-legal-document-review [https://perma.cc/VKV7-4WNT] (“Brief Analysis works by using AI to ‘read’ for you, automatically flagging whether the cases that are cited in that passage contain case authority that is favorable or unfavorable to the arguments made in the brief.”).
Simshaw, supra note 52, at 193.
Id. at 192.
Id. (quoting Kelly Phillips Erb, Are We Ready for Robot Lawyers?, 38 Pa. Law. 38, 54, 55 (2016)).
Focus on High-Value Work with More Efficient Contract Review, Kira, https://web.archive.org/web/20250305222430/https://kirasystems.com/solutions/law-firms/ [https://perma.cc/R3SV-257Q] (last visited Nov. 2, 2023).
Id.; Becerra, supra note 6, at 46.
Deliver Perfect Documents in Less Time, Litera, https://www.litera.com/products/litera-check [https://perma.cc/VMR2-AHAD] (last visited Dec. 18, 2024).
Danielle Braff, Some Attorneys Are Using ChatGPT to Help Them Practice More Efficiently, ABA J. (Oct. 1, 2023, 3:30 AM), https://www.abajournal.com/magazine/article/some-attorneys-are-using-chatgpt-to-help-them-practice-more-efficiently [https://perma.cc/QM8C-AM76].
John G. Browning & Christene “Chris” Krupa Downs, The Future Is Now: The Rise of Artificial Intelligence in the Legal Profession, State Bar Tex., https://www.texasbar.com/AM/Template.cfm?Section=articles&Template=/CM/HTMLDisplay.cfm&ContentID=46315 [https://perma.cc/HXN2-SSM2] (last visited Nov. 2, 2023); see also Matt Reynolds, Clients More Optimistic About AI than Legal Professionals, Says Clio’s 2023 Legal Trends Report, ABA J. (Oct. 9, 2023, 2:18 PM), https://www.abajournal.com/web/article/clients-more-optimistic-about-ai-than-legal-professionals-says-clios-2023-legal-trends-report [https://perma.cc/S92M-J4WA] (“[Clio’s 2023 Legal Trends Report] found that there are marked gains in productivity and growth, with [lawyers] working 40% more cases and producing 70% more billable hours . . . . Firm utilization . . . has increased 32%, while realization rates . . . have increased 12%, representing ‘steady growth and improved performance’ . . . .”).
The Sedona Conf., The Sedona Conference Glossary: E-Discovery & Digital Information Management 18 (Sherry B. Harris et al. eds., 3d ed. 2010).
Daniel N. Kluttz & Deirdre K. Mulligan, Automated Decision Support Technologies and the Legal Profession, 34 Berkeley Tech. L.J. 853, 862–63 (2019).
Simshaw, supra note 52, at 192–93.
Id. at 193.
See, e.g., Moore v. Publicis Groupe, 287 F.R.D. 182, 193 (S.D.N.Y. 2012) (“This Opinion appears to be the first in which a Court has approved of the use of computer-assisted review. That does not mean computer-assisted review must be used in all cases, or that the exact ESI protocol approved here will be appropriate in all future cases that utilize computer-assisted review. . . . What the Bar should take away from this Opinion is that computer-assisted review is an available tool and should be seriously considered for use in large-data-volume cases where it may save the producing party (or both parties) significant amounts of legal fees in document review.”).
Kluttz & Mulligan, supra note 66, at 866, 870.
Lawyers, Beware LLMs: Attorney Faces Disciplinary Action for Using ChatGPT’s Fictional Brief, Batch (June 21, 2023), https://www.deeplearning.ai/the-batch/attorney-faces-disciplinary-action-for-using-chatgpts-fictional-brief/ [https://perma.cc/R25J-FL96].
Id.
Id.; Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 464–65 (S.D.N.Y. 2023) (“Schwartz violated Rule 11 . . . because, as he testified at the hearing, when he looked for [the case] he ‘couldn’t find it,’ yet did not reveal this . . . . Poor and sloppy research would merely have been objectively unreasonable. But Mr. Schwartz was aware of facts that alerted him to the high probability that [the cases] did not exist and consciously avoided confirming that fact.”).
Sara Merken, Another US Judge Says Lawyers Must Disclose AI Use, Reuters (June 8, 2023, 5:35 PM), https://www.reuters.com/legal/transactional/another-us-judge-says-lawyers-must-disclose-ai-use-2023-06-08/ [https://perma.cc/D8PE-CK4W]; see also Mandatory Certification Regarding Generative Artificial Intelligence, https://www.txnd.uscourts.gov/sites/default/files/documents/CertReStarrJSR.doc [https://perma.cc/8ZM2-RQBN] (last visited Jan. 26, 2025) (demonstrating a federal judge’s requirement that attorneys certify filings will not be drafted by AI without review by a human being).
Bobby Allyn, A Robot Was Scheduled to Argue in Court, Then Came the Jail Threats, NPR (Jan. 25, 2023, 6:05 PM), https://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats [https://perma.cc/S3AK-PKKH].
Id.
Id.
Id.
See ABA Comm. on Ethics & Pro. Resp., Formal Op. 512 (2024) (discussing generative AI tools and the Rules).
E. Norman Veasey, Ethics 2000 Chair’s Introduction, Am. Bar Ass’n (Aug. 2002), https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/model_rules_of_professional_conduct_preface/ethics_2000_chair_introduction/?login [https://perma.cc/93XU-4LX6].
Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 1983).
Id. r. 1.1 cmt. 5.
Id.
Id. r. 1.1 cmt. 8.
Roy D. Simon, Artificial Intelligence, Real Ethics, N.Y. St. Bar Ass’n J., Mar.–Apr. 2018, at 34, 34–35.
Id. at 35.
Model Rules of Pro. Conduct r. 1.6 (Am. Bar Ass’n 1983).
Id.
Id.
Id.
Id. r. 5.1.
Id.
Id. r. 5.1 cmts. 2–3.
Id. r. 5.5.
Id.
Jennifer Anderson, Top Ethical Issues to Consider Before Embracing AI in Your Law Firm, InfoTrack (Sept. 13, 2023), https://www.infotrack.com/blog/ai-ethical-issues/ [https://perma.cc/U7SL-BB78].
See Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 1983).
State Bar of Cal. Standing Comm. on Pro. Resp. & Conduct, Formal Op. No. 2015-193, at 3 (2015).
Id.
Id.
Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 1983).
Id. r. 1.1 cmt. 5.
State Bar of Cal. Standing Comm. on Pro. Resp. & Conduct, Formal Op. No. 2015-193, at 5 (2015).
Id.
Id.
Id.
See Model Rules of Pro. Conduct r. 5.1 (Am. Bar Ass’n 1983).
See id.
Id.
Id. r. 1.6.
Id.
Id.; see also id. r. 1.0(e) (defining “[i]nformed consent” as “the agreement by a person to a proposed course of conduct after the lawyer has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct”).
See id. r. 5.5.
Simon et al., supra note 43, at 240, 242, 246, 260. In Lola v. Skadden, the plaintiff, a licensed attorney in California, argued that he should be given overtime pay for the work he did over 40 hours. Lola maintained that the work he was performing did not constitute the practice of law because it was heavily structured and the system he relied on used predictive coding to help complete the task. The court held that machines cannot engage in the practice of law. Id. at 240, 242.
Id. at 246 (“[T]he court’s statements suggest that machines can remove tasks from the scope of the ‘practice of law,’ such that machines can encroach on a lawyer’s role in society.”).
Lawyers, Beware LLMs, supra note 71.
Catherine Barrett, Comparing the US, UK, and EU Regulatory Approaches to AI, Am. Bar Ass’n (Aug. 10, 2023), https://www.americanbar.org/groups/science_technology/publications/scitech_lawyer/2023/summer/comparing-us-uk-and-eu-regulatory-approaches-ai/ [https://perma.cc/CNC3-JYPQ].
Id.
EU AI Act: First Regulation on Artificial Intelligence, Eur. Parliament, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence [https://perma.cc/3PKU-TEK6] (last updated Feb. 19, 2025, 5:46 PM).
Id.
Id.
Id.
Id.
Id.
Barrett, supra note 117.
EU AI Act: First Regulation on Artificial Intelligence, supra note 119.
Id.
A Pro-Innovation Approach to AI Regulation: Government Response, Dep’t for Sci., Innovation & Tech., https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response [https://perma.cc/G3QJ-GGFY] (last updated Feb. 6, 2024).
Id. para. 62.
Id. paras. 62, 64–65 (“[O]ur view is still that it is more effective to focus on how AI is used within a specific context than to regulate specific technologies. This is because the level of risk will be determined by where and how AI is used.”).
China: New Interim Measures to Regulate Generative AI, Baker McKenzie (Aug. 2023), https://insightplus.bakermckenzie.com/bm/attachment_dw.action?attkey=FRbANEucS95NMLRN47z%2BeeOgEFCt8EGQJsWJiCH2WAWuU9AaVDeFglGa5oQkOMGl&nav=FRbANEucS95NMLRN47z%2BeeOgEFCt8EGQbuwypnpZjc4%3D&attdocparam=pB7HEsg%2FZ312Bk8OIuOIH1c%2BY4beLEAezirm3%2BK7wMU%3D&fromContentView=1 [https://perma.cc/256X-T39M].
Seaton Huang et al., Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023, Stan. U.: DigiChina (Apr. 11, 2023), https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/ [https://perma.cc/9M4C-HEVZ].
Id.
Barrett, supra note 117.
Shannon Capone Kirk et al., Judges Guide Attorneys on AI Pitfalls with Standing Orders, Ropes & Gray (Aug. 2, 2023), https://www.ropesgray.com/en/newsroom/alerts/2023/08/judges-guide-attorneys-on-ai-pitfalls-with-standing-orders [https://perma.cc/34LM-RLUY].
Id.; see also Dan Mangan, Judge Sanctions Lawyers for Brief Written by A.I. with Fake Citations, CNBC, https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html [https://perma.cc/ZZ8J-9KSA] (last updated June 22, 2023, 3:53 PM).
Mandatory Certification Regarding Generative Artificial Intelligence, supra note 74.
Jack Greiner, Lawyers Who Use AI Might Not Be All That Intelligent, Cincinnati Enquirer (June 8, 2023, 11:32 AM), https://www.cincinnati.com/story/opinion/2023/06/08/can-lawyers-use-ai-what-happened-to-lawyer-who-used-chatgpt/70300183007/ [https://perma.cc/E6BZ-SXML].
Gabriel A. Fuentes, U.S. Dist. Ct. for N. Dist. Ill., Standing Order for Civil Cases Before Magistrate Judge Fuentes 2 (2023), https://www.ilnd.uscourts.gov/_assets/_documents/_forms/_judges/Fuentes/Standing Order For Civil Cases Before Judge Fuentes rev’d 5-31-23 (002).pdf [https://perma.cc/H9ED-FWFG].
Stephen Alexander Vaden, U.S. Ct. of Int’l Trade, Order on Artificial Intelligence 2 (2023), https://www.cit.uscourts.gov/sites/cit/files/Order on Artificial Intelligence.pdf [https://perma.cc/6EEY-RR3X].
Michael M. Baylson, U.S. Dist. Ct. for E. Dist. Pa., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (2023), https://www.paed.uscourts.gov/sites/paed/files/documents/procedures/Standing Order Re Artificial Intelligence 6.6.pdf [https://perma.cc/Q3FM-DHUD].
Cassandre Coyer, All AI Isn’t Generative AI: Lawyers See Court AI Disclosure Rules as Too Broad, Legal Tech News (June 30, 2023, 2:30 PM), https://www.law.com/legaltechnews/2023/06/30/all-ai-isnt-generative-ai-lawyers-see-court-ai-disclosure-rules-as-too-broad/ [https://perma.cc/J55M-DL9T].
Adolfo Pesquera, ‘We Need to Get Ahead of This’: Texas State Bar Weighs in on Artificial Intelligence, Tex. Law. (July 28, 2023, 5:17 PM), https://www.law.com/texaslawyer/2023/07/28/we-need-to-get-ahead-of-this-texas-state-bar-weighs-in-on-artificial-intelligence/ [https://perma.cc/T26L-BWYX]; see also Cheryl Miller, California State Bar to Craft Guidance on AI in the Legal Profession, Recorder (May 22, 2023, 7:13 PM), https://www.law.com/therecorder/2023/05/22/california-state-bar-to-craft-guidance-on-ai-in-the-legal-profession/ [https://perma.cc/8A2W-V8UZ].
Will You Help to Shape the Future of AI?, Am. Bar Ass’n, https://www.americanbar.org/groups/departments_offices/fund_justice_education/donate/fje-lawai/ [https://perma.cc/U9MQ-3EKZ] (last visited Jan. 2, 2024).
Am. Bar Ass’n, Resolution 604, at 1–2, 4 (2023), https://www.americanbar.org/content/dam/aba/directories/policy/midyear-2023/604-midyear-2023.pdf [https://perma.cc/2HKN-9TAD].
Id. at 4.
Id.
Id. at 8.
Am. Bar Ass’n, Resolution 112 (2019), https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf [https://perma.cc/JA3G-GZ8Q].
See Model Rules of Pro. Conduct r. 1.4 (Am. Bar Ass’n 1983).
DOE AI Risk Management Playbook (AIRMP), U.S. Dep’t of Energy, A.I. & Tech. Off., https://web.archive.org/web/20240318152252/https://www.energy.gov/ai/doe-ai-risk-management-playbook-airmp [https://perma.cc/U2HM-NL6G] [hereinafter AIRMP] (last visited Jan. 8, 2025).
Id.
Id.
Id.
Id.
Id.
Id.
Id.
See Daniel W. Linna Jr. & Wendy J. Muchman, Ethical Obligations to Protect Client Data When Building Artificial Intelligence Tools: Wigmore Meets AI, Am. Bar Ass’n (Oct. 2, 2020), https://www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/ethical-obligations-protect-client-data-when-building-artificial-intelligence-tools-wigmore-meets-ai/ [https://perma.cc/Q8TE-VET9].
AIRMP, supra note 151.
Id.
Id.
See id.; see also Linna & Muchman, supra note 159.
See Linna & Muchman, supra note 159 (“A lawyer’s duty of competence is nondelegable to a nonlawyer, even when the client employs an expert in any of the processes. Although a lawyer may not delegate the duty of competence, he or she may rely on advisors of established technological competence in the relevant field.” (footnotes omitted)).
Id. (“Informed consent is a fundamental principle of lawyers’ representation of clients. The requirement to obtain it is contained in almost one-third of all ethics rules.”).
Id. (“[I]f the lawyer or law firm and its data scientists are developing models to predict acceptable settlement ranges, the lawyers must satisfy their duty of competence which means the lawyers must understand the model, how the data was obtained and input, and be satisfied that the settlement amount is within an appropriate range.”).
Sean Collin et al., Incorporating AI: A Road Map for Legal and Ethical Compliance, Am. Bar Ass’n (July 10, 2024), https://www.americanbar.org/groups/intellectual_property_law/publications/landslide/2023-24/june-july/incorporating-ai-road-map-legal-ethical-compliance/ [https://perma.cc/RM7Y-P79S] (“The Generative AI Measures impose responsibilities on generative AI service providers to take actions regarding content moderation . . . source training data . . . tag AI-generated content, safeguard user rights and personal information . . . and perform security assessment and algorithm registry filing for generative AI services associated with public opinion or social mobilization . . . .”).
See Helen Toner et al., How Will China’s Generative AI Regulations Shape the Future? A DigiChina Forum, Stan. U.: DigiChina (Apr. 19, 2023), https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/ [https://perma.cc/NH7Y-M6CE].
Id.