THROUGH THE AI-LOOKING GLASS AND WHAT CONSUMERS FIND THERE[1]

By:

Ashley Krenelka Chase* & Sam Harden**

Abstract

While a lack of internet regulation is the norm in the United States, generative artificial intelligence (AI) presents a series of new challenges, particularly in the legal field. Those who are trained in the law know to check their sources, whether they come from case law or a generative AI tool like ChatGPT, but the average consumer is not so discerning. When that average consumer is in the midst of dealing with legal issues and has to navigate those issues without a lawyer, they are less likely to sit back and evaluate the information they are being given, particularly if it looks bright, shiny, and full of knowledge, promising to help them navigate the legal system quickly and efficiently. This lapse in judgment, whether conscious or subconscious, may deepen the justice gap and cause those who are unfamiliar with the legal system to become even more distrustful of not only the system, but also the resources that are meant to help self-represented litigants navigate that system in a meaningful way.

Introduction

After fifteen years of marriage, three children, and opening a restaurant together, Sarah and John are divorcing. The divorce is amicable, and they hope to resolve things with a self-drafted marital settlement agreement and parenting plan (though neither of them knows they need both of those documents, or that those are the phrases for what they hope to draft). Sarah sits in front of her computer, opens an internet browser, and searches for “divorce agreement.” She is met with hundreds of thousands of results, but the first catches her eye: “Save Time with AI! Draft Your Legal Agreement Today—No Attorneys Needed!” Sarah is intrigued, navigates to the website, and gets started . . .

Currently, the website described above is an unregulated no-man’s land. With the appropriate disclaimers about legal advice, any company can put a consumer-facing generative AI product on the internet, call it whatever they want, and promise any outputs they think are most marketable to the average internet searcher. Search engine optimization can push sites like this to the top of any list of results, making even the most cautious and thoughtful internet user much more likely to click on the link.

While a lack of internet regulation is the norm in the United States, generative artificial intelligence presents a series of new challenges, particularly in the legal field. While those who are trained in the law know to check their sources,[2] whether they come from case law or a generative AI tool like ChatGPT, the average consumer is not so discerning. When that average consumer is in the midst of dealing with legal issues and has to navigate those issues without a lawyer, they are less likely to sit back and evaluate the information they are being given, particularly if it looks bright, shiny, and full of knowledge, promising to help them navigate the legal system quickly and efficiently. This lapse in judgment, whether conscious or subconscious, may deepen the justice gap and cause those who are unfamiliar with the legal system to become even more distrustful of not only the system, but also the resources that are meant to help self-represented litigants navigate that system in a meaningful way. This gap could be filled with regulation.

This Article will begin with a brief explanation and analysis of generative artificial intelligence more broadly, as well as its current role in the legal field. It will go on to analyze global regulatory frameworks surrounding artificial intelligence and compare those frameworks to the current approaches in the United States. In Part II, this Article will discuss access to justice in the United States and the ways in which technology currently is and is not filling that gap, as well as the regulations currently applied to the industry. Part III will propose a scheme for regulating consumer-facing generative AI products and analyze the potential and pitfalls of regulation. Next, Part IV will discuss enforcement of any consumer-facing generative AI products that may be created to fill the justice gap, while Part V will look to the other side of the looking glass and discuss predictions based on whether meaningful consumer-facing generative AI reaches those in the justice gap and whether regulating those products becomes a reality.

I.  “Somehow it Seems to Fill My Head with Ideas”[3]: Generative Artificial Intelligence

In 2023, generative AI was a popular topic, grabbing headlines and distracting from other technologies.[4] Generative AI is nothing more than a computer model that uses massive amounts of information to predict what language should come next and, while inspired by the functioning of the human brain, does not have any neural connections of its own.[5] Generative AI is a term that covers many applications that create things like photos and human-like text, and “exemplify . . . [the] remarkable potential of generative AI [to] transform . . . content generation, and human-machine interaction, paving the way for further advances in” things like text generation and even the practice of law.[6]
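
To make that prediction mechanism concrete, the following toy sketch (in Python, with an invented vocabulary and made-up probabilities; it is an illustration, not how any production model is built) shows a model repeatedly sampling a plausible next word:

```python
# A toy illustration of next-word prediction, the core mechanism described
# above. The vocabulary and probabilities are invented; a real model learns
# billions of such statistics rather than using a hand-written table.
import random

next_word_probs = {
    "the":   {"court": 0.4, "party": 0.35, "agreement": 0.25},
    "court": {"finds": 0.5, "orders": 0.3, "may": 0.2},
    "finds": {"that": 0.9, "the": 0.1},
}

def generate(start: str, length: int = 4) -> str:
    """Repeatedly sample a plausible next word given the previous one."""
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if options is None:  # no learned continuation; stop generating
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the court finds that the court"
```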

A.  Definitions and Role Broadly

Generative AI is not new. In fact, the first instances of generative AI emerged in the 1960s.[7] Before generative AI became a mainstay in the consumer marketplace, its impact was already being felt across a variety of industries, though adoption was far from universal. Part of the reason for the lack of adoption across industries was the lack of investment in data: the training of OpenAI’s GPT-3 cost more than four million dollars, and large models are expensive to train and run.[8] Additionally, every time new technology (whether AI-related or otherwise) becomes a topic of conversation in popular culture, fears about robots taking over human jobs run rampant.[9] But the opportunity for users, not just technologists, to create innovative uses for AI is significant, and some CIOs predict that workforces may use AI to inspire a more self-service[10] and entrepreneurial culture within organizations.[11]

B.  Current Use in the Legal Field

The legal field is, perhaps, the ripest for this entrepreneurial use of generative AI to take hold. Indeed, there have already been reported cases of people seeking legal information and advice from generative AI models. In one instance, a woman in New York documented her use of ChatGPT when she drafted a prompt directing ChatGPT to “act as a housing lawyer” and write a letter to her landlord opposing a rent increase.[12] In cases where individuals are ensconced in vexatious litigation about matters they do not understand, ChatGPT can help them understand court legalese and make the process easier to navigate—something that may have been difficult (or embarrassing) before the popularity of generative AI tools.[13]

Several “AI Lawyer” tools have recently been developed using large commercial generative AI models. One AI tool created for South Africa promises “to provide ordinary citizens with easy access to legal knowledge and justice, revolutionising [sic] the way legal services are delivered in South Africa.”[14] Another, the “AI Lawyer” web application, claims that it is “ready to give you expert legal help anytime, anywhere.”[15] A third, the Ask AI Lawyer website, offers “a completely free service that utilizes the most advanced artificial intelligence technology to provide you with answers to your legal questions.”[16] One tech firm even attempted to have a “robot lawyer” argue in court, but discontinued the effort after threats of criminal charges.[17]

C.  Global Regulatory Frameworks

The race to regulate AI is not dissimilar from other global technology races: countries are either in, or they’re out.[18] Where the U.S. is notoriously slow to regulate technology,[19] other countries are often (if not always) eager to be at the front of the line. With varying degrees of success, the European Union and China have taken more straightforward approaches to regulating AI technologies than the United States.

1.  The European Union

The European Union (EU) has had many successes regulating technology. From net neutrality to consumer data protection and privacy, its member states don’t shy away from protecting consumers while still encouraging innovation within the European Union.[20] In 2023, the EU declared that its parliament was preparing the “world’s first set of comprehensive rules to manage the opportunities and threats of AI . . . to turn the EU into a global hub for trustworthy AI.”[21] These opportunities and threats are debated around the world, but the EU has identified the benefits to people and consumers to include “health care, safer cars and other transport systems, tailored, cheaper and longer-lasting products and services . . . facilitate access to information, education, and training . . . make workplace[s] safer . . . and open new job positions.”[22] Conversely, the identified risks include both underuse and overuse of the technology: AI poses challenges in determining liability, negative impacts on the labor market, and pervasive threats to individuals’ fundamental rights and the functioning of democracy.[23]

Put in those terms, it seems that the threats to consumers and individuals far outweigh the benefits, making regulation even more essential to a society that functions with ever-advancing AI innovations. The initial rules from the EU aimed “to promote the uptake of human-centric and trustworthy AI and protect the health, safety, and fundamental rights and democracy from its harmful effects.”[24]

To meet these ends, the EU Parliament created a list of banned uses of AI it deemed to be discriminatory and intrusive, including real-time and “post” remote biometric identification, predictive policing, emotion recognition systems, and untargeted scraping of facial images.[25]

In addition to these outright bans, the EU proposed some obligations for AI identified as general purpose, including risk mitigation, registration, transparency requirements, and safeguards against illegal content.[26] Further, the EU sought to boost AI innovation and support and added exceptions for research activities and AI components provided under open-source licenses.[27] The final outcome of these proposals was the EU’s Artificial Intelligence (AI) Act, adopted on June 14, 2023.[28] The regulations included 771 amendments, and the entirety of the AI Act was then passed on for talks with EU Member Countries to determine the final form of the law, with a goal of having it completed by the end of 2023.[29]

While the impacts of the AI Act will likely be positive for the European Union, its impact will be felt on a global scale.[30] The EU’s propensity to be first-to-regulate and impact the rest of the world is called the “Brussels Effect,” but to what extent the Brussels Effect will be felt with regard to AI remains to be seen.[31] In the past, the Brussels Effect has taken two forms, de facto and de jure.[32] Where the EU regulates only its internal market, and external, multinational corporations are incentivized to standardize their global production to adhere to the EU rules, there is a de facto Brussels Effect.[33] Once the companies adjust their businesses to meet the EU’s standards, they are incentivized to convince their home governments to adopt the same standards in order “to level the playing field against their domestic, non-export-oriented competitors,” creating the de jure Brussels Effect.[34]

Because of its ability to affect global markets, the EU’s regulatory agenda is often driven by entrenched domestic policy preferences that it forces on external markets, thereby making the external market regulation a byproduct of its internal goals, “rather than . . . some conscious effort to engage in ‘regulatory imperialism.’”[35] The EU’s position as the largest economy in the world gives it great success in impacting external market forces, but other countries like China and the U.S. are large enough to similarly use their markets as leverage.[36]

2.  China

While China technically began regulating AI in advance of the EU, its regulations were not as widely discussed in global markets until after the EU announced the AI Act.[37] China began regulating AI in March of 2022 with its Algorithm Recommendation Regulation, which regulated the use of algorithm recommendation technologies to provide online services in China.[38] In November of 2022, China’s Ministry of Public Security and Ministry of Industry and Information Technology jointly adopted the Deep Synthesis Regulation, which went into force on January 10, 2023.[39] The Deep Synthesis Regulation regulates technologies in China that provide information services to the public, when those technologies “utilize generative and synthetic algorithms, such as deep learning and virtual reality, to generate text, image, audio, video, virtual scenes, and other internet information.”[40] On July 13, 2023, almost exactly one month after the EU’s parliament adopted the AI Act, the Cyberspace Administration of China, China’s National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly published the Generative AI Regulation, which went into force on August 15, 2023.[41] The Generative AI Regulation targets a broader scope of generative AI technologies than its regulatory predecessors and applies to the use of all generative AI technologies to provide services to the public in China, but it specifically excludes the development and application of generative AI technologies that have not been used to provide services to the public in China.[42]

The Generative AI Regulation imposes requirements mainly on providers of services that use generative AI, including technical supporters who provide generative AI service technologies through APIs to consumers.[43] The Generative AI Regulation is extensive and imposes obligations on everything from AI service providers to algorithms that recommend products to consumers.[44] There are significant penalties for violating the Generative AI Regulation, some of which are explicitly set out and others which are not.[45] While the Generative AI Regulation is far more expansive and explicit than the EU’s AI Act, it is unlikely that a similar global impact will be felt.

Many global businesses are unable or unwilling to do business in China for a variety of reasons, but the United States could learn from China’s “targeted and iterative approach to AI governance.”[46] China was able to move quickly to pass the Generative AI Regulation because the Algorithm Recommendation Regulation and the Deep Synthesis Regulation were already in existence; the Generative AI Regulation was merely an extension of the previous two regulations.[47] This approach to regulating generative AI is worth noting, particularly in the United States where lawmaking and regulating seem to be at a standstill due to the tumultuous happenings in Washington.

3.  The United States

Because the United States has a haphazard way of legislating in the best of times, rulemaking in fast-moving areas like AI tends to fall to the executive branch—often directly to the President. On October 30, 2023, the Biden Administration issued Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”[48] Executive Order 14110 seeks to “advance and govern the development and use of AI in accordance with eight guiding principles and priorities . . .” including ensuring the safety of AI, responsible innovation practices and development, and requiring privacy for those who use the products, among other things.[49]

The nebulous Executive Order 14110 does nothing to effectuate actual regulation of AI, and while 197 pieces of legislation referencing AI have been introduced in the 2023–2024 legislative session to date, not a single one has been signed into law, and only one—the National Defense Authorization Act for Fiscal Year 2024, which only references AI in passing—has passed both chambers.[50]

Where the federal legislative and executive branches have failed to act in meaningful ways, states have taken up some of the slack. Six states passed AI laws that went into effect in 2023, and most of those laws relate to consumer privacy, allowing users to opt out of profiling and mandating data protection assessments of automated decision-making. Only New York City’s law, titled Automated Employment Decision Tools, regulates a particular use of AI, requiring annual audits of AI tools used in hiring and allowing job candidates to request the data used by AI tools in the hiring process.[51]

New York City’s law requiring audits and disclosure of AI tools used in hiring is an admirable and useful first step to meaningfully regulate AI in the United States, and could have impacts in other jurisdictions in the future. In the short term, however, we are left to wonder what it may look like to regulate consumer-facing AI on a much broader scale.

II.  “It Seems Very Pretty . . . but it’s Rather Hard to Understand.”[52] Access to Justice

“Access to justice” does not have a clear definition and is often described with specific populations in mind. More broadly, the idea of access to justice includes procedural and substantive elements that are dependent upon one another.[53] One of the most basic definitions of access to justice is when “a person facing a legal issue has timely and affordable access to the level of legal help they need to get a fair outcome on the merits of their legal issue, and can walk away believing they got a fair shake in the process.”[54] This definition makes it clear that access to justice is possible for any person, navigating any legal issue, in any legal system. But unless people believe the access and outcome they’ve received are fair, access to justice cannot truly be achieved.

A.  Definitions and Existence Broadly

In the United States, access to justice is currently guided by three principles.[55] The first is to promote accessibility by eliminating all barriers that may prevent litigants from understanding and exercising their rights in the American legal system.[56] The second principle seeks to accelerate innovation in legal systems.[57] The goals of a fair legal system are to deliver just outcomes to all parties to litigation, including those who can’t afford counsel or face other disadvantages in navigating the justice system, whether civil or criminal.[58] The final principle aims to safeguard integrity in the system, with a primary goal being to promote “policies and reforms that improve the accountability, fiscal responsibility and integrity of legal systems and process[es].”[59]

Historically, however, there are very few mentions of access to justice in the terms we think about today. Typically, when discussing access to justice or the courts, historical documents reference lawyers being required to serve the poor simply because law practice was, in medieval times, so technical that no person not trained in the law could navigate the rules without representation.[60] But despite references to assisting those who were not trained in the law, there is no way to know how frequently that kind of representation happened.[61] Beginning in 1863, the Working Women’s Protective Union began subsidizing programs to help poor people deal with social and legal problems by helping workers collect fraudulently withheld wages.[62] The idea quickly spread and expanded, and legal aid societies began popping up in the early 20th century.[63] Around the same time, lawyers attempted to raise standards within the profession by imposing new educational and bar exam requirements, and a Carnegie Foundation report titled Justice and the Poor was released indicting unequal access to justice, making it the leading manifesto for legal aid organizations for the rest of the 20th century.[64]

In 1965, as a part of his war on poverty, President Johnson funded the Office of Economic Opportunity’s Legal Services Program (since renamed Legal Services Corporation, or LSC), and national bar leaders began supporting the program; the budget quickly grew from $5 million to $489 million in 2022.[65] Despite a national interest in providing assistance to those who couldn’t afford legal counsel, widespread adoption of pro bono hours by practicing attorneys has not been the norm in the United States. “[R]eliable estimates are that, nationwide, American lawyers, on average, perform about half an hour of pro bono work, broadly defined, per year.”[66]

While some blame the complexities of the law—and therefore the justice gap—on lawyers themselves, the highest barriers to accessing the legal system in the United States are complexity and cost.[67] It follows that both attorneys and the members of the public who need to access the criminal or civil justice system would like to reduce both, but attorneys have an inherent and protectionist interest in limiting the accessibility of the system.[68] In addition to attorneys’ reluctance to lead by example and make the system more accessible, the judiciary is not keen on opening the justice system up to outsiders. Courts often declare people to be engaged in the unauthorized practice of law when they are simply living their lives, trying to understand the way the law applies to them or the people around them, or trying to innovate to make the law more accessible for others who may not be lucky enough to have a basic understanding.[69]

The current access-to-justice crisis in the U.S. has been well-documented: “On an annual basis, 55 million Americans experience 260 million legal problems. Of those legal problems, . . . 120 million legal problems are not resolved fairly every year.”[70] Only 49% of legal problems are typically resolved.[71] In legal problems that become court cases, the percentage of cases where both sides have legal representation has declined dramatically over the past decades.[72] In 1992, the percentage of cases where both plaintiffs and defendants had legal representation was 95%.[73] In 2015, that percentage had dropped to just 24%.[74] In cases where neither party was represented by an attorney, studies have found that judges rarely offer information about courtroom procedures, and when unrepresented parties ask the judge to explain or clarify things, the judge often refuses to answer, or, in some cases, even criticizes them.[75]

The human element, then, makes it hard for access to justice to be achieved for every person, in every case, every time. Technology has filled the void in other areas of practice where humans have needed assistance achieving the desired outcome,[76] and it’s likely that technology can help to fill the justice gap and provide additional access to the system for those who need it most.

B.  Need for Technology to Fill the Void and the Way That’s Currently Being Done

The impact of these unresolved legal issues can be far-reaching. When surveyed, 45% of people reported experiencing negative consequences as a result of their legal problems.[77] Those consequences included negative impacts on mental health, loss of money, debt, and loss of a job or limited ability to work.[78]

There is an obvious medium that can help those in the justice gap: technology. The internet has been the most transformative technology to date in the delivery of legal services across consumers of all income levels.[79] Even before the advent of generative artificial intelligence tools like ChatGPT, Bard, or Lexis AI, attorneys had another enemy lurking around the corner: DoNotPay. DoNotPay began as an application to help individuals get out of parking tickets.[80] It expanded quickly into a larger-scale operation that seeks to get people out of everything from parking tickets to recurring monthly fees they’ve unwittingly agreed to pay while clicking through online contracts.[81] And as quickly as DoNotPay began helping people who didn’t want or need attorneys to handle small-scale issues, practicing attorneys jumped in to argue about the existential threat to their jobs.[82]

The proposed class action against DoNotPay argues that the application is engaged in the unauthorized practice of law, because it claims to be “the world’s first robot lawyer,” but without the benefit of legal training, admittance to the bar, or supervision by a properly-licensed attorney.[83] DoNotPay, the class action complaint alleges, merely relies on “substandard [] legal documents . . . based on information input by customers” and flouts the regulation of lawyers that is the norm in every state in the country.[84] And DoNotPay is not the first “robot lawyer” that has been accused of practicing law.[85] In January of 2018, the Florida Bar filed a petition against TIKD Services, LLC and Christopher Riley, seeking to enjoin them from engaging in the unauthorized practice of law.[86] TIKD, the Florida Bar argued, “practices law” by using an algorithm to examine traffic tickets and determine whether it should provide “services” to the driver who added the information to the application.[87] “If TIKD accepts a ticket, the driver is charged a percentage of the ticket’s face value, and his or her contact information is forwarded to a Florida-licensed attorney whom TIKD has contracted with to provide traffic ticket defense services to its customers.”[88]

The Florida Supreme Court found that this process of analyzing a ticket and referring the ticketholder to a licensed attorney to pursue a potential legal claim constituted the unauthorized practice of law. Interestingly, in the same opinion, the Court seemed to acknowledge the value of a resource like TIKD:

It could be argued . . . that TIKD in some ways increases affordable access to our justice system. However, irrespective of any benefits arguably created by TIKD’s unique, and perhaps temporary, niche, we cannot address the access to justice problem by allowing nonlawyer corporations to engage in conduct that, under this Court’s sound precedent, constitutes the practice of law.

We recognize that advances in technology have allowed for greater access to the legal system . . .[89]

It seems, then, that the judiciary and the practicing bar are accepting of technology until they are not, and a high level of skepticism surrounding generative AI can be expected.

Artificial intelligence has the potential to have an enormous impact on access to justice.[90] But there is currently a great deal of uncertainty around whether the outputs of generative AI could be considered legal advice. The Florida Bar’s committee on generative AI has reportedly discussed “whether legal advice provided by generative AI ‘could be considered the unauthorized practice of law.’”[91] In its proposed advisory opinion, the committee suggested that a generative AI model could potentially perform acts that constitute the practice of law: “First and foremost, a lawyer may not delegate to generative AI any act that could constitute the practice of law such as the negotiation of claims or any other function that requires a lawyer’s personal judgment and participation.”[92]

Given the Florida Supreme Court’s tendency to see all technology as threatening, it is hard to believe that the court won’t, when the time comes, find generative AI to be engaged in the unauthorized practice of law.[93]

On the opposite side of the spectrum, the California Committee on Professional Responsibility and Conduct (COPRAC) released its “Recommendations from Committee on Professional Responsibility and Conduct on Regulation of Use of Generative AI by Licensees.” In these recommendations, COPRAC called for the California Board of Trustees to:

Work with the Legislature and the California Supreme Court to determine whether the unauthorized practice of law should be more clearly defined or articulated through statutory or rule changes; and . . . determine whether legal generative AI products should be licensed or regulated and, if so, how.[94]

It seems that California’s cautious approach to generative AI makes the most sense given the popularity of the platforms and their ability to change the landscape of access to justice. While there are conversations about regulating AI happening throughout the country, few regulatory frameworks are exemplary.

C.  Current Regulatory Frameworks

For their part, state bars have always had the power to regulate the practice of law, and that regulatory power extends to the regulation of the unlawful practice of law by non-lawyers.[95] Simply disclosing one’s status as a non-lawyer to the public does not permit a non-lawyer to practice law,[96] which often leaves lawyers and non-lawyers alike wondering what, exactly, constitutes the practice of law. The definition of “legal advice” in many states is determined on a case-by-case basis, and “ascertaining whether a particular activity falls within [the practice of law] may be a formidable endeavor.”[97]

Some state bars have attempted to regulate the publication of books under their authority to regulate the “practice of law.”[98] Several decades later, the online legal forms provider LegalZoom has been accused of the unlicensed practice of law by a number of states, including North Carolina, Missouri, and California.[99] Because the regulation of the practice of law and the giving of legal advice is under the authority of the states, it is entirely possible that one state may find that an AI model is giving legal advice, while another state finds that it does not. Further complicating things, generative AI models’ behavior differs not just from model to model, but from time to time even when using the same model.[100] An example of this can be found in Google Bard’s “View Other Drafts” feature, where users can see and rate other draft responses the model created.[101] So, if a state does choose to regulate generative AI, it would need to do so in a way that meaningfully encompasses all of these factors.
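
This run-to-run variability is a direct consequence of how these models produce text: responses are sampled from a probability distribution rather than retrieved deterministically. The following simplified sketch illustrates the point; the candidate answers and scores are invented, but the sampling step they stand in for is why identical prompts can produce different outputs:

```python
# A simplified sketch of why identical prompts can yield different answers:
# generative models *sample* from a probability distribution over outputs.
# The candidate answers and scores here are invented for illustration; real
# models sample token by token, but the run-to-run variability is the same.
import math
import random

candidate_answers = {
    "It depends on the laws of your state.": 2.0,
    "You may have a claim; consider small-claims court.": 1.6,
    "You should consult a licensed attorney.": 1.4,
}

def sample_answer(scores: dict, temperature: float = 1.0) -> str:
    # Softmax with temperature: higher values flatten the distribution,
    # making lower-scored answers more likely to be chosen.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# Two calls with the same "prompt" can produce two different answers:
print(sample_answer(candidate_answers))
print(sample_answer(candidate_answers))
```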

To address these challenges, several states have attempted to create language that specifically deals with technology, artificial intelligence, or both. Florida, usually among the first to ring the alarm about issues caused by technology, issued a proposed advisory opinion stating: “[L]awyers using generative AI must take reasonable precautions to protect the confidentiality of client information, develop policies for the reasonable oversight of generative AI use, ensure fees and costs are reasonable, and comply with applicable ethics and advertising regulations.”[102] Prior to the advent of generative AI, but still relevant to the current state of legal technology, Texas specifically carved out an exception for technology, stating:

(c) [T]he “practice of law” does not include the design, creation, publication, distribution, display, or sale, including publication, distribution, display, or sale by means of an Internet web site, of written materials, books, forms, computer software, or similar products if the products clearly and conspicuously state that the products are not a substitute for the advice of an attorney.[103]

In January of 2024, North Carolina published its Proposed Ethics Opinion on the Use of Artificial Intelligence in Law Practice, which discussed professional responsibility issues arising when using AI in the legal profession.[104] North Carolina’s approach sought to answer questions including: whether a lawyer can be permitted to use AI; whether a lawyer can put a client’s data into a third-party AI program; whether a lawyer has to disclose use of AI to clients; and how a lawyer may bill for time spent using AI, considering the savings generated by the AI tool.[105] North Carolina’s approach seems to ask the right questions about how generative AI is used in practice, but it leans toward the trend of anthropomorphizing AI tools as “non-lawyers” that must be supervised, as in Florida.[106] It’s clear that the people and organizations who seek to monitor or regulate generative AI don’t really understand what AI is or is not, and the line between what may or may not be considered legal advice remains fuzzy.

California took another approach, acknowledging that the state’s Rules of Professional Conduct did not expressly address the use of generative AI, which created significant uncertainty about the ethical duties for attorneys who might seek to use those resources.[107] In recognizing that the technology will likely change quickly, California issued “Practical Guidance” based on MIT’s Task Force on Responsible Use of Generative AI for Law, which seeks to remind lawyers of their existing professional responsibility obligations and to apply those obligations to any new technology created to assist lawyers.[108]

California’s COPRAC explicitly stated an intention to study generative AI and make recommendations to the Board regarding: balancing rules for the use of AI to protect clients and the public; supervision of non-human assistance; and determining whether attorney competency should extend to the AI product and whether AI use needs to be communicated to clients.[109]

The concerns and potential recommendations from the Board in California echo concerns that are being heard around the United States: what are these robots, can they practice law, and how do we let people know what’s going on?

Liability is also a potential issue. A generative AI provider could face criminal liability for the unauthorized practice of law, as well as civil liability from a user getting “bad advice.”[110] With the need to fill the justice gap so great, and the potential of generative AI to be an effective tool to help self-represented litigants pave their own way through the criminal and civil legal systems, users need and deserve clarity around whether the outputs of generative AI tools are legal advice. Regulation could provide this clarity and illuminate what consumers can and cannot expect when they encounter generative AI tools that, presumably, seek to provide additional opportunities for access to the justice system.

III.  “What Could be Seen . . . Was Quite Common and Uninteresting, But All the Rest Was as Different as Possible.”[111] A Proposed Scheme for Regulation

Regulation is never easy and has grown increasingly unpopular in the United States.[112] While some people risk receiving bad legal advice from consumer-facing AI tools deployed to help fill the justice gap, the potential benefits far outweigh those risks. The Hague Institute for Innovation of Law conducted a study, and “[t]hrough interviews with innovators and those working within the justice institutions, [they] observe[d] a growing awareness that technology presents risks. The benefits that digital tools bring, however, far outweigh the risks—especially in providing access to justice in low and lower middle income countries.”[113]

But with the opportunities and risks associated with using generative AI to increase access to justice so great, regulation of consumer-facing platforms is the best way to ensure that those who need access to the justice system receive exactly what they need and nothing they don’t, with transparency along the way. In regulating consumer-facing AI applications for those who need assistance with the justice system, two goals must be centered: (1) the public must be protected from bad and negligent actors; and (2) the public must be able to access affordable and effective legal help through generative AI models.

To accomplish these goals, it is necessary to remove uncertainty around the question of whether a company offering a “legal AI model” could be liable for their model’s legal advice. To solve this problem, this framework suggests that if the providers of public-facing legal AI tools can meet the proposed requirements, they will be entitled to two legal presumptions:

(1) a liability presumption that their products meet the prevailing standard of care,[114] and

(2) a statement by state and local bars and any other authoritative body that the AI tool cannot be found to “practice law” by giving legal advice.

This regulatory scheme is incentive-based. An AI company or developer would not be legally required to comply in order to offer a product or service. Rather, compliance with the regulations would offer them a shield from potential liability.

As with any regulatory framework, it is important to start with requirements to ensure the needs of both the regulatory body and the entity being regulated are met. To regulate consumer-facing AI, the following requirements are proposed:

Disclosure, upon request, of any built-in prompting or instructions that are sent to the AI model along with the user’s input.

In generative AI applications, typically the user’s input is sent to the model alongside special instructions, such as “You are a helpful researcher, please answer this question:” followed by the user’s input.[115] Such instructions are typically used to increase the model’s effectiveness and the quality of its response; however, they can also be used to manipulate the response in certain ways which may be detrimental to the user. For example, a generative AI tool marketed as a “mental health chatbot” could be instructed behind the scenes to recommend a certain medication.[116]
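
A minimal sketch, assuming a hypothetical wrapper application rather than any real product, illustrates the mechanism and why disclosure matters: the user never sees the instructions the provider prepends.

```python
# A minimal sketch, assuming a hypothetical wrapper application (no real
# product's code), of how built-in prompting works: the provider silently
# prepends its own instructions to whatever the user types. The proposed
# disclosure requirement would make those instructions available on request.
HIDDEN_INSTRUCTIONS = "You are a helpful researcher, please answer this question: "

def build_prompt(user_input: str) -> str:
    """What the model actually receives, not just what the user typed."""
    return HIDDEN_INSTRUCTIONS + user_input

def disclose_instructions() -> str:
    """The disclosure this framework would require, upon request."""
    return HIDDEN_INSTRUCTIONS

print(build_prompt("Can my landlord raise my rent mid-lease?"))
```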

A.  Disclosure of What Third-Party Generative AI Model, Large Language Model, and/or Application Programming Interface the Product is Using

While there are businesses out there that may have the financial, technological, and personnel resources necessary to produce a home-grown generative AI product that can be used in a consumer-facing legal application, many who seek to enter this space may wish to do so using an existing third-party generative AI model. An example of this language might read as simply as: “This product is using the GPT-4 model by OpenAI.”
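
Operationally, this disclosure could be as simple as a few lines of product metadata surfaced to the user. The values in the following sketch are placeholders, not a statement about any actual product:

```python
# A sketch of the proposed model disclosure. The values are placeholders;
# a real provider would substitute whichever third-party model, provider,
# and access method (e.g., an API) its product actually uses.
MODEL_DISCLOSURE = {
    "model": "GPT-4",      # underlying third-party generative AI model
    "provider": "OpenAI",  # who supplies the model
    "access": "API",       # how the product reaches it
}

def disclosure_statement() -> str:
    return (f"This product is using the {MODEL_DISCLOSURE['model']} model "
            f"by {MODEL_DISCLOSURE['provider']}.")

print(disclosure_statement())
```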

B.  Disclaimer

A prominent disclaimer that includes the following information:

o   Hallucinations are possible

o   If a person is seeking legal advice, or experiencing a legal problem, they should consult with an attorney.

Hallucinations are misleading or incorrect information produced by a generative AI product when responding to a user-created prompt.[117] Hallucinations are possible—if not likely—when using generative AI for legal applications. While attorneys using these products are aware of (and often indifferent to) the risks, consumers may not be so aware. A prominent disclaimer explaining not only what hallucinations are, but also that they are possible, will be important to building trust with consumers and making a product successful.

A disclaimer about legal advice is often required when using any web-based application seeking to aid those in legal trouble, whether AI-framed or not. By clearly stating that those seeking legal advice should consult with an attorney, it will be clearer that any information provided by an AI tool is a starting point, not an ending point, for dealing with the justice system.

C.  Data Deletion Policy

To be meaningful, a data deletion policy must operate consistently and effectively. At a minimum, a successful data deletion scheme for generative AI in consumer-facing legal applications would provide:

o   An option for the user to select “delete my data after use,” and the inputs will be deleted, along with responses from the system.

o   A statement that the system cannot use user data for future refinement, training, or Q&A purposes without consent.

o   An option for the user to select “I agree to let this organization use my anonymized data for future refinement, training, and quality assurance.”

Allowing users a level of transparency regarding how their data is stored and used will go a long way to building confidence in the use of generative AI for all applications, but particularly those in the legal field.
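
As an illustration, a minimal sketch of how the three options above might fit together (with hypothetical names; no existing product’s design is implied) could look like this:

```python
# A sketch of the three data-handling options listed above, with hypothetical
# names (no real product's design is implied): deletion is the default, and
# anonymized retention for training happens only on explicit opt-in.
from dataclasses import dataclass, field

@dataclass
class Session:
    delete_after_use: bool = True     # "delete my data after use"
    allow_training_use: bool = False  # no reuse without explicit consent
    transcript: list = field(default_factory=list)

def record(session: Session, user_input: str, response: str) -> None:
    session.transcript.append((user_input, response))

def end_session(session: Session) -> list:
    """Apply the user's choices when the session ends."""
    if session.delete_after_use:
        session.transcript.clear()  # inputs and responses both deleted
        return []
    if session.allow_training_use:
        # Retain only anonymized data, and only with the user's consent.
        return [("<anonymized>", response) for _, response in session.transcript]
    return []
```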

D.  Q&A Process & Expert Review

Testing is a core part of industry-standard “responsible AI practices.”[118] In cases where consumer-facing products provide users with question-and-answer-type resources, it is important to use a defined set of inputs to ensure the information provided remains consistent, trustworthy, and verifiable. The Legal Innovation & Technology Lab at Suffolk Law School has created Spot, an issue-spotting tool that creates standard language for discussing client needs.[119] Spot is used with computer programs to automate issue identification and make the justice system more accessible to those who may not have an understanding of what it is they need from the system.[120] If a consumer-facing AI product or application put in place a set of inputs like those provided by Spot, it would be easier for regulators (and even consumers) to understand what the consumer-facing AI product or application is doing, thereby increasing usability and trustworthiness.

Similarly, any outputs provided by consumer-facing legal AI should be regularly reviewed by a licensed attorney for accuracy and bias. Developers of consumer-facing products should be required to keep the results of these tests—both inputs and outputs—available in a reproducible way, to maintain the product’s consistency and allow the general public to understand any changes made to the products.
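
A minimal sketch of such a testing regime might look like the following. The fixed questions and the ask_model placeholder are invented for illustration; the point is the append-only record of inputs and outputs, which keeps results reproducible and reviewable:

```python
# A sketch of the proposed testing regime: a fixed set of inputs is run
# through the product, and every input/output pair is written to an
# append-only log so results stay reproducible and reviewable. The test
# questions and the ask_model placeholder are invented for illustration.
import datetime
import json

FIXED_INPUTS = [
    "My landlord is raising my rent. What can I do?",
    "How do I respond to an eviction notice?",
]

def ask_model(question: str) -> str:
    # Placeholder: a real product would call its generative AI model here.
    return "General information only; consult an attorney for specifics."

def run_review_snapshot(path: str) -> None:
    results = [{"input": q, "output": ask_model(q)} for q in FIXED_INPUTS]
    snapshot = {
        "date": datetime.date.today().isoformat(),
        "results": results,
        "attorney_reviewed": False,  # flipped to True after expert review
    }
    with open(path, "a") as f:  # append-only: every run stays on record
        f.write(json.dumps(snapshot) + "\n")

run_review_snapshot("qa_snapshots.jsonl")
```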

The regulatory framework built out of the five requirements listed above cannot exist in a vacuum, and it will be essential that providers following these regulations can be certified in some way to demonstrate to consumers that their tool meets the framework. Certification is tricky, however, and requiring a regulatory body to also be a certification body presents additional challenges. A potential solution exists with self-certification.

If a provider can prove they meet the five requirements, it would be simple to state that they are providing a certified generative AI product to the legal marketplace, and a specified seal or marking on the product would allow some degree of assurance for any self-represented litigant (or general consumer) that the product meets, at a minimum, these five requirements. If a provider is sued by a consumer and the provider can prove they met the standards, then the provider would be entitled to a rebuttable presumption that they met the applicable standard of care in providing the public with a product that utilizes generative AI. In addition, the presumption would allow any organization providing a generative AI tool to the general public a rebuttable presumption that, as a matter of law, their product is not engaged in the unauthorized practice of law. By allowing these presumptions to attach to any product that follows the regulations and is self-certified, the risk of an onslaught of lawsuits related to the use of these products will be, if not minimized, then streamlined.

The acts of regulation (or self-regulation) and certification (or self-certification) seem relatively easy compared to the bigger issue at hand: enforcing the regulations against bad actors. Because artificial intelligence products are being created quickly and marketed to consumers even more quickly, a method of enforceability would be ideal to make the regulations meaningful. But what is the best way to enforce?


 

IV.  “I wonder, now, What the Rules of Battle are”[121]: A Proposed Scheme for Enforcement

The problem with enforcement is that it is difficult. This proposed scheme—where producers of products utilizing generative AI self-certify that they have followed the regulations—helps front-load some of the logistics regarding enforcement and ensuring products in the marketplace aren’t created by bad actors. But what if the best way forward isn’t a ban on “unsanctioned” (or uncertified) artificial intelligence for self-represented litigants or those seeking legal self-help in the early stages of an issue? What if the better path forward is a liability shield for providers, under which a provider that meets certain standards enjoys a shifting presumption in its favor that it is not liable for any harm that may come from use of the product?

 

. . . Sarah is intrigued, navigates to the website, and gets started. She uses the AI product to help her split the marital assets but, during that process, the artificial intelligence tool incorrectly identifies a marital asset as non-marital, and doesn’t include it in the marital settlement agreement it ultimately drafts . . .

 

Where a provider is self-certified pursuant to the proposed regulatory scheme and claims the certification on its site, it would be subject to a presumption that the work it produces (or the product itself, or both) has not engaged in the unauthorized practice of law. The provider of Sarah’s tool, however, was not certified. If Sarah sues the company that provided the generative AI tool that ultimately drafted the marital settlement agreement, the company would not have the benefit of the rebuttable presumption that, while an injury may have occurred, the company met the applicable standard of care. In the alternative, the company could be subject to a presumption that its conduct was willful, wanton, or reckless, because it failed to self-certify.

This proposed enforcement scheme allows a certified company to include a clear and unambiguous waiver of liability on its site, which would allow for a decided advantage at the summary judgment stage of a case. That waiver also provides significant notice to the consumer that, regardless of their intent in using the generative AI tool, the tool will not act as an attorney or practice law for them, which should give many users the information they need to get a second opinion on any documents or information with which they want to move forward in the justice system.

In the alternative, if a provider does not meet the benchmarks and does not self-certify, there is no presumption. It will be subject to a state or federal jurisdiction’s statutes and regulations regarding the unauthorized practice of law, as well as potential civil or criminal liability for the information it provides.

In the scheme proposed herein, there is no regulating body.[122] Given how little the legal field seems to understand technology generally, and generative artificial intelligence specifically, it may be a good thing not to have a formal body regulating these tools; on the other hand, courts are no better equipped to do so. It is up to the court to determine whether the provider adequately met the benchmarks for self-certification, and the initial burden is on the provider to prove compliance with those benchmarks. Sarah could, of course, work to overcome these presumptions. Perhaps the company’s statement regarding transparency is overblown. Perhaps Sarah can produce documentation that shows her data was not being deleted and, instead, was being used to further the company’s development of AI products. The case would then proceed as any other case, and Sarah would be entitled to damages reasonable for her particular situation under the laws and regulations of her local jurisdiction.

Some scholars have suggested that courts should bar self-represented litigants from using artificial intelligence until the user (or the court) can be assured of its utility or, in the alternative, that courts should allow pro se litigants to use vetted artificial intelligence products, but only if the use of those products is disclosed to the court.[123] This is challenging—and maybe impractical—because it will be extremely difficult for a court to find out that a self-represented litigant is using an “unsanctioned” form of artificial intelligence. Requiring disclosure is fine, but what happens if the litigant doesn’t disclose? It would be very hard to enforce a scheme where products must be proven to be useful and use must be disclosed to the court by the very people who may not understand the legal system in the first place.

Another proposal, of course, is to simply ban all generative AI products that may offer “legal advice,” and put enforcement in the realm of a total ban. There are significant problems with attempting to ban something altogether, not the least of which is the difficulty of enforcement. First, users would need to understand that they are receiving legal advice in order to report that advice to an authority that can issue a ban. That level of understanding is not likely for users of consumer-facing generative AI applications who may not have any familiarity with the legal system—which is what led them to use the product in the first place.

Next, if companies are required to instruct their generative artificial intelligence models to “not give legal advice,” they run into the problem that there is no clear definition of legal advice. “Many courts have attempted to set forth a broad definition of the practice of law. Being of the view that such is nigh onto impossible and may injuriously affect the rights of others not here involved, we will not attempt to do so here. Rather we will do so only to the extent required to settle the issues of this case.”[124] While courts are reluctant to define legal advice, those who work in professional responsibility (and even other attorneys) would likely say they recognize legal advice when they see it—but what if they don’t see it? Banning the offering of legal advice is not akin to other directives like eliminating bias or excluding harmful content; those two things (arguably) have universal meanings. Every determination of legal advice, law practice, or unauthorized practice of law is made post hoc, which makes it nearly impossible to stop before it happens. And generative AI will give legal advice no matter how well-trained or well-prompted; artificial intelligence is generative, not definitive.

In addition to these problems, banning generative AI from offering legal advice has the potential to stifle innovation in a massive and problematic way. The advent of generative AI is inspiring law schools to rethink their curricula,[125] offering a variety of potential functions in the healthcare sector like routine information gathering and diagnosis,[126] and detecting errors, alerting users to fraud, and monitoring transactions in the financial field.[127] A complete ban of generative AI in the legal field could have a trickle-down, chilling effect on other industries, damaging innovation and progress overall and significantly limiting access to justice.

V.  Through the Looking Glass[128]: Predictions for the Future

. . . Sarah presents her faulty marital settlement agreement to John, who agrees to take a look. Without Sarah’s knowledge, John has performed his own research using a generative AI product he found online, but the site John used has prominent disclaimers about legal advice, discloses the data it is gathering and how it is being used (and then deleted), provides transparency about the generative AI model on which it is built, and provides a seal of certification, so John believes it to be helpful and performing in a way that legal professionals have deemed trustworthy.

John compares his draft to Sarah’s and notices a glaring error. Sarah’s form doesn’t include the 25-foot fishing boat they purchased shortly after they were married. John wonders how Sarah could have missed such a major asset, and he immediately begins to question what else may be wrong and what her intentions were in providing him the document . . .

 

Even when lawyers are involved, situations like the one described between Sarah and John are common. Emotions are typically high during family law cases and, even where the parties seek to work together amicably, things can go awry. Sarah did not approach the use of the unregulated generative AI tool any differently than John. She likely added similar information to the tool that built the marital settlement agreement, and believed it would correctly classify their assets based on advertising and testimonials on the site. It’s probable that John took the same approach, but the site he self-selected was certified under the scheme provided above.

Were John and Sarah to go to a hearing to hammer out this marital settlement agreement, and were both to discuss their use of generative AI in helping them navigate their divorce, a court would be able to look at the regulatory guidelines in this article and the self-certification provided (and not provided) on the websites used by John and Sarah, and presume that the tool used by John has, if not more legitimacy, then more credibility in the eyes of the court. Sarah, then, could pursue action against the website she used for failing to use due diligence in providing legal services to the public. The presumptions regarding legal advice would not attach to the site, nor would the presumption that the providers of the site acted with the appropriate standard of care for a case like Sarah’s.

In all of this, whether in a hypothetical case like the one described herein or an actual case, it is hard to identify at what point these generative AI models are engaging in “problematic” behavior. They are typically generically marketed and, depending on what users ask them to do, may never cross over into actual legal advice, create documents, or provide information that impacts anyone. With that in mind, for future-looking applications it would be best to create a generic foundational model on which other generative AI products for self-represented litigants could be built. At a minimum, a generic foundational model should offer a robust disclaimer that recommends that users seek legal help from an organization or attorney and not rely solely on the generative AI product.

Today, because of the hullabaloo surrounding generative AI, most websites marketing these tools to self-represented people (or people at the early stages of navigating the system who may not yet have hired an attorney) do provide a disclaimer about legal advice. A model disclaimer, however, that takes into account the proposed regulations in this article could read something like this:

 

By accessing, viewing, or engaging with __________ service, this website, and anything it may produce, you agree that you understand that you are asking a legal question and should seek qualified help from an attorney or legal aid organization in your area. The _________ model/service is providing general information and not specific legal advice, and the information provided by you is not privileged and does not create an attorney-client relationship.

 

Any assertion that regulation, self-certification, model platforms, or even model disclaimers will solve the problems inherent with consumer-facing generative AI is probably oversimplifying the issue. People are using—and will continue to use—generative AI as a triage system for legal problems. The success of that triage does not depend on regulations or disclaimers, but on the transparency with which we discuss generative AI, its issues, and its opportunities—and the conversation is just beginning.



            [1].   Lewis Carroll, Through the Looking-Glass, and What Alice Found There (photo. reprt. 2013) (London, MacMillan & Co. 1882). Literary analysts and critics have claimed Carroll’s sequel to Alice in Wonderland symbolizes the conflict between the chaos of the real world and a rational ideal of what the world should be. Similarly, the conversations around the use of generative AI illustrate a conflict between the legal profession’s ideal of what perfect use of the platforms should be and the kind of chaos the platforms themselves have thrust upon the profession.

             *   Assistant Professor of Law, Stetson University College of Law. The author thanks Sam Harden for his excellence and inspiration as a co-author and Stetson University College of Law for its support of this Article. The attendees of the 2024 Legal Services Corporation’s Innovations in Technology Conference were aspirational in their pursuit of access to justice. Thanks to Professors Catherine Cameron, Alicia Jackson, Ellen Podgor, William Bunting, and Liz Boals for their thoughtful feedback and accountability throughout this process.

           **   J.D., Florida State University. Senior Innovation Manager, Pro Bono Net.

            [2].   Or they should, anyway. The horror stories of attorneys failing to check their sources for relevance (or existence) date back decades. Notorious examples range from Marcia Clark being sanctioned for failing to use a citator to check her sources during the O.J. Simpson trial in the 1990s (https://www.youtube.com/watch?v=QFOY0Glg0gU [https://perma.cc/74HN-2P3W]) to, today, attorneys citing cases that have been entirely made up by generative AI and failing to check if their sources exist. See Mata v. Avianca, Inc., No. 1:22-cv-01461 (S.D.N.Y.); Associated Press, Michael Cohen Says He Unwittingly Sent AI-Generated Fake Legal Cases to His Attorney, NPR (Dec. 30, 2023), https://www.npr.org/2023/12/30/1222273745/michael-cohen-ai-fake-legal-cases [https://perma.cc/EV4Q-UEZP].

            [3].   Carroll, supra note 1, at 24.

            [4].   See Ananya, Generative AI Grabbed Headlines this Year. Here’s why and what’s next, Sci. News (Dec. 11, 2023, 11:30 AM), https://www.sciencenews.org/article/generative-ai-chatgpt-safety [https://perma.cc/4V8T-N57K] (providing a brief, accessible explanation of generative AI and why it was such a major piece of news in 2023).

            [5].   Id.

            [6].   Ajay Bandi et al., The Power of Generative AI: A Review of Requirements, Models, Input-Output Formats, Evaluation Metrics, and Challenges, 15 Future Internet 1, 2 (2023) (describing generative artificial intelligence and aiming to investigate the fundamental aspects of generative AI systems, including requirements, models, input-output formats, and evaluation metrics).

            [7].   George Lawton, What is Gen AI? Generative AI explained, TechTarget, https://www.techtarget.com/searchenterpriseai/definition/generative-AI#:~:text=The%20technology%2C%20it%20should%20be,in%20the%201960s%20in%20chatbots [https://perma.cc/AFF3-SZDD] (last visited Feb. 1, 2025) (providing basic and easy-to-understand information about generative AI).


            [8].   David Meyer, The Cost of Training AI Could Soon Become Too Much to Bear, Yahoo! Fin. (Apr. 4, 2024), https://finance.yahoo.com/news/cost-training-ai-could-soon-101348308.html [https://perma.cc/8VCE-HWAF].

            [9].   See Ana Rico, Will Robots Take Our Jobs, BU Arts & Scis. (Aug. 28, 2023), https://www.bu.edu/cas/the-big-question-will-robots-take-our-jobs/ [https://perma.cc/9582-6ZVG] (discussing what the popularity of generative AI means for things like society, privacy, transparency, and employment).

         [10].   See Chris Louie, Issue #11: Do We Want A Self-Serve AI Future?, LinkedIn (Apr. 7, 2024), https://www.linkedin.com/pulse/issue-11-do-we-want-self-serve-ai-future-chris-louie-y6uxe/ [https://perma.cc/29Y9-LEZJ].

         [11].   See generally Kylie King & Aishwarya Ganguli, Impact of Artificial Intelligence (AI) on Entrepreneurship, PennState Soc. Sci. Rsch. Inst. (Mar. 20, 2024), https://evidence2impact.psu.edu/resources/impact-of-artificial-intelligence-ai-on-entrepreneurship/#:~:text=Artificial%20intelligence%20(AI)%20has%20created,a%20rapidly%20changing%20business%20environment [https://perma.cc/WBZ8-S7EB] (discussing key advantages and disadvantages artificial intelligence poses for prospective entrepreneurs and existing businesses).

         [12].   Urian B., A New York Woman Used ChatGPT to Write a Letter Citing Legalities to Get Landlord to Fix Her Apartment Appliance, Tech Times (updated Apr. 23, 2023), https://www.techtimes.com/articles/290713/20230423/ [https://perma.cc/WU34-QMBC].

         [13].   Jessica Klein, How ChatGPT Can Help Abuse Survivors Represent Themselves in Court, Fast Co. (Mar. 9, 2023), https://www.fastcompany.com/90861189/how-chatgpt-can-help-abuse-survivors-represent-themselves-in-court [https://perma.cc/C43G-5CKQ] (discussing ways in which generative AI products like ChatGPT can help certain populations navigate the legal process).

         [14].   South Africa’s First AI Lawyer is Here, Legal Interact, https://legalinteract.com/ai-lawyer/ [https://perma.cc/9TU9-99TD] (introducing a product, the first of its kind in South Africa, designed to help citizens gain access to justice and increase the dispensation of legal knowledge).

         [15].   AI Lawyer: Your Personal Legal AI Assistant, ailawyer, https://ailawyer.pro/ [https://perma.cc/8J8J-NXHC] (advertising an AI legal assistant for consumers and lawyers).

         [16].   Ask AI Lawyer – Free legal information online with the help of AI, AskAILawyer.com, https://www.askailawyer.com/ [https://perma.cc/2JQG-AQTT].

         [17].   Megan Cerullo, AI-powered “robot” lawyer won’t argue in court after jail threats, CBS News (Jan. 26, 2023), https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/ [https://perma.cc/DLF9-SMD2] (explaining how a CEO planned on using an AI powered bot to help self-represented litigants in the courtroom and the fallout that resulted from his public attempts to do so).

         [18].   Project Runway: I Started Crying (Bravo TV Nov. 21, 2007). During the introduction to the long-running televised fashion design competition, longtime host and supermodel Heidi Klum proclaims that in fashion “one day you’re in, and the next day you’re out.” Id. That remains true not only in fashion, but in legal technology.

         [19].   Ian Prasad Philbrick, The U.S. Regulates Cars, Radio, and TV. When Will It Regulate A.I.?, N.Y. Times (Aug. 24, 2023), https://www.nytimes.com/2023/08/24/upshot/artificial-intelligence-regulation.html [https://perma.cc/W4DD-48C9] (discussing the need for U.S. regulators to move quickly regarding regulating artificial intelligence and the likelihood of that actually happening).

         [20].   See generally General Data Protection Regulation (GDPR), intersoft consulting, https://gdpr-info.eu/ [https://perma.cc/AM9C-ZPP3].

         [21].   AI Rules: What the European Parliament Wants, Eur. Parliament (Oct. 21, 2020, 8:58 AM), https://www.europarl.europa.eu/news/en/headlines/society/20201015STO89417/ai-rules-what-the-european-parliament-wants [https://perma.cc/DN2A-2NWN] (describing how MEPs are shaping artificial intelligence legislation in the EU in an effort to boost innovation while protecting civil liberties and ensuring safety for those who use the products).

         [22].   Artificial Intelligence: Threats and Opportunities, Eur. Parliament (Sept. 23, 2020, 9:08 AM), https://www.europarl.europa.eu/news/en/headlines/priorities/artificial-intelligence-in-the-eu/20200918STO87404/artificial-intelligence-threats-and-opportunities [https://perma.cc/PM4X-QTW8] (explaining how artificial intelligence affects a person’s professional prospects and threatens a society’s security and democracy).

         [23].   Id.

         [24].   MEPs Ready to Negotiate First-Ever Rules for Safe and Transparent AI, Eur. Parliament (June 14, 2023, 12:52 PM), https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai [https://perma.cc/JB9F-WHRE] (expounding upon the EU rules about artificial intelligence and how those rules aim to protect health, safety, and fundamental rights and keep them from experiencing any harmful effects).

         [25].   Id.

         [26].   Id.

         [27].   Id.

         [28].   See id.

         [29].   See generally Jedidiah Bracy & Caitlin Andrews, EU Countries Vote Unanimously to Approve AI Act, iapp (Feb. 2, 2024), https://iapp.org/news/a/eu-countries-vote-unanimously-to-approve-ai-act [https://perma.cc/X7NV-CPGQ].

         [30].   Infra note 31.

         [31].   Alex Engler, The EU AI Act Will Have Global Impact, but a Limited Brussels Effect, Brookings (June 8, 2022), https://www.brookings.edu/articles/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/ [https://perma.cc/9MP9-28N6] (explaining the Brussels Effect and how, while artificial intelligence may have some important impacts on global markets, the EU alone will not be in a position to set a comprehensive new standard for artificial intelligence that will be used internationally).

         [32].   Anu Bradford, The Brussels Effect, 107 Nw. U. L. Rev. 1, 6 (2012) (examining the underestimated global power exercised by the European Union through its legal institutions and standards, and how the European Union has successfully influenced the rest of the world).

         [33].   Id.

         [34].   Id.

         [35].   Id.

         [36].   Id. at 11.

         [37].   Zeyi Yang, China Isn’t Waiting to Set Down Rules on Generative AI, MIT Tech. Rev. (May 31, 2023), https://www.technologyreview.com/2023/05/31/1073743/china-generative-ai-quick-regulation/ [https://perma.cc/VAV4-X5H2] (discussing China’s draft regulations as a mixture of aggressive intervention in technology and sensible AI restrictions and the way western countries should follow suit).

         [38].   Hui Xu et al., China’s New AI Regulations, Latham & Watkins Client Alert Commentary, Latham & Watkins LLP (Aug. 16, 2023), https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf [https://perma.cc/464W-SQH3] (citing Cyberspace Administration of China’s Office of Cyberspace Affairs).

         [39].   Id.

         [40].   Id.

         [41].   Id.

         [42].   Id.

         [43].   Id.

         [44].   Id.

         [45].   Id.

         [46].   Matt Sheehan, What the U.S. Can Learn from China About Regulating AI, Foreign Pol’y (Sept. 12, 2023, 3:04 PM), https://foreignpolicy.com/2023/09/12/ai-artificial-intelligence-regulation-law-china-us-schumer-congress/ [https://perma.cc/2QPL-2XJN] (discussing the things the United States can learn from China’s regulation of AI).

         [47].   Matt Sheehan, China’s AI Regulations and How They Get Made, Carnegie Endowment for Int’l Peace (July 10, 2023), https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made [https://perma.cc/JN4Y-MVSJ].

         [48].   Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).

         [49].   Id.

         [50].   National Defense Authorization Act for Fiscal Year 2024, Pub. L. No. 118-31 (2023).

         [51].   New York City Department of Consumer and Worker Protection, https://rules.cityofnewyork.us/wp-content/uploads/2023/04/DCWP-NOA-for-Use-of-Automated-Employment-Decisionmaking-Tools-2.pdf [https://perma.cc/LMP5-28YD] (establishing a rule that seeks to implement legislation required by the EEOC to monitor automated employment decision tools powered by artificial intelligence).

         [52].   Carroll, supra note 1, at 36.

         [53].   Bob Glaves, What Do We Mean When We Say Access to Justice?, Chi. Bar Found., https://chicagobarfoundation.org/bobservations/what-do-we-mean-when-we-say-access-to-justice/ [https://perma.cc/QYD5-TKA8] (defining access to justice and the roles of individuals and corporations in aiding in access to justice).

         [54].   Id.

         [55].   As stated explicitly by the DOJ, the principles are: “(1) Expanding Access – expanding access to legal systems by increasing the availability of legal assistance; (2) Accelerating Innovation – supporting research, data and innovative strategies to improve fairness and efficiency; and (3) Safeguarding Integrity – promoting policies and reforms that improve accountability.” Off. for Access to Just., U.S. Dep’t of Just., http://www.justice.gov/atj [https://perma.cc/743A-4DD8].

         [56].   Id.

         [57].   Id.

         [58].   Id.

         [59].   Id.

         [60].   Robert W. Gordon, Lawyers, the Legal Profession & Access to Justice in the United States: A Brief History, 148 Daedalus 177, 178 (2019) (examining the history of access to justice in the civil system and the role of attorneys and legal professionals in both promoting and restricting that access).

         [61].   Id. at 178–79.

         [62].   Id. at 179.

         [63].   Id.

         [64].   Id. at 180 (citing Reginald Heber Smith, Justice and the Poor: A Study of the Present Denial of Justice to the Poor and of the Agencies Making More Equal Their Position Before the Law With Particular Reference to Legal Aid Work in the United States (1919)).

         [65].   Fiscal Year 2023 Budget Request, Legal Servs. Corp., https://www.lsc.gov/our-impact/publications/budget-requests/fiscal-year-2023-budget-request#:~:text=LSC’s%20appropriation%20has%20increased%20only,over%20the%20last%20three%20decades [https://perma.cc/ZU34-W4CJ].

         [66].   Gordon, supra note 60, at 181.

         [67].   Id. at 185.

         [68].   See generally Ashley Krenelka Chase, Aren’t We Exhausted Always Rooting for the Anti-Hero? Publishers, Prisons, and the Practicing Bar, 56 Tex. Tech L. Rev. 525, 551–54 (2024) (arguing that the practicing bar should be held responsible for advocating for access to justice for all).

         [69].   See Diane Leigh Babb, Take Caution When Representing Clients Across State Lines: The Services Provided May Constitute the Unauthorized Practice of Law, 50 Ala. L. Rev. 535 (1999) (illustrating cases in which attorneys were found to have engaged in the unauthorized practice of law across state lines).

         [70].   Justice Needs and Satisfaction in the United States of America, The Hague Inst. for Innovation of L. 1, 7 (2021), https://www.hiil.org/wp-content/uploads/2019/09/Justice-Needs-and-Satisfaction-in-the-US-web.pdf [https://perma.cc/Z4ZV-9UTR].

         [71].   Id.

         [72].   National Center for State Courts, Civil Justice Initiative: The Landscape of Civil Litigation in State Courts 1, 31 (2015), https://www.ncsc.org/__data/assets/pdf_file/0020/13376/civiljusticereport-2015.pdf [https://perma.cc/DK33-599G].

         [73].   Id.

         [74].   Id.

         [75].   Anna E. Carpenter et al., Judges in Lawyerless Courts, 110 Geo. L.J. 509, 540–45 (2022) (theorizing that civil courts were not designed for unrepresented litigants and that judicial role failure is one symptom of the mismatch between courts’ lawyer-driven dispute resolution design and the social, economic, and interpersonal problems they are supposed to solve for users who have no legal training).

         [76].   Efforts have been made, for instance, to make the law more accessible to those who do not have access to legal materials or law libraries, or even the internet. See Ashley Krenelka Chase, Let’s All Be…Georgia? Expanding Access to Justice for Incarcerated Litigants by Rewriting the Rules for Writing the Law, 74 S.C. L. Rev. 389 (2022) (discussing methods for publishing and disseminating the law that would increase access to justice).

         [77].   Justice Needs and Satisfaction, supra note 70, at 70.

         [78].   Id.

         [79].   Drew Simshaw, Ethical Issues in Robo-Lawyering: The Need for Guidance on Developing and Using Artificial Intelligence in the Practice of Law, 70 Hastings L.J. 173, 179 (2018) (presenting an early exploration of artificial intelligence in the legal profession and identifying characteristics of what were then emerging services).

         [80].   DoNotPay started off as an app for contesting parking tickets and currently sells services that generate documents on legal issues ranging from consumer protection to immigration, via automation and artificial intelligence. Jaclyn Kelley, ROBOT LAWYER: App allows you to sue anyone with press of a button, Fox 5 DC (Oct. 18, 2018), https://web.archive.org/web/20191016012118/https://www.fox5dc.com/news/robot-lawyer-app-allows-you-to-sue-anyone-with-press-of-a-button [http://perma.cc/7EJH-AJW8]. In 2021, DoNotPay raised $10 million from investors and became a global phenomenon, causing many people to talk about the demise of lawyers and the rise of robolawyers. Gillian Tan, Robot Lawyer DoNotPay, Valued at $210 Million, Plans to Target Small Businesses, Ins. J. (Aug. 2, 2021), https://web.archive.org/web/20220920171015/https://www.insurancejournal.com/news/national/2021/08/02/625401.htm [https://perma.cc/SY7W-DJEY].

         [81].   See DoNotPay, https://donotpay.com/ [https://perma.cc/UP43-JHCP].

         [82].   Sara Merken, Lawsuit Pits Class Action Firm Against ‘Robot Lawyer’ DoNotPay, Reuters (Mar. 9, 2023, 3:10 PM), https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/ [https://perma.cc/78BK-3ADC].

         [83].   Faridian v. DoNotPay, Inc., No. CGC-23-604987 (Super. Ct. of Cal., San Francisco County 2023), https://fingfx.thomsonreuters.com/gfx/legaldocs/dwvkdzbjxpm/Faridian%20v.%20DoNotPay%20Complaint.pdf [https://perma.cc/DZQ8-T94Y] (explaining the alleged misconduct performed by DoNotPay and the ways in which it may or may not be engaging in unauthorized conduct).

         [84].   Id.

         [85].   Bobby Allyn, A robot was scheduled to argue in court, then came the jail threats, Health News Fla. (Jan. 25, 2023, 6:05 PM), https://health.wusf.usf.edu/2023-01-25/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats [https://perma.cc/UJB7-SF6B] (noting that DoNotPay is facing other legal challenges, some of which should not be ignored. The CEO of the company, Joshua Browder, took to Twitter to ask someone to argue their case using DoNotPay and an AI text generator, which would be able to observe the hearing through an earbud in the pro se litigant’s ear and make arguments. Browder quickly faced threats of criminal charges for his actions and backed off).

         [86].   The Florida Bar v. TIKD Servs. LLC, 326 So. 3d 1073, 1076 (Fla. 2021).

         [87].   Id.

         [88].   Id.

         [89].   Id. at 1081 (emphasis added).

         [90].   See generally id.

         [91].   Jim Ash, AI Tools & Resources Committee to Draft Rules and an Ethics Opinion, The Fla. Bar (Sept. 20, 2023), https://www.floridabar.org/the-florida-bar-news/ai-tools-resources-committee-to-draft-rules-and-an-ethics-opinion/ [https://perma.cc/6GE9-K92P].

         [92].   Proposed Advisory Opinion 24-1 Regarding Lawyer’s Use of Generative Artificial Intelligence – Official Notice, The Fla. Bar (Nov. 13, 2023), https://www.floridabar.org/the-florida-bar-news/proposed-advisory-opinion-24-1-regarding-lawyers-use-of-generative-artificial -intelligence-official-notice/ [https://perma.cc/Q9KF-39X2].

         [93].   See Chase, supra note 68, at 555–56.

         [94].   Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law: Executive Summary, State Bar of California Standing Committee on Professional Responsibility and Conduct 1, 3–5, https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf [https://perma.cc/K4M6-SXBW].

         [95].   See W. Va. State Bar v. Earley, 109 S.E.2d 420, 439–40 (W. Va. 1959) (holding that the State Compensation Commissioner, as an administrative agency or tribunal, did not have the power or authority to permit a non-attorney agent to act as an attorney in cases before him).

         [96].   See The Fla. Bar v. TIKD Servs. LLC, 326 So. 3d 1073, 1082 (Fla. 2021).

         [97].   Baron v. City of L.A., 469 P.2d 353, 358 (Cal. 1970) (en banc). This issue is particularly salient for those working in libraries as well as paralegals. The line for what constitutes UPL for these groups seems to be moving constantly, and people working in those professions often pontificate about what UPL means for them and how to avoid it. See generally Wendi Arant & Brian Carpenter, Where is the Line? Legal Reference Service and the Unauthorized Practice of Law (UPL)—Some Guides That Might Help, 38 Legal Reference Servs. Q. 235, 236 (1999); Ethical landmines on using nonlawyer staff, ABA (Nov. 2017), https://www.americanbar.org/news/abanews/publications/youraba/2017/november-2017/ensure-your-paralegals-ethics-align-with-yours-/ [https://perma.cc/A3EG-SUH3].

         [98].   See N.Y. Cnty. Laws. Ass’n v. Dacey, 28 A.D.2d 161, 162 (N.Y. App. Div. 1967), rev’d, 21 N.Y.2d 694 (N.Y. 1967).

         [99].   Caroline Shipman, Unauthorized Practice of Law Claims Against LegalZoom—Who Do These Lawsuits Protect, and is the Rule Outdated?, 32 Geo. J. Legal Ethics 939, 940–41 (2019) (examining three allegations of unauthorized practice of law involving LegalZoom around the United States, and the responses to each of those allegations).

       [100].   Fergal McGovern, Why does GenAI give different answers when you ask the same question?, LinkedIn (June 6, 2024), https://www.linkedin.com/pulse/why-does-genai-give-different-answers-when-you-ask-same-mcgovern-yrlqe/ [https://perma.cc/E5V3-RT3Q]. Notably, these companies are not seeking to be transparent about the way their generative AI products work, instead keeping the algorithms proprietary and using elaborate marketing to make people think they’re reliable, without having to prove it.

        [101].   Use the Gemini web app, Gemini Apps Help, https://support.google.com/gemini/answer/13275745?hl=en&co=GENIE.Platform%3DAndroid#:~:text=For%20some%20prompts%2C%20you%20can,draft%20you%20want%20to%20review [https://perma.cc/NE67-CQMT].

        [102].   Supra note 92.

        [103].   Tex. Gov’t Code § 81.101(c) (2011).

        [104].   Proposed Opinions, N.C. State Bar, https://www.ncbar.gov/for-lawyers/ethics/proposed-opinions/ [https://perma.cc/MDA5-3KF6] (last visited Sept. 17, 2024).

       [105].   Id.

       [106].   Supra note 92.

        [107].   Recommendations on Regulation & Use of Generative AI by Licensees, The State Bar of Cal. Comm. on Prof’l Responsibility & Conduct (Nov. 16, 2023), https://aboutblaw.com/bbpZ [https://perma.cc/8BD8-EAER].

       [108].   Id.

       [109].   Id.

       [110].   See generally Peter Henderson, Who is Liable When Generative AI Says Something Harmful, Stanford Univ. Human-Centered A.I. (Oct. 11, 2023), https://hai.stanford.edu/ news/who-liable-when-generative-ai-says-something-harmful [https://perma.cc/LF5E-TRBB] (discussing the ways in which courts will have to determine the liability of generative AI products which academics believe will likely be protected by the First Amendment).

       [111].   Carroll, supra note 1.

       [112].   In the wake of West Virginia v. EPA, 142 S. Ct. 2587 (2022) (holding that administrative agencies must point to clear congressional authorization when they issue politically or economically significant regulations), this seems especially true. See Louis J. Capozzi II, The Past and Future of the Major Questions Doctrine, 84 Ohio St. L.J. 191 (2023) (demonstrating that the major questions doctrine has a long and robust history and arguing that courts should not struggle when they seek to apply it).

        [113].   Kanan Dhru et al., Use of digital technologies in judicial reform and access to justice cooperation, Hague Inst. for Innovation of L. 1, 4–5 (2021), https://www.hiil.org/wp-content/uploads/2021/11/HiiL-Use-of-digital-technologies-in-judicial-reform-and-access-to-justice-cooperation.pdf [https://perma.cc/Q8X9-LGAA].

       [114].   This presumption would apply if the company offering an AI tool is sued in any action that requires a negligence standard, such as defects in design, or failure to provide an adequate warning as outlined in the Model Uniform Product Liability Act, 44 Fed. Reg. 62,714, 62,721 (Oct. 31, 1979).

        [115].   See What are AI Hallucinations?, infra note 117.

        [116].   Eva Wolfangel (@evawolfangel), chaos.social (Apr. 13, 2023, 9:43 AM), https://chaos.social/@evawolfangel/110191797774375124 [https://perma.cc/A7DG-VB8K]; Eva Wolfangel, Der hinterlistige Therapeut [The Devious Therapist], Zeit Online (Apr. 17, 2023, 8:09 PM), https://www.zeit.de/digital/2023-04/chatbot-psychologie-therapie-pharmaindustrie-manipulation [https://perma.cc/CV68-L4HH].

       [117].   What are AI Hallucinations?, IBM (Sept. 1, 2023), https://www.ibm.com/topics/ai-hallucinations [https://perma.cc/JBM4-6MJX].

        [118].   See, e.g., AI Principles, Google AI, https://ai.google/responsibility/responsible-ai-practices/ [https://perma.cc/M3TV-34UJ]; Patrick Farley et al., Overview of Responsible AI practices for Azure OpenAI models, Microsoft (Feb. 27, 2024), https://learn.microsoft.com/en-us/legal/cognitive-services/openai/overview [https://perma.cc/8KKA-PABC].

        [119].   Suffolk LIT Lab, https://spot.suffolklitlab.org/ [https://perma.cc/5HUY-KDLR].

       [120].   Id.

       [121].   Carroll, supra note 1, at 131–32.

        [122].   Given how little the legal field seems to understand technology generally, and generative artificial intelligence specifically, it may be a good thing not to have a formal body regulating these tools. On the other hand, courts are no better equipped to do so.

        [123].   Jessica Gunder, Why Can’t I Have a Robot Lawyer? Limits on the Right to Appear Pro Se, 98 Tul. L. Rev. 363, 406–10 (2024) (studying historic limitations on the right to appear pro se and considering how those limits impact litigants who are hoping to use artificial intelligence to assist them in court).

        [124].   State v. Sperry, 140 So. 2d 587, 591 (Fla. 1962).

        [125].   See generally Jonathan H. Choi et al., ChatGPT Goes to Law School, 71 J. Legal Educ. 387 (2022) (describing law professors’ process of using ChatGPT to generate answers on four real exams from the University of Minnesota Law School, and discussing implications for legal education and lawyering in light of the results of the exams).

        [126].   Niam Yaraghi, Generative AI in health care: Opportunities, challenges, and policy, Brookings (Jan. 8, 2024), https://www.brookings.edu/articles/generative-ai-in-health-care-opportunities-challenges-and-policy/#:~:text=The%20proliferation%20of%20generative%20AI,%2C%20diagnosis%2C%20and%20even%20treatment [https://perma.cc/7GRD-SNQN] (discussing the increased reliance on AI-assisted decision-making in the healthcare industry).

        [127].   Gen AI: Why finance should lead, KPMG (last visited Feb. 7, 2025), https://kpmg.com/us/en/articles/2024/gen-ai-why-finance-should-lead.html#:~:text=Many%20of%20Gen%20AI’s%20unique,execution%20for%20the%20entire%20enterprise [https://perma.cc/K8HS-EGCX] (opining that the utilization of AI in corporate finance makes good business sense).

       [128].   Carroll, supra note 1.
