CBA Record September-October 2024

LPMT BITS & BYTES
BY ANNE ELLIS AND STEPHEN MARSEILLE

The Bar Is High: Is Generative AI Ready for the Law?

For decades, lawyers were slow to embrace technology. We are laggards no longer. Lawyers have caught up with tech and are ready to use AI—but is AI ready for lawyers?

AI looms large among the uncertainties the legal profession faces. And generative AI—artificial intelligence that can generate content based on open-ended prompts, often producing work that resembles what a person might create—is becoming an ever-more-significant concern.

Traditional ways of practicing law at firms of all sizes have come under increasing pressure. Clients scrutinize bills more closely, driving firms to increase efficiency and cut down on nonbillable tasks. In this quest, technology, especially in the realms of legal research and document review, plays a pivotal role. Legal technology companies have responded by expanding their offerings to include workflows, forms, clause libraries, and practice guides, among other resources, to help streamline research and drafting.

AI’s promises are enticing: it will perform deep legal research and draft documents entirely through large language models (LLMs) or retrieval augmented generation (RAG). However, attempts by lawyers to use AI in the not-too-distant past did not go well. Attorneys have presented legal motions containing citations to cases that do not exist, or to cases that actually stand for the opposite of the legal proposition for which they were cited. “Hallucinations” has been the collective term for these inaccuracies. But hallucinations can also include results that are even more insidious than outright fabrications: those that appear to be based on valid cases and relevant secondary sources but that are misgrounded, irrelevant, or miss crucial context.

The big legal technology players have pounced on the promise of AI, rolling out their own solutions for AI-driven legal research and drafting. With those rollouts come big claims. For example, Casetext claimed, “Unlike even the most advanced LLMs, CoCounsel does not make up facts, or ‘hallucinate,’ because we’ve implemented controls to limit CoCounsel to answering from known, reliable data sources—such as our comprehensive, up-to-date database of case law, statutes, regulations, and codes—or not to answer at all.” Lexis, in announcing its Lexis+ AI solution, asserted, “Unlike other vendors, Lexis+ AI delivers 100% hallucination-free linked legal citations connected to source documents, grounding those responses in authoritative resources that can be relied upon with confidence.” Other companies in this space make similar claims.

Unfortunately, for most practicing attorneys, these claims can come up short. A recent study from Yale and Stanford (May 2024) tested many of the leading legal AI tools and found that they hallucinate—a lot: between 35% and 55% of the time, an AI answer to a legal query was incorrect or ungrounded in law. According to the study, these legal tools struggle to (1) accurately describe the holding of a case; (2) distinguish between legal actors (the court, attorneys, parties); or (3) understand the hierarchy of the legal system, including jurisdiction. RAG reduces hallucinations but does not eliminate them. Small wonder, then, that a survey from Bain (June 2024) found that lawyers are disappointed with the results from AI.

The bar is high for legal AI solutions. The law is not a field where “that’s about right” is acceptable. With such high miss rates, lawyers currently face several choices; all have drawbacks.

1. Scrutinize every citation, legal proposition, clause, memo, or brief that the AI generates. And check again to determine if anything is missing. This option adds significant labor and cost to a solution whose fundamental use case is cost savings.
2. Trust that the AI got it right. This is a high-risk move, as misrepresenting the law in court will give rise to discipline and sanctions at a minimum.
3. Wait for generative AI to improve.
4. Develop the firm’s own AI solution.
5. Increase training for law students and associates in generative AI.

The promise of generative AI is immense. Established players—and perhaps some surprising new entrants, including innovative solutions from law firms themselves—should persist in their efforts to develop a truly reliable, hallucination-free generative AI solution. However, we’re not there yet. Until then, a cautious approach is advised: leverage AI where it can add value without compromising the integrity of legal work.

As generative AI becomes ever more embedded in the legal profession, continued ethical guidance from regulatory bodies will be crucial (see the Practical Ethics column in this issue). Lawyers must stay informed and be proactive in addressing the opportunities and challenges that generative AI presents, and ensure that their practice remains innovative, accurate, cost-effective, and ethical.
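For readers curious what retrieval augmented generation actually does, the toy Python sketch below illustrates the general pattern, including the "answer only from known sources, or not at all" control the vendors describe. Everything here is a hypothetical illustration: the function names, the crude word-overlap scoring, and the two-sentence corpus are invented for this sketch, and a real system would use semantic embeddings and pass the retrieved context to an LLM.

```python
# Minimal sketch of retrieval augmented generation (RAG).
# All names and data are hypothetical illustrations, not any
# vendor's actual implementation.

def score(query, doc):
    """Crude relevance score: fraction of query words found in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query, corpus, k=2, threshold=0.3):
    """Return up to k documents whose relevance score clears the threshold."""
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    return [doc for doc in ranked[:k] if score(query, doc) >= threshold]

def answer(query, corpus):
    """Ground the answer in retrieved sources, or decline to answer.

    The declining branch mirrors the 'or not to answer at all' control;
    a real system would send the retrieved context to an LLM here.
    """
    sources = retrieve(query, corpus)
    if not sources:
        return "No sufficiently relevant authority found; declining to answer."
    context = "\n".join(sources)
    return f"Answer grounded in {len(sources)} source(s):\n{context}"

corpus = [
    "Smith v. Jones holds that a contract requires mutual assent.",
    "The statute of frauds requires certain contracts to be in writing.",
]
print(answer("what does the statute of frauds require", corpus))
```

The sketch also shows why RAG reduces hallucinations without eliminating them: retrieval only constrains which sources the model sees; it cannot guarantee the model characterizes a holding correctly or notices missing context.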

Anne Ellis, an Illinois lawyer and the Editorial Manager for The Council of State Governments Justice Center, is the Associate Editor of the CBA Record. Stephen Marseille is an Engagement Manager with Proactive Worldwide, Inc., focusing on technology and financial services; he is licensed in New York and Washington.

