CBA Record January-February 2026

THE LEGAL PROFESSION AND THE JUDICIARY IN THE AGE OF ARTIFICIAL INTELLIGENCE

Pre-GAI, rarely would sanctions be sought for defective citations by opposing counsel or imposed by a court; instead, these issues were handled without much fuss. Who hasn’t seen pleadings that referenced a case for a point it did not make, took dicta as the holding, or mangled case names, pinpoint cites, or quotations? The judiciary typically saw these flaws as signs of haste, inexperience, or inattention, not dishonesty or breaches of professional integrity. Pleadings were corrected, arguments disregarded, and the system endured. Relying on AI requires skepticism and discipline, not alarm.

The current anxiety about new research technologies has a historical parallel. In the 1970s, when electronic research first emerged, the profession fretted that keyword searching in place of subject-indexed digests would cause lawyers to overlook controlling authorities and lead to a decline in research proficiency. As a summer associate in the mid-1970s, I watched a young partner demonstrate the firm’s new electronic research terminal. After 10 minutes of the screen continually blinking at us without producing anything, he looked at me and said, “Computers will never replace law libraries.”

He was not alone. I recall smaller firms worrying about computer time billed by the minute at rates that would make a contemporary associate drop their laptop. During my final year of law school, we learned how to compose effective search terms (similar to today’s AI prompt training), something that young partner hadn’t yet mastered. Gradually, the profession adapted, the technology improved, and electronic research reached the same level of trust as the bound reporters. Unverified AI is the contemporary version of electronic research with one crucial difference: earlier errors dealt with what existed, but AI invents what never did.

Old Rules, New Trouble

Illinois currently lacks a rule governing the use of AI in legal filings.
Instead, our Supreme Court issued an AI Policy in January 2025, warning that “[u]nsubstantiated or deliberately misleading AI-generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated,” and stressing that existing ethical requirements for lawyers and judges remain unchanged. However, a policy is not a rule, so it does not create enforceable obligations or supply a standard for assessing conduct when that standard is violated.

In the few years since Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) received national attention, sanctions decisions have surfaced with unsettling frequency. In Mata, counsel submitted a brief that included cases fabricated by ChatGPT. The court imposed fines and required counsel to explain how the hallucinations occurred, not because the lawyers acted in bad faith, but because the duty to verify did not disappear on GAI’s arrival.

More recent examples in Illinois include the Second Appellate District’s decision striking a self-represented litigant’s brief and dismissing his appeal because it relied almost entirely on nonexistent authorities and quotations, leaving nothing substantive to review. Pletcher v. Village of Libertyville Police Pension Bd., 2025 IL App (2d) 240416-U. See also In re Baby Boy, 2025 IL App (4th) 241427 (counsel who relied on AI-generated citations without verification was fined, ordered to disgorge fees, and referred to the ARDC).

Nor are judges immune. After U.S. Senator Chuck Grassley sent a letter about two mishaps by federal judges, the Director of the Administrative Office of the U.S. Courts replied that his office was “aware anecdotally of incidents in which judges have taken official action…relating to the integrity of court filings in which the use of AI tools was in question.” 2025-10-22, AO to Grassley re Judiciary Use of AI.pdf.

A Better Approach: A Technology-Neutral Rule

Courts are experimenting with solutions. Some judges require verification certificates or issue standing orders reminding lawyers of their duties of candor and competence. Others allow correction. Then there are sanctions, which, so far, have been uneven: a correctable hallucination in one courtroom may be treated as sanctionable misconduct in another. The difficulty lies in separating the genuinely troublesome from the merely correctable, while upholding the values that guide sanction law.

A sounder approach is a technology-neutral rule that would bring the statewide uniformity Rule 137 has not. Uncertainty over how a court might react breeds motion practice; a rule, in contrast, protects lawyers by promoting predictability and limiting that practice. It offers guidance about when correction is appropriate and when sanctions are justified. Finally, it reduces the risk of disproportionate and inconsistent responses as judges and lawyers adjust to a new technology.
At a minimum, a new Illinois Supreme Court rule should cover four elements: sanctions, verification, correctable defects, and self-represented litigants.

1. Sanctions. The rule should emphasize proportionality that matches the nature of the defect, reserving severe consequences for pervasive occurrences that no reasonably careful lawyer could have overlooked. “Pervasive” means the fabricated content significantly alters the legal argument or appears in multiple parts of a submission, leading to misinterpretation or misunderstanding of the legal stance. Being “reasonably careful” entails checking the authenticity of citations, quotations, and similar assertions in a reliable source, such as print, a commercial database, or the source court’s website, before submitting the filing.

2. Verification. A rule would explicitly reinforce the filer’s obligation, when using GAI in research and drafting, to confirm that the cited authority exists, that it says what is claimed, and that it supports the point for which it is offered.

