CBA Record January-February 2026

THE LEGAL PROFESSION AND THE JUDICIARY IN THE AGE OF ARTIFICIAL INTELLIGENCE

The Illinois Framework: Rules That Already Apply

The Illinois Supreme Court's AI Policy (https://www.illinoiscourts.gov/News/1485/Illinois-Supreme-Court-Announces-Policy-on-Artificial-Intelligence/news-detail/), adopted in January 2025, explicitly states that the Illinois Rules of Professional Conduct and the Code of Judicial Conduct "apply fully to the use of AI technologies." Three rules are particularly salient:

• Rule 1.1 imposes a duty of competence that extends to lawyers' use of technology. Comment 8 requires Illinois lawyers to understand how AI tools work, what they can and cannot do reliably, and when human oversight is essential.

• Rule 1.6 prohibits lawyers from revealing confidential client information without the client's informed consent. When a lawyer inputs client information into a public AI platform that shares data with third parties or uses it to train models, that lawyer risks breaching Rule 1.6.

• Rule 5.3 addresses supervision of non-lawyer assistants. AI tools function, in many respects, like non-lawyer personnel: they perform tasks under the lawyer's direction, but the lawyer remains responsible for the output.

Remember that while AI can streamline legal work, it cannot replace human judgment. The Illinois Supreme Court's policy further reinforces that "attorneys, judges, and self-represented litigants are accountable for their final work product" and must "thoroughly review AI-generated content before submitting it."

For further helpful guidance on using AI in practice, Illinois lawyers should also review The Illinois Attorney's Guide to Implementing AI (https://iardc.org/Files/Implementing-AI-Guide), which the ARDC issued in October 2025.

Connecting the Dots and Looking Ahead

Law schools, firms, and courts operate independently, but their responsibilities overlap. Each should reinforce the same core principles: comply with legal and ethical requirements when using AI; be transparent about AI usage; implement clear, well-defined policies about how personnel are to use AI; identify and mitigate risks; and require ongoing, comprehensive AI training.

AI's capabilities, and our temptation to rely on it, will keep growing, but core legal work demands human judgment, advocacy, and ethics. AI may draft briefs or summarize cases, but legal professionals must be the ultimate decision-makers who retain control over legal analysis and strategy. With the right boundaries, AI can strengthen the practice of law. The rest is up to us.

Joel Bruckman is a partner at Smith Gambrell & Russell, specializing in cybersecurity, data privacy, and e-discovery; he serves as Vice Chair of the Executive Committee for AI 2035: The Legal Profession and the Judiciary in the Age of Artificial Intelligence.

AI Guardrails in Law Firms and Legal Practice

Many law firms are experimenting with AI to assist with tasks such as drafting contracts, summarizing transcripts, and managing e-discovery, and they are implementing it at a fast pace. However, without clear AI policies, that speed creates risk for the firm and its clients. This is why firms should adopt written AI policies explicitly defining permissible uses, prohibited uses, and lines of accountability. A sound policy addresses several key areas:

• Bar input of confidential or privileged data into public AI platforms. Tools such as ChatGPT, Claude, Gemini, and others may use inputs for training or share them with third parties, even when terms of service claim otherwise. Lawyers must verify and understand vendor data-sharing practices or run the risk of violating Rule 1.6.

• Mandate that a licensed attorney review all AI-assisted work product before release. This is a professional duty: citations, facts, and analysis must be checked, as lawyers remain "accountable for their final work product" per the Illinois Supreme Court's AI Policy, no matter the tool's role.

• Stratify risk. Summaries of public opinions may be low risk, while drafting substantive motions or legal advice is high risk, demanding heightened scrutiny to prevent sanctions or malpractice.

• Always inform clients about AI practices. Lawyers must inform clients when AI is used, why, and what is being done to mitigate potential risks. Transparency builds client trust. Consider using engagement agreements that address the firm's AI use; for example, the ARDC Guide contains a sample informed client consent form.

• Provide ongoing training and monitor AI use. Firms should not only provide periodic AI training for staff and attorneys, but also monitor third-party vendors and audit their use of AI, recognizing that technological and data practices evolve rapidly.

—By Joel Bruckman

