The AI Disclosure Rules: Which Courts Now Require Lawyers to Admit AI Use

A roundup for practitioners of the new orders and ethics opinions on generative AI
Generative AI is now part of everyday law practice, but courts are moving faster than many firms to set boundaries. What’s emerging is a patchwork of judge‑specific orders, local rules, and ethics opinions that range from outright bans to targeted “tell us what you used and how” disclosures. Below is a concise survey of where disclosure is required now—and how those requirements differ—so you can build a compliance checklist that travels with your cases.
Three basic models are emerging. First, a small number of judges have simply barred AI‑assisted drafting in filings. Second, many courts allow AI but require disclosure (often coupled with a Rule 11‑style certification that a human verified authorities and facts). Third, some courts go further and direct lawyers to identify the specific tool and describe how it was used. Layered on top are bar ethics opinions that, while not court‑facing, impose client‑facing disclosure and consent duties when lawyers use AI tools.
The outliers: flat bans (with narrow carve‑outs)
In the Northern District of Ohio, Judge Christopher A. Boyko has issued a standing order forbidding any attorney or pro se party from using artificial intelligence “in the preparation of any filing,” an unusually strict approach. Contemporary surveys of judicial AI orders quote the order and link to the court’s posting.
In the Southern District of Ohio, similar concerns spurred restrictions. Reporting on Judge Michael J. Newman’s standing order explains that it prohibits the use of AI to prepare filings while clarifying that traditional legal search engines remain permissible. The order has been widely discussed by academics and bar groups and quoted directly in coverage.
These bans are the exception, but every lawyer should confirm the rules in each jurisdiction where they practice.
The new normal: disclose and certify
Most federal judges who have acted are not banning AI; they’re demanding transparency and human accountability.
In the Eastern District of Pennsylvania, Judge Michael M. Baylson requires any lawyer or pro se party who used AI in preparing a filing to disclose that use and certify that every citation to law or the record has been verified as accurate. The court’s one‑page order leaves little room for doubt.
Northern District of Illinois judges have issued similar directives. Magistrate Judge Jeffrey Cole’s standing order requires parties to disclose in the filing that an AI tool was used and stresses that reliance on AI does not itself satisfy Rule 11’s “reasonable inquiry.”
In the Southern District of Ohio, Judge Jeffery P. Hopkins requires a separate declaration if “any portion” of a motion, brief, pleading, or other filing was generated with generative AI. Counsel must certify they reviewed the source material, verified accuracy, and complied with Rule 11—on pain of sanctions for noncompliance.
The Northern District of California has moved toward granular, practical guidance. Magistrate Judge Peter H. Kang’s comprehensive civil standing order distinguishes generative AI from commonplace tools like Westlaw, Lexis, word processors, and spell‑checkers, which are not the target. It requires that any brief or pleading whose text was created or drafted with any AI tool be identified as such, and it directs counsel to keep records sufficient to identify which portions were AI‑drafted.
Local rules that name the tools
For some courts, disclosing that AI was used and certifying Rule 11 compliance is not enough.
Magistrate Judge Gabriel A. Fuentes (N.D. Ill.) requires more than simple disclosure. He directs all parties who used a generative AI tool “to conduct legal research or to draft documents for filing” to disclose that use and to identify “the specific AI tool and the manner in which it was used.”
In Missouri, the Clay County Circuit Court (Seventh Judicial Circuit of Missouri) adopted Local Rule 3.3, “Disclosure of Artificial Intelligence Use.” Anyone filing a pleading who used a generative AI tool to conduct the legal research referenced in the pleading or to draft the filing must disclose that AI was used, identify the specific AI tool, and state how it was used. The rule also reminds filers that Missouri Supreme Court Rule 55.03(c) certifications still apply.
Ethics opinions: client‑facing disclosure and consent duties
Even where a judge has not spoken, bar authorities are issuing guidance that imposes its own duties on lawyers who use AI.
The American Bar Association’s Formal Opinion 512 (July 29, 2024) synthesizes Model‑Rule duties and explains that lawyers using generative AI must safeguard confidentiality, supervise nonlawyer assistance (including vendors), communicate adequately with clients, and charge reasonable fees. As part of that framework, lawyers should not disclose client confidential information to AI providers without informed client consent or other applicable authorization.
Florida’s Ethics Opinion 24‑1 likewise emphasizes confidentiality and transparency. It advises that chatbots interacting with clients or third parties must include a clear disclaimer that the chatbot is an AI program and not a lawyer, and it instructs lawyers to investigate an AI tool’s data retention and sharing practices before use. The opinion also cautions against billing practices that mask AI‑driven efficiencies.
Practical takeaways for litigators
First, check the judge’s page before you file. Orders like Judge Baylson’s (E.D. Pa.), Judge Cole’s and Judge Fuentes’s (N.D. Ill.), Judge Hopkins’s (S.D. Ohio), and Judge Kang’s (N.D. Cal.) each impose different disclosures—ranging from simple notice to identification of the tool and a sworn declaration with a Rule 11 certification. The differences are not cosmetic; they change what you must file, and when.
Second, know where bans apply (rare as they are) and what the carve‑outs are. Many of the bans carve out traditional research tools such as Google, Westlaw, and Lexis. But when those products use AI, whether explicitly or under the hood, an attorney may still run afoul of the local rule.
Third, label and log AI assistance when required. In N.D. Cal., you must identify AI‑drafted text in the filing and be able to pinpoint which portions came from AI; in Missouri’s Clay County, you must name the tool and describe how it was used. Don’t wait until the night before a deadline to reconstruct your process.
Fourth, use a human‑in‑the‑loop review and be prepared to certify it. Orders like Judge Baylson’s and Judge Hopkins’s explicitly tether AI use to a human verification duty; failing to meet it risks sanctions or a stricken brief.
Fifth, treat client‑facing duties as mandatory everywhere. Even where no court order applies, ABA Formal Opinion 512’s framework for confidentiality, informed client consent, supervision, and reasonable fees is sound practice across the U.S.
A moving target, but a discernible direction
Judges are still testing different approaches. Some appellate and statewide efforts have paused or focused internally (e.g., judicial‑branch AI policies), while trial judges continue to set chamber‑specific expectations. The trend line points away from blanket bans and toward transparency plus human accountability, often with tool‑specific identification and record‑keeping obligations.
Final word
For now, the safest default is simple: if AI helped you draft or research a filing, assume you may need to say so—and to certify what a human did. Build a short, repeatable protocol for (1) documenting your use, (2) verifying authorities and facts, and (3) labeling AI assistance where required. The particulars will vary by jurisdiction, but the through‑line—transparency backed by human accountability—is already here.