Privileged Until It Isn’t: LLMs and the Confidentiality Trap

AI is permitted for lawyers in almost every jurisdiction—but all the rules of ethics still apply, and the technology makes compliance more than a little complicated. Specifically, the duty of confidentiality goes far beyond clicking "no training" in your favorite public chat tool. The rules require a fact-based analysis and a working understanding of the technology in use.
The Danger
OpenAI CEO Sam Altman has made it quite explicit: there’s no legal confidentiality for consumer ChatGPT chats today. With a valid subpoena, a provider can be compelled to produce your conversations. Further, while it is well known that large corporations can misuse customer data, it is not clear that a provider's data policies will shield a lawyer from responsibility if the provider violates those policies. Meanwhile, a court order now requires OpenAI to retain consumer ChatGPT chats even after users delete them, and the hold extends to standard API calls made through third-party products. In other words, chats with GPT, direct or indirect, are preserved even if you delete them. The risk is anything but hypothetical.
TL;DR (if you're too busy)
- AI use is permitted, but not magic: ABA Formal Op. 512 and state/city bars green‑light GenAI use subject to competence (1.1), confidentiality (1.6), supervision (5.3), communication (1.4), candor, and reasonable fees. New tools, same rules.
- Three different levers: Training (model improvement) ≠ Retention (what’s stored, for how long, and who can access it) ≠ Legal process (subpoenas, preservation orders). Turning off training does not immunize prompts/outputs from retention or discovery.
- Safer lanes exist: Keep client data off consumer products and develop internal models where client data stays local.
1) Yes, you can use AI. No, the rules didn’t change, mostly
ABA Formal Opinion 512 (July 29, 2024) confirms that lawyers may use GenAI if they comply with their existing duties, and most states have followed suit with similar opinions. Note that a few courts, including federal Judge Christopher Boyko in Ohio, disagree. The general green light to use AI tools is welcome, but there is no simple one-step compliance path when it comes to confidentiality. As the ABA puts it:
Before lawyers input information relating to the representation of a client into a GAI tool, they must evaluate the risks that the information will be disclosed to or accessed by others outside the firm. Lawyers must also evaluate the risk that the information will be disclosed to or accessed by others inside the firm who will not adequately protect the information from improper disclosure or use because, for example, they are unaware of the source of the information and that it originated with a client of the firm. Because GAI tools now available differ in their ability to ensure that information relating to the representation is protected from impermissible disclosure and access, this risk analysis will be fact-driven and depend on the client, the matter, the task, and the GAI tool used to perform it.
2) Why consumer chat ≠ a Google search
Even lawyers tend to overshare in chat compared to search. As Sam Altman puts it, people tell ChatGPT “the most personal sh* in their lives.” That's all well and good, but lawyers cannot share any personal sh* about their clients without strict controls. AI also carries more traps than simple search: there are no wholesale rules requiring companies to preserve search history (including results), no rules forbidding the use of search in court pleadings, and no rules requiring disclosure of research methods; AI is the emerging exception on each count. While AI is superior to search in many ways, all lawyers must take care with their methods and the information they provide to the AI tool.
3) The potential traps: training vs. retention vs. subpoenas
- Training: Will the provider use your data to improve the model? Enterprise accounts almost always default to no, and consumer accounts often offer a setting to opt out of training on your chats. Whatever tool you use, confirm that no model training is permitted on your inputs or outputs.
- Retention: Will your prompts/outputs/logs be stored, and for how long? Even with training off, logs may persist for abuse monitoring, support, legal holds, or other purposes the provider reserves for the future.
- Legal process: If data exists within a provider’s systems, it can be reached by preservation orders or subpoenas. The current OpenAI hold reaches consumer tiers and much ordinary API traffic, and the analysis gets tricky when a third-party product uses the models through an API.
4) Privilege isn’t always a reliable force field
Lawyers enjoy broad privileges when researching and working on behalf of their clients, and those privileges often extend to third parties working at the attorney's direction. But there is no general rule extending privilege to the use of commercial technologies, whether search or AI chat. Further, even if your chats were deemed privileged, that would not stop malign actors from pursuing information recklessly placed in public or near-public spaces. Until last month, ChatGPT chats shared by link could be indexed by Google and thus become public. Even if such a chat were later held privileged, that ruling would not remedy the harm.
5) Concrete solutions (shortlist)
- Enterprise chat: Enterprise offerings from most large providers may be compliant for general research tasks. However, lawyers should avoid inputting specific client information without due diligence on the tool's privacy and data retention policies.
- Legal‑specific platforms: Large legal-specific platforms should be compliant for general research tasks, but may still pose risks when used for document production or review involving client-specific information.
- Local models: Local models maintained by the lawyer or firm are the gold standard for matters involving sensitive client information, because prompts and outputs never leave hardware the firm controls.
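As one illustration of what "local" can mean in practice, a firm might run an open-weight model behind its own firewall. The fragment below is a hypothetical sketch, assuming the open-source Ollama server (its default port is 11434); the file paths and service name are illustrative, not a vetted deployment:

```yaml
# Hypothetical docker-compose.yml for an on-premises model server.
services:
  llm:
    image: ollama/ollama
    ports:
      - "127.0.0.1:11434:11434"   # bind to localhost only; no outside access
    volumes:
      - ./models:/root/.ollama    # model weights and logs stay on firm disk
```

With a setup like this, retention, training, and legal-process exposure are all governed by the firm's own policies rather than a vendor's terms of service.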
6) What to tell your clients (generally)
- Do not put names, SSNs, health/financial specifics, trade secrets, or live case facts into consumer chatbots.
- If AI could help with research, use an approved or enterprise portal.
- Your AI chats may be subject to discovery. Coordinate research on relevant topics with us to preserve privilege where appropriate.
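For firms that do allow approved AI tools, the spirit of these client rules can be partially enforced in software. Below is a minimal, hypothetical sketch of scrubbing obvious identifiers before a prompt leaves the machine; the `scrub` helper and its patterns are illustrative assumptions, not a complete PII filter, and a real deployment would need a vetted redaction library plus matter-specific terms:

```python
import re

# Illustrative patterns only -- not exhaustive PII detection.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(prompt: str, client_names: list[str]) -> str:
    """Replace known identifiers with placeholder tokens before the
    prompt is sent to any external AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    for name in client_names:
        # Case-insensitive match against the client names on file.
        prompt = re.sub(re.escape(name), "[CLIENT]", prompt, flags=re.IGNORECASE)
    return prompt

print(scrub(
    "Draft a demand letter for Jane Doe (SSN 123-45-6789, jane@example.com).",
    ["Jane Doe"],
))
# -> Draft a demand letter for [CLIENT] (SSN [SSN], [EMAIL]).
```

A gate like this does not replace the fact-based analysis the ABA opinion requires, but it narrows the blast radius when someone pastes first and thinks second.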