Legal Ethics · AI Tools · Bar Rules

AI in Legal Practice: What Bar Rules Actually Say About Using ChatGPT in Your Cases

May 3, 2026 · 9 min read · By ShieldDrop Legal Research Team

Attorneys across every practice area are using AI. Some are winning with it. Others are getting sanctioned. The difference almost always comes down to whether they understand three ABA Model Rules — and one critical distinction that bar ethics committees keep emphasizing.

Rule 1.1: Competence Now Includes Technology

ABA Model Rule 1.1 has always required attorneys to provide competent representation. In 2012, the ABA amended Comment 8 to add that competence includes keeping abreast of "changes in the law and its practice, including the benefits and risks associated with relevant technology."

Roughly forty states have adopted some version of this language. In those jurisdictions, failing to understand how AI tools work — their hallucination risks, data retention policies, and output limitations — could itself constitute a competence violation.

Practical implication: You don't have to use AI. But you do have to understand it well enough to make an informed decision about whether it's appropriate for a given task — and to supervise its output if you do use it.

Rule 1.6: Confidentiality Is the Big One

ABA Model Rule 1.6(a) prohibits attorneys from revealing information relating to the representation of a client unless the client gives informed consent. Rule 1.6(c) further requires attorneys to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure" of such information.

When you paste client facts into a general-purpose AI tool like ChatGPT or Claude, you are transmitting that information to a third-party server. Depending on the platform and its data retention settings:

  • Your input may be retained for model training (if you haven't opted out)
  • It may be reviewed by human contractors as part of AI safety review processes
  • It may be accessible via future legal process served on the AI provider
  • It may be included in a data breach if the provider's infrastructure is compromised

New York City Bar Association Formal Opinion 2024-5 addressed this directly: attorneys must evaluate each specific AI tool against Rule 1.6, including reviewing the provider's terms of service and data processing agreements.

Practical implication: Use enterprise or API-mode AI tools that don't train on your inputs, or sanitize case materials before pasting (replace names with pseudonyms, remove case numbers). Never paste verbatim privileged communications into a consumer AI product.
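To make the sanitization step concrete, here is a minimal Python sketch of pseudonymizing case text before it leaves your machine. The `pseudonymize` function, the name-to-pseudonym mapping, and the docket-number pattern are all hypothetical illustrations, not part of any bar opinion or vendor tool; a real matter calls for a vetted redaction workflow with attorney review of the output, not ad-hoc regex.

```python
import re

def pseudonymize(text: str, name_map: dict[str, str]) -> str:
    """Replace known party names and docket-style case numbers with placeholders.

    name_map maps real names to pseudonyms, e.g. {"Jane Roe": "Client A"}.
    This is a sketch only; always review the sanitized output before pasting
    it into any third-party AI tool.
    """
    # Replace longer names first so "Jane Roe" is matched before a bare "Roe".
    for real, alias in sorted(name_map.items(), key=lambda kv: -len(kv[0])):
        text = re.sub(re.escape(real), alias, text, flags=re.IGNORECASE)
    # Mask docket-style case numbers such as "1:23-cv-04567" (pattern is illustrative).
    text = re.sub(r"\b\d+:\d{2}-[a-z]{2}-\d{3,6}\b", "[CASE NO.]", text,
                  flags=re.IGNORECASE)
    return text

sanitized = pseudonymize(
    "In 1:23-cv-04567, Jane Roe alleges that Acme Corp breached the agreement.",
    {"Jane Roe": "Client A", "Acme Corp": "Opposing Party"},
)
print(sanitized)
# In [CASE NO.], Client A alleges that Opposing Party breached the agreement.
```

Note the limits of this approach: regex cannot catch misspellings, nicknames, or contextual identifiers (a client's employer, a unique injury), which is one reason several bar opinions treat sanitization as a risk-reduction measure rather than a complete Rule 1.6 answer.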

Rule 5.3: You Are Responsible for AI Conduct

ABA Model Rule 5.3 requires attorneys to make reasonable efforts to ensure that the conduct of nonlawyer assistants is compatible with the lawyer's own professional obligations, and multiple bar opinions have now concluded that AI tools fall within this supervisory duty. The attorney who uses an AI tool remains responsible for its output, including any errors, hallucinations, or fabricated citations.

This duty has already produced sanctions. In Mata v. Avianca (S.D.N.Y. 2023), the court sanctioned attorneys who filed a brief citing six nonexistent cases generated by ChatGPT, finding that their failure to verify the citations violated Rule 11. The court emphasized that an attorney's gatekeeping duty to review filings extends to AI-generated content.

⚠️ Critical rule: Every citation generated by any AI tool must be independently verified in Westlaw, Lexis, or a reliable primary source before filing. Treat AI legal research as a starting point, not a finished product.

The Safe/Unsafe Line: A Practical Framework

Based on the current state of bar opinions across multiple jurisdictions, here's a working framework for categorizing AI use:

✓ Generally Safe
  • Drafting templates with no client data
  • Research starting points (verified independently)
  • Organizing your own notes and thinking
  • Generating cross-exam question frameworks from sanitized facts
  • AI tools with zero data retention and API-mode processing
  • Summarizing public court records

✗ Proceed With Caution
  • Pasting real client names and facts
  • Submitting AI-generated citations without verification
  • Using consumer AI tools (free ChatGPT, free Claude)
  • Transcribing confidential calls with tools that retain audio
  • Filing AI-drafted documents without attorney review
  • Using AI for jurisdiction-specific procedural guidance

Key State-Specific Guidance

Several state bars have issued formal opinions that go beyond the ABA model rules:

  • Florida (Florida Bar Ethics Opinion 24-1, 2024): Attorneys must assess AI data retention before use. Client consent may be required before using AI tools that retain data.
  • California (State Bar of California Practical Guidance on AI, 2023): Emphasizes that generative AI "hallucinations" create Rule 3.3 candor risks when AI output is filed without verification.
  • New York (NYC Bar Formal Op. 2024-5): Sets out specific disclosure obligations when AI is used to draft client-facing communications; discusses engagement letter language.
  • Texas (Texas Center for Legal Ethics): AI use in discovery is governed by Tex. R. Civ. P. 196.4 for electronically stored information; AI-assisted review must meet the same standards as manual review.

How ShieldDrop Is Built for Bar Compliance

ShieldDrop AI tools (CaseBrief, TrialMind, LexAI, VaultDictate) are designed with Rule 1.6 compliance in mind:

  • Zero retention: Case materials, transcripts, and AI inputs are processed in RAM and discarded immediately — never written to disk or stored.
  • No training on your data: API-mode processing means your inputs are never used to train any model.
  • Explicit AI disclaimer on every output: Every analysis includes a disclaimer reminding attorneys to verify all citations and legal conclusions independently.
  • Pseudonym-first design: We recommend and document the practice of using pseudonyms in AI tools — reducing the Rule 1.6 exposure of even the worst-case scenario.
Disclaimer: This article is for general informational purposes only and does not constitute legal advice or an ethics opinion. Bar rules vary by jurisdiction. Consult your state bar's ethics hotline or a legal ethics specialist before relying on any AI tool in client matters.