FasterOutcomes Thought Leadership Series: Insights from our EVP of Sales, Tim Sawyer.
The Ethical AI Dilemma Facing Law Firms:
As I travel across the country speaking with attorneys—from CLE panels to firm workshops—I hear the same question again and again:
“Is it okay to upload medical records into ChatGPT if I’m just trying to speed things up?”
It’s an understandable question. Legal professionals are under growing pressure to do more with less: move faster, manage higher caseloads, and deliver better results.
This is especially true in practice areas like personal injury, family law, and healthcare law, where medical records, chronologies, and treatment summaries are critical—but time-consuming—to process.
It’s no surprise that some attorneys turn to tools like ChatGPT, NotebookLM, or Grok. They’re fast, accessible, and promise immediate insights.
But here’s the catch: uploading sensitive client information—especially anything related to health data—into a public AI platform could expose you to serious ethical, legal, and professional risks.
What Counts as Protected Health Information (PHI)?
Protected Health Information (PHI) includes any details about a client’s medical history, diagnoses, treatment, or providers—especially when tied to a name, case number, or other identifying information. Under HIPAA, this data is highly regulated.
Even though most attorneys aren’t “covered entities” under HIPAA, receiving medical records from providers, storing them electronically, or using them in litigation may make you a business associate.
That brings serious obligations:
- ✅ End-to-end encryption
- ✅ Controlled access
- ✅ Secure data storage
- ✅ Disclosure and breach protocols
- ✅ Full audit trails
Most open AI tools simply don’t offer these safeguards—leaving you exposed.
Why SOC 2 Compliance Matters
SOC 2 Type II is a gold standard audit that assesses how a service provider handles data across key areas like security, confidentiality, and privacy.
If the AI platform you’re using hasn’t passed a SOC 2 audit, you simply can’t be sure:
- Where your data is stored
- Who has access to it
- Whether it can be deleted or traced
- Whether it meets your ethical responsibilities under Rule 1.6
When handling client health information, “I didn’t know” won’t protect you.
Real-World Examples: When AI Goes Wrong
Attorneys have already faced real consequences for misusing AI tools:
- Mata v. Avianca – Lawyers submitted a brief with fake citations created by ChatGPT. The court fined and sanctioned them.
- 2025 Utah case – A lawyer cited a fabricated case (“Royer v. Nelson”) generated by ChatGPT, violating Rule 3.3.
- Court warnings – Several courts now warn that AI-generated content used without review could constitute malpractice or ethics violations.
Even private use—without consent or oversight—can jeopardize your license and your reputation.
“Premium” ≠ “Private”
A common misconception: upgrading to ChatGPT Plus makes the platform safe for confidential legal material.
That’s just not true.
Let’s compare:
| Feature | Open AI Tools (ChatGPT, NotebookLM, etc.) |
|---|---|
| HIPAA-Compliant | ❌ No |
| SOC 2 Certified | ❌ No |
| Encryption | ❓ Limited or unclear |
| Data Retention | ❗ May retain & use inputs |
| Audit Trails | ❌ None |
These tools are designed for speed—not confidentiality. That difference could cost you everything.
A Wake-Up Call in the Field
At a recent CLE in Dallas, a family law attorney shared this:
“I pasted in all my handwritten notes and medical docs to help summarize a prenup. It worked fast, but then I realized—what if that data is stored somewhere? What if it leaks?”
That moment of realization? That’s the kind of wake-up call more attorneys need.
The Bottom Line
Attorneys are right to seek efficiency. Medical chronologies and case prep are massive time sinks.
But using open, general-purpose AI to process client health data is a risk you don’t need to take.
Before uploading anything into an AI tool, ask yourself:
- Is this tool built to handle PHI?
- Does it meet my ethical obligations?
- Can I explain how data is stored, accessed, and protected?
If your answer is “I’m not sure,” it’s time to pause and reassess.
Because no shortcut is worth your license—or your client’s trust.
Sources:
- “Lawyers Using AI Keep Citing Fake Cases in Court. Judges Aren’t Happy” – Washington Post
- “Utah Lawyer Sanctioned After Citing AI-Fabricated Case ‘Royer v. Nelson’” – The Guardian
- “AI Hallucinations in Court Filings Spell Trouble for Lawyers” – Reuters
Stay ahead with AI-driven legal innovation. Read more Smarter Insights for Faster Outcomes on the FasterOutcomes blog. Connect with us on LinkedIn, follow us on Instagram, and subscribe to our monthly newsletter.


