Federal Courts Split on AI and Legal Privilege as Supreme Court Settles the Copyright Question
Two federal courts reached opposite conclusions on whether AI-generated legal materials are protected from discovery, while the Supreme Court declined to extend copyright to AI-authored works, leaving the legal profession to navigate a patchwork of emerging rules with no clear federal framework.
Overview
Early 2026 marks a turning point for artificial intelligence in the American legal system. Within the span of a few weeks, the Supreme Court declined to extend copyright protection to AI-generated works, two federal district courts issued contradictory rulings on whether AI-generated legal documents are shielded from discovery, and the American Bar Association convened its largest-ever conference on legal technology. The convergence of these developments exposes a profession caught between rapid AI adoption and a legal framework that has not kept pace.
According to Clio’s Legal Trends Report, 79 percent of legal professionals now use AI tools daily, up from 19 percent just two years earlier. Yet 44 percent of law firms still lack a formal AI governance policy, a gap that these court decisions have made potentially costly.
The Heppner Ruling: AI Chats Are Not Privileged
On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York delivered what legal scholars have called a first-of-its-kind ruling. In United States v. Heppner, a securities and wire fraud prosecution, the defendant Bradley Heppner had used the consumer version of Anthropic’s Claude to prepare approximately 31 documents analyzing his defense strategy after receiving a grand jury subpoena. He later shared these materials with his defense attorneys and claimed they were protected by attorney-client privilege and the work product doctrine.
Judge Rakoff rejected both claims in a bench ruling, followed by a written memorandum on February 17. The court’s reasoning rested on three pillars. First, Claude is not an attorney, holds no law license, owes no fiduciary duty, and is not subject to professional discipline. The court emphasized that attorney-client privilege depends on a trusting human relationship that simply does not exist between a user and a chatbot. Second, Anthropic’s privacy policy permits retaining user inputs, using them for model training, and disclosing them to third parties, destroying any reasonable expectation of confidentiality. Third, Heppner created the documents on his own initiative rather than at his counsel’s direction, defeating his work product claim.
The ruling sent shockwaves through the legal community. As the court pointedly noted, the discussion of legal issues between two non-attorneys is not protected by privilege. But Rakoff left a significant door open: had counsel directed Heppner to use the AI tool, it might have functioned in a manner analogous to a professional agent acting under attorney supervision, potentially preserving protection.
Warner v. Gilbarco: The Opposite Answer
On the same day as Rakoff’s bench ruling, Magistrate Judge Anthony Patti of the Eastern District of Michigan reached the opposite conclusion. In Warner v. Gilbarco, a pro se employment discrimination case, the defendants sought discovery of all documents reflecting the plaintiff’s use of ChatGPT in preparing her litigation materials.
Judge Patti denied the motion, holding that AI-assisted work product qualifies for protection under Rule 26(b)(3)(A) of the Federal Rules of Civil Procedure. The court characterized AI tools as analogous to software rather than as third parties capable of receiving privileged information. Using AI tools for legal preparation, the court found, is analogous to traditional work-product-protected activities, and no case cited to the court had ordered production of a litigant’s internal mental impressions simply because they had been processed through software.
Critically, the court held that under Sixth Circuit law, disclosing information to ChatGPT did not constitute waiver because such waiver requires disclosure to an adversary or in a manner likely to reach an adversary’s hands. An AI platform, in this framework, is simply incapable of being an adversary.
Why the Courts Diverged
The Heppner and Warner decisions are not technically inconsistent, but their analytical frameworks are fundamentally incompatible. Rakoff treated AI platforms as third-party entities that receive information, collapsing the distinction between attorney-client privilege and work product into a single vulnerability. Patti maintained separate, more protective standards for work product and refused to treat software as a party.
The factual contexts also differed in ways that shaped the outcomes. Heppner was a represented criminal defendant acting without attorney direction. Warner was a pro se civil litigant whose AI use reflected her own litigation strategy. And while Rakoff focused on the AI platform’s terms of service as evidence that confidentiality was destroyed, Patti did not address the terms at all, focusing instead on the nature of the tool itself.
The result is, in effect, a circuit split before any circuit court has ruled. Practitioners in the Second Circuit now face a starkly different landscape than those in the Sixth, with no appellate guidance in either jurisdiction.
The Supreme Court Settles AI Copyright
While the privilege question remains open, the Supreme Court moved to close a different debate. On March 2, 2026, the Court denied certiorari in Thaler v. Perlmutter, No. 25-449, leaving intact the D.C. Circuit’s ruling that works created solely by artificial intelligence without human authorship cannot be copyrighted.
The case had been brought by Stephen Thaler, a Missouri computer scientist who sought to register a visual artwork titled “A Recent Entrance to Paradise” that his autonomous AI system, the Creativity Machine, had generated without human creative input. Both the district court and the D.C. Circuit held that human authorship is a bedrock requirement of the Copyright Act, and the Supreme Court declined to disturb that conclusion.
As Newsweek reported, Thaler stated he expected the outcome but remained disappointed, noting that the decision exposes how deeply our intellectual property system struggles to confront autonomous machine creativity. The federal government had urged denial, arguing that copyright provisions tying protection to an author’s life cannot sensibly be applied to machines.
The practical consequence is that purely AI-generated content now occupies a vast and growing public domain. Businesses can protect AI-assisted works only when humans are sufficiently involved in directing, prompting, or altering the output.
A Profession Without a Playbook
These rulings arrive as the legal profession’s AI adoption accelerates far ahead of its governance infrastructure. According to Justia’s 50-state survey of attorney ethics rules on AI use, the regulatory landscape remains fragmented. Some judges require explicit certification that AI was used in preparing filings. Florida mandates disclosure on the face of any filing where AI contributed. California, New York, and a growing number of jurisdictions have issued detailed ethical guidance. But there is no uniform federal standard.
The ABA TECHSHOW 2026, running March 25 through 28 in Chicago, will feature nearly 60 panels addressing these challenges. A historic conversation among three ABA presidents is scheduled for March 28 to address the pressures facing the legal system and the rapidly evolving impact of technology on the profession.
Meanwhile, the gap between adoption and governance continues to widen. Clio’s data shows that up to 74 percent of hourly billable tasks could be automated with AI, yet more than half of legal professionals say their firm has no AI policy or that they are unaware of one.
What Comes Next
The Heppner and Warner rulings virtually guarantee that the privilege question will reach the appellate level within the year, and the circuit split makes eventual Supreme Court review plausible. In the meantime, the practical guidance emerging from these decisions is clear: enterprise AI tools with contractual confidentiality protections offer stronger privilege arguments than consumer platforms, attorney direction of AI use matters enormously, and documentation of both is essential.
For a profession that has spent centuries building its framework around confidential human relationships, the challenge is not whether to adopt AI but how to do so without dismantling the legal protections its clients depend on.