Landmark Jury Verdicts Find Meta and Google Liable for Addictive Platform Design, Opening the Door to Thousands of Pending Lawsuits
Juries in Los Angeles and New Mexico found Meta and Google liable for addictive platform design and child safety failures, awarding a combined $381 million and creating a litigation template for thousands of pending cases.
Overview
Two jury verdicts delivered in the final week of March 2026 mark the first time a legal theory holding social media companies liable for the addictive design of their platforms has succeeded at trial, a development that legal analysts say could fundamentally alter how courts evaluate technology products nationwide.
On March 25, a Los Angeles Superior Court jury found Meta and Google liable for deliberately designing Instagram and YouTube to be addictive, awarding $6 million in damages to a plaintiff identified as K.G.M., who began using the platforms as a child and developed depression and suicidal thoughts. Meta was assigned 70 percent of liability and Google 30 percent. The jury awarded $3 million in compensatory damages and an additional $3 million in punitive damages, according to NPR.
The day before, on March 24, a separate jury in Santa Fe, New Mexico, ordered Meta to pay $375 million in civil penalties for violating the state’s consumer protection laws by failing to protect children from predators on its platforms, according to CNBC. New Mexico became the first state to prevail at trial against a major technology company on child safety grounds.
The Product Liability Theory
The legal strategy that proved decisive in the Los Angeles case draws directly from decades of tobacco and pharmaceutical litigation. Rather than challenging user-generated content — which would invoke Section 230 protections shielding platforms from liability for third-party speech — the plaintiffs’ attorneys argued that the platforms themselves are defective products whose design choices cause foreseeable harm.
The presiding judge instructed jurors that how content is delivered is legally distinct from the content itself, a distinction that limited the applicability of Section 230 defenses, according to The Conversation. That separation allowed the court to examine specific engineering decisions (infinite scroll, notification systems, algorithmic ranking optimized for retention, and autoplay features) as design defects rather than editorial choices about speech.
Plaintiff’s attorney Mark Lanier framed the argument in blunt terms: “These companies built machines designed to addict the brains of children, and they did it on purpose,” as reported by The Conversation.
Internal Evidence
Internal corporate communications proved critical to establishing liability. The trial surfaced evidence from an internal Meta study referred to as “Project Myst,” which, according to The Conversation, showed that children experiencing adverse effects from the platforms became the most addicted users, and that Meta executives were aware of this dynamic.
Arturo Bejar, a former Facebook engineering leader, testified that the features at issue “were not subject to any meaningful safety reviews” and that internal safety protections “got whittled down” through corporate processes, leaving them “ineffective at providing safety,” as reported by Scientific American.
Legal expert Rob Nicholls characterized the internal evidence as the turning point, noting that communications comparing platform effects to drugs and gambling demonstrated corporate knowledge of the harm being caused. Nicholls described the verdicts as potentially “big tech’s big tobacco moment,” according to The Conversation.
The New Mexico Case
The New Mexico verdict arose from a 2023 lawsuit filed by Attorney General Raul Torrez following an undercover operation in which investigators created a fake profile of a 13-year-old girl that was rapidly inundated with predatory solicitations on Meta’s platforms. The jury imposed the maximum penalty under New Mexico law of $5,000 per violation, totaling $375 million, according to CNBC.
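The reporting cited above gives only the total, but at the $5,000 statutory maximum the arithmetic implies the jury found roughly 75,000 separate violations:

\[
\frac{\$375{,}000{,}000}{\$5{,}000 \text{ per violation}} = 75{,}000 \text{ violations}
\]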
During the trial, prosecutors revealed internal Meta communications in which employees discussed how CEO Mark Zuckerberg’s 2019 decision to make Facebook Messenger end-to-end encrypted by default would affect the company’s ability to report child sexual abuse material to law enforcement. A second trial phase is scheduled to begin on May 4, in which a judge will determine whether Meta created a public nuisance and should fund remedial programs, according to CNBC.
Implications for Design
Research from Carnegie Mellon’s Human-Computer Interaction Institute, cited during the proceedings, demonstrated that removing attention-capture elements from social media platforms reduced daily usage by approximately 21 minutes on average while maintaining user satisfaction, according to Scientific American. Gregory Dickinson, a law professor at the University of Nebraska, compared the current state of social media design to “a slot machine that knew all your favorite games, buzzed in your pocket when your friends started playing and automatically spun the next round unless you opted out,” as quoted by Scientific American.
Possible design modifications that could emerge from continued litigation include making autoplay off by default, reducing notification frequency, implementing less aggressive recommendation systems for younger users, and adding features that encourage breaks rather than continuous engagement.
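As a purely illustrative sketch, those modifications amount to flipping a handful of engagement-oriented defaults for accounts belonging to minors. None of the names below correspond to any real platform API, and the specific values are hypothetical:

    from dataclasses import dataclass, replace
    from typing import Optional

    @dataclass
    class FeedSettings:
        # Hypothetical per-account defaults used only to illustrate the design
        # changes discussed above; not any platform's actual configuration.
        autoplay: bool = True
        notification_cap_per_day: Optional[int] = None  # None means unlimited
        ranking: str = "engagement"       # rank by predicted time-on-platform
        break_reminders: bool = False

    def safer_defaults_for_minor(current: FeedSettings) -> FeedSettings:
        # Apply the kinds of litigation-driven defaults described in the text.
        return replace(
            current,
            autoplay=False,               # autoplay off by default
            notification_cap_per_day=10,  # illustrative cap on notification frequency
            ranking="chronological",      # less aggressive recommendations for younger users
            break_reminders=True,         # encourage breaks over continuous engagement
        )

    if __name__ == "__main__":
        print(safer_defaults_for_minor(FeedSettings()))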
What We Don’t Know
Both Meta and Google have stated they plan to appeal, and the appellate outcomes remain uncertain. Meta has argued that “teen mental health can’t be linked to a single app,” while Google contends that “YouTube isn’t social media,” as reported by NPR. Whether appellate courts will uphold the distinction between content delivery mechanisms and content itself — the legal theory that bypassed Section 230 — has not been tested at the circuit level.
The broader litigation landscape is vast. Over 1,600 cases are pending in California alone, with more than 10,000 individual cases and 800 school district claims filed nationwide, according to Scientific American. TikTok and Snap settled before the Los Angeles trial reached a verdict, suggesting that some companies may prefer settlements to the risk of jury findings on addictive design. Many observers expect the Supreme Court will ultimately weigh in on whether the product liability theory can survive Section 230 challenges.
The financial exposure for Meta and Google extends well beyond these initial awards. If the product liability framework holds on appeal, it would establish a template for proving that specific design choices — not content — cause measurable harm, potentially exposing every major social media platform to damages claims on a scale comparable to the tobacco settlements of the 1990s.
Analysis
The verdicts represent the first time a jury has accepted the argument that a social media platform’s design, independent of the content it hosts, constitutes a defective product. This is a significant legal development because it routes around the protection that Section 230 has historically provided to technology companies. By treating infinite scroll, autoplay, and algorithmic recommendation as engineering decisions subject to product liability standards — rather than as editorial choices about speech — the plaintiffs established a framework that other litigants can replicate.
The practical question is whether these verdicts will force design changes before the appellate process concludes. The tobacco analogy, while frequently invoked, is instructive: individual lawsuit victories accumulated over years before producing industry-wide behavioral changes, and the decisive shift came through coordinated state attorney general actions rather than any single jury award. The New Mexico verdict, brought by a state attorney general, fits this pattern precisely.
For Meta, the compounding pressure is notable. The company now faces a $375 million state penalty, a $6 million product liability verdict, and the prospect of thousands of additional cases proceeding under the same legal theory. Google’s exposure is narrower but still significant, particularly if courts begin to scrutinize YouTube’s recommendation algorithm and autoplay features under the same product defect standard. The financial stakes are secondary to the structural question: whether social media companies will be required to redesign their core engagement mechanisms, and if so, what that means for the advertising-driven business models that depend on maximizing time spent on platform.