The March 11 Deadline: Inside the Federal Government's Bid to Dismantle State AI Regulation
As the Commerce Department's 90-day deadline approaches, a constitutional confrontation over AI governance is taking shape between the White House and state legislatures across the country.
Overview
On March 11, 2026, the U.S. Department of Commerce is expected to publish a consequential report: a comprehensive evaluation identifying state artificial intelligence laws that the federal government considers “overly burdensome” to innovation. The report will serve as a hit list of sorts, flagging state regulations for potential legal challenge by the Department of Justice’s newly created AI Litigation Task Force. With two weeks until the deadline, the confrontation between the White House’s deregulatory ambitions and a nationwide wave of state-level AI governance is approaching its first inflection point.
The stakes extend well beyond administrative procedure. At issue is whether the patchwork of state AI laws enacted across at least a dozen states in 2025 and early 2026 will survive federal scrutiny, and whether the executive branch can effectively override state legislatures without an act of Congress.
The Executive Order That Started It All
On December 11, 2025, President Trump signed Executive Order 14365, formally titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order declared that the United States would “sustain and enhance” its global AI dominance through a “minimally burdensome national policy framework” and directed multiple federal agencies to take action against state regulations deemed inconsistent with that goal.
The order established three enforcement mechanisms. First, it directed the Secretary of Commerce to produce, within 90 days, a report evaluating every existing state AI law and identifying those that “merit referral” to the DOJ for legal challenge. Second, it created the AI Litigation Task Force within the Department of Justice, charged with filing federal lawsuits against state laws on grounds including unconstitutional regulation of interstate commerce and federal preemption. Third, it conditioned approximately $21 billion in non-deployed broadband infrastructure funding under the Broadband Equity, Access, and Deployment (BEAD) program on states repealing or declining to enforce AI regulations the administration deems “onerous.” The overall BEAD program totals $42.45 billion, but the executive order specifically targets the remaining non-deployment funds that have been allocated to states but not yet disbursed for construction.
The executive order also directed the Federal Trade Commission to issue a policy statement classifying certain state-mandated AI bias mitigation requirements as potentially deceptive trade practices, and instructed the Federal Communications Commission to consider federal AI disclosure standards within 90 days of the Commerce review.
The DOJ Task Force Takes Shape
On January 9, 2026, Attorney General Pam Bondi formally announced the creation of the AI Litigation Task Force, staffed by representatives from the offices of the Deputy and Associate Attorney General, the DOJ Civil Division, and the Solicitor General’s office. The task force operates in consultation with White House AI and cryptocurrency advisor David Sacks, who helps determine which state laws to challenge first, according to CBS News.
The legal strategy relies primarily on the Dormant Commerce Clause, arguing that a diverse landscape of state regulations creates impermissible barriers to the national deployment of AI systems. The task force also plans to invoke federal preemption arguments and, in some cases, First Amendment challenges to state laws requiring specific disclosures or output modifications from AI systems.
As of late February, the task force has not yet filed its first lawsuits, though legal observers widely expect filings to begin around or shortly after the March 11 Commerce Department report.
Which State Laws Are in the Crosshairs
The executive order’s language offers clear signals about which state regulations the administration views as most problematic. At minimum, the Commerce review must flag state laws that “require AI models to alter truthful outputs” or “compel disclosures or reporting that would violate the First Amendment or other constitutional protections.”
Several major state AI laws took effect on January 1, 2026, and now find themselves in uncertain legal territory:
Colorado’s AI Act (SB 24-205) is widely regarded as the most likely first target. The law requires developers and deployers of high-risk AI systems to exercise “reasonable care” to prevent algorithmic discrimination, defined to include disparate-impact outcomes. Colorado already delayed implementation from February 1 to June 30, 2026, after intense industry lobbying during an August 2025 special legislative session. The executive order’s emphasis on preventing AI systems from producing “false” results appears aimed squarely at Colorado-style anti-discrimination mandates.
California’s Transparency in Frontier AI Act (SB 53) requires developers of frontier models trained with computing power exceeding 10^26 operations to publish safety frameworks addressing catastrophic risks, including biological weapons development and cyberattacks. California also enacted AB 2013, which mandates that generative AI developers publish summaries of their training data, including sources, data types, and whether personal information was used.
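For a sense of scale, the 10^26-operation threshold is commonly estimated in industry practice using the rule of thumb that training compute is roughly 6 times the model's parameter count times the number of training tokens. The sketch below applies that heuristic; note that the 6·N·D approximation and the example parameter and token counts are illustrative assumptions, not language from the statute itself.

```python
# Rough check of whether a training run crosses SB 53's 10^26-operation
# threshold. The 6 * N * D formula is an industry rule of thumb for total
# training FLOPs, not a method specified by the law; the model sizes below
# are hypothetical.

SB53_THRESHOLD = 10**26  # operations (integer or floating-point)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D heuristic."""
    return 6 * params * tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
flops = training_flops(1e12, 20e12)
print(f"{flops:.2e} operations; over SB 53 threshold: {flops > SB53_THRESHOLD}")

# A much smaller run (1 billion parameters, 2 trillion tokens) falls far short.
small = training_flops(1e9, 2e12)
print(f"{small:.2e} operations; over SB 53 threshold: {small > SB53_THRESHOLD}")
```

Under this heuristic, only a handful of the largest training runs to date would plausibly clear the statute's bar, which is consistent with the law's stated focus on frontier models.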
Texas’s Responsible AI Governance Act (HB 149) prohibits AI systems used for “restricted purposes” including encouraging self-harm or violence, and empowers the Texas Attorney General to issue investigative demands for system documentation. Violations carry penalties up to $200,000 per incident.
New York’s RAISE Act, signed by Governor Hochul on December 19, 2025, establishes the first state-level reporting and safety governance regime for frontier AI model developers, with a January 1, 2027 effective date. The law creates a new oversight office within the Department of Financial Services with rule-making authority, and allows the Attorney General to bring civil actions with penalties reaching $3 million per violation, as described in the Governor’s announcement.
The $21 Billion Lever
Perhaps the most potent tool in the executive order is not litigation but funding. By conditioning approximately $21 billion in non-deployed BEAD broadband infrastructure money on states’ willingness to repeal AI regulations, the administration has introduced a financial calculus that could prove more immediately effective than years of court battles. These non-deployment funds, part of the broader $42.45 billion BEAD program, were originally intended for purposes such as digital literacy programs and workforce training before being consolidated into a “remaining funds” pool.
For states that have invested significant political capital in passing AI legislation, the choice between maintaining regulatory frameworks and securing federal broadband funding represents a genuine dilemma. Rural states with pressing broadband needs may find the financial pressure particularly difficult to resist, even if their attorneys general are prepared to fight federal lawsuits.
Congressional Pushback
The executive order has drawn bipartisan criticism in Congress. Senator Ed Markey of Massachusetts led ten colleagues in introducing the States’ Right to Regulate AI Act, legislation that would prohibit the administration from implementing the preemption provisions of the executive order, according to a press release from his office. Markey characterized the executive order as “an indefensible and irresponsible power grab” and noted that a “broad, bipartisan coalition in Congress has rejected the AI moratorium again and again.”
The legislative response highlights a constitutional tension: the executive order attempts to achieve through administrative action what Congress has repeatedly declined to do through legislation. The National Defense Authorization Act for Fiscal Year 2026 notably excluded language that would have preempted state AI regulation, suggesting limited congressional appetite for a federal override.
What We Don’t Know
Several critical uncertainties remain as the March 11 deadline approaches.
The Commerce Department’s methodology for evaluating state laws has not been disclosed. It is unclear whether the report will provide detailed legal analyses of individual statutes or issue broader categorical judgments about types of regulation the administration opposes.
The DOJ task force’s litigation strategy remains opaque. While the Dormant Commerce Clause provides a plausible theory, courts have historically given states significant latitude in regulating consumer protection and workplace safety. Whether federal judges will view AI regulation through the lens of technology innovation or consumer protection could determine outcomes.
The executive order’s carve-outs complicate the picture further. The order expressly preserves state authority over child safety protections, AI compute and data center infrastructure permitting, and state government procurement. Where those carve-outs end and the preemption provisions begin is likely to generate substantial litigation of its own.
Finally, the political durability of the executive order is uncertain. Executive orders can be rescinded by future administrations, and ongoing congressional opposition could result in legislative riders or appropriations restrictions that limit enforcement.
Analysis
The approaching March 11 deadline represents something more significant than a single administrative report. It marks the beginning of what legal scholars are already calling a potential constitutional confrontation over technology governance.
The administration’s strategy combines legal, financial, and administrative pressure in a way that is designed to reshape state AI policy without requiring congressional approval. Even if the DOJ loses early court challenges, the uncertainty created by active federal litigation could have a chilling effect on state legislatures considering new AI bills. As of early February, approximately 300 AI-related bills were being tracked across state legislatures, and the federal government’s posture could slow that momentum considerably. The IAPP’s AI legislation tracker continues to document the breadth of state-level legislative activity.
Yet there are reasons to question whether the preemption strategy will succeed. Executive orders cannot unilaterally override state legislation; that requires either an act of Congress or a court ruling. The Dormant Commerce Clause theory, while creative, faces skepticism from legal analysts who note that state consumer protection and anti-discrimination laws have traditionally survived such challenges. And the BEAD funding condition may face its own legal vulnerabilities if states argue it constitutes unconstitutional coercion.
The absence of comprehensive federal AI legislation means the nation is navigating a genuinely novel regulatory landscape. States have filled the vacuum left by congressional inaction, and the executive branch is now attempting to claw back that authority through means that may or may not survive judicial review.
What happens in the next few weeks will set the tone for AI governance in the United States for years to come. The Commerce Department report will reveal the administration’s priorities. The DOJ’s first lawsuits will test whether its legal theories hold up in court. And state legislatures will decide whether to stand their ground or bend under federal pressure. The outcome will determine whether the United States develops a unified national framework for AI oversight, or continues down the path of fragmented, state-by-state governance.