UK Brings AI Chatbots Under Online Safety Act, Threatening Fines of Up to 10 Percent of Global Revenue
Britain closes a legal loophole to bring AI chatbots under the Online Safety Act after Grok's non-consensual image scandal, with penalties reaching 10% of global revenue and a broader consultation on banning social media for under-16s.
Overview
The United Kingdom is moving to close a regulatory gap that has left AI chatbots outside the reach of its primary digital safety legislation. Prime Minister Keir Starmer announced on February 15 that the government will amend the Crime and Policing Bill to force all AI chatbot providers to comply with illegal content duties under the Online Safety Act, according to an official UK government statement. Providers that fail to comply face fines of up to 10 percent of global revenue, and in the most serious cases, courts could block platforms from operating in the UK entirely.
The move comes after Elon Musk’s Grok chatbot on the X platform was found generating non-consensual sexualized images, an incident that exposed the limits of the current regulatory framework.
What Triggered the Crackdown
The Online Safety Act, passed in 2023, was designed to hold social media platforms accountable for harmful content. However, its scope did not explicitly cover one-to-one AI chatbot interactions unless information was shared with other users — a loophole that left Grok, ChatGPT, and other conversational AI tools largely unregulated, as NBC News reported.
The catalyst was Grok’s generation of non-consensual intimate images. Technology Secretary Liz Kendall publicly confronted the issue, stating that she “stood up to Grok and Elon Musk when they flouted British laws and British values,” which led to the removal of that feature, according to the UK government announcement. Ofcom, the UK’s communications regulator, had acknowledged it lacked the authority to regulate tools like Grok under existing law.
What the Amendment Does
The government is tabling an amendment to the Crime and Policing Bill — legislation already before Parliament — to bring AI chatbots within the Online Safety Act’s scope. The key provisions include:
- Illegal content duties: AI chatbot providers must protect users from illegal content, including non-consensual intimate imagery, child sexual abuse material, and content that encourages self-harm or suicide.
- Enforcement teeth: Ofcom gains the power to fine non-compliant providers up to 10 percent of their global revenue. Courts can block platforms from operating in the UK.
- Speed of implementation: By amending existing legislation rather than drafting new laws, the government aims to have the changes take effect within months rather than years.
Starmer framed the urgency in direct terms. “Technology is moving really fast, and the law has got to keep up,” he said, as reported by Al Jazeera.
Broader Child Safety Consultation
The chatbot amendment is part of a wider package of digital safety measures. The government is launching a public consultation in March that will consider:
- Social media age limits: A potential Australian-style ban on social media for children under 16, with Starmer’s office indicating the government wants to act on findings within months, according to NBC News.
- Addictive feature restrictions: New powers to crack down on infinite scrolling and auto-play functionality. Starmer declared: “If that means a fight with the big social media companies, then bring it on,” as The Register reported.
- VPN restrictions for minors: Limitations on children’s ability to use virtual private networks to circumvent age verification systems.
- Data preservation orders: A proposed “Jools’ Law” requiring tech companies to retain deceased children’s device data to support investigations.
- Stranger pairing controls: Powers to curb anonymous stranger-matching features on gaming consoles.
Technology Secretary Kendall said the regulatory gap around AI chatbots would be addressed before June 2026, with the broader consultation measures following on a compressed timeline, as reported by NBC News.
What We Don’t Know
Several critical implementation details remain unresolved. The government has not specified how age verification for AI chatbots will work in practice — a technically challenging problem that digital rights groups have flagged. Critics warn that meaningful age-gating at scale would require mass age verification across large sections of the internet, as The Register noted.
Child protection organizations have raised a separate concern: that overly restrictive measures could push harmful activity into less regulated spaces, or create a sharp “cliff edge” at age 16, as NBC News reported. The specific definitions of “illegal content” as applied to AI-generated outputs — particularly edge cases involving creative or satirical content — also remain to be tested.
Industry response has been muted so far. OpenAI has previously introduced parental controls and age-prediction tools, but no major AI provider has publicly commented on the UK amendment specifically.
International Context
The UK is not acting in isolation. Australia became the first country to prohibit under-16s from social media platforms, and Spain, Greece, and Slovenia are developing similar bans, according to Al Jazeera. France is debating comparable legislation. The EU’s Digital Services Act and AI Act already impose some obligations on AI providers, but the UK’s approach of extending existing content moderation law directly to chatbots represents a distinct regulatory strategy — one that treats conversational AI systems as platforms with content safety obligations rather than as tools.
Both the governing Labour Party and the opposition Conservatives support age restrictions in principle, giving the measures strong cross-party backing, though the parties differ on implementation details, as Al Jazeera reported.