
UK moves to force AI chatbots like ChatGPT and Grok to block illegal content under Online Safety Act
The government is extending legal obligations to generative chat interfaces and intends to make non-compliance a punishable offense within weeks, targeting providers such as ChatGPT and Grok. The amendment will graft chatbot duties onto the Online Safety Act via the Crime and Policing Bill, and create expedited powers so ministers and regulators can impose child‑protection measures faster than under current law.
The legislative push follows mounting evidence and official inquiries into real‑world product behaviour. An independent evaluation found xAI’s Grok delivered sexually explicit and otherwise harmful outputs in ways that age‑detection and safety modes failed to block reliably; that assessment, combined with the emergence of an automated image‑generation feature that produced sexualised depictions and was subsequently removed, has already prompted regulatory scrutiny in multiple jurisdictions including a formal inquiry in Brussels. Separate civil litigation over non‑consensual sexualised images has added legal pressure, and xAI has publicly narrowed some image‑generation capabilities while regulators weigh procedural and technical failures.
Under the amendment, enforcement bodies—most notably Ofcom—would be empowered to require technical mitigations, demand audit trails and pre‑deployment risk assessments, and levy fines or other remedies when models serve or facilitate illegal material. Proposed levers include a minimum social‑media age of 16, tighter controls on sharing explicit images, restrictions on attention‑maximising product features such as infinite scroll, and limits on minors’ access to AI agents and privacy‑preserving tools such as VPNs that complicate age assurance.
These rules will force platform and model operators to rework safety pipelines: more robust filtering, stricter prompt constraints, mandatory red‑teaming and adversarial testing, comprehensive logging and incident response, and demonstrable pre‑launch documentation. The Common Sense‑style findings highlight recurring technical gaps—weak age‑assurance, mode‑specific overrides, and inadequate adversarial testing—that regulators are likely to treat as evidence of insufficient “reasonable steps.”
Practical challenges remain: scalable, privacy‑preserving age verification is technically and politically fraught; geofencing and account‑based approaches can be bypassed by shared credentials or VPNs; and per‑market safety settings risk fragmenting services and increasing operational costs. The UK measure sits in a broader, contested international landscape where the EU, member states and other countries are exploring age caps, product‑level constraints and faster enforcement, producing a patchwork of responses that will push multinational vendors toward regionally differentiated configurations or conservative global defaults.
For technology teams, the message is clear: invest in demonstrable governance—independent testing, logging, model documentation and explainability—because statutory duties will hinge on provable due diligence rather than post‑hoc takedowns. For policymakers, the amendment creates a faster legal pathway to address emergent harms from multimodal agents without repeatedly reworking primary statute. For civil society, the change promises stronger protections for minors but intensifies debates about innovation friction, data privacy in age‑verification, and how responsibility is allocated between model creators and hosting platforms. Overall, the amendment accelerates an international trend toward capability‑aware regulation and raises the stakes for vendors to show they can prevent foreseeable harms before deployment.
Read Our Expert Analysis
Create an account or login for free to unlock our expert analysis and key takeaways for this development.
By continuing, you agree to receive marketing communications and our weekly newsletter. You can opt-out at any time.
Recommended for you

Spain orders prosecutors to probe X, Meta and TikTok over AI-generated child sexual abuse material
Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social‑media age of 16. The step comes amid parallel EU and national scrutiny of generative‑AI features — notably a formal Brussels inquiry into X’s Grok and recent French judicial actions — signaling growing cross‑border legal pressure on platforms.

Independent Review Finds xAI’s Grok Fails to Protect Minors, Spurs Regulatory Alarm
A Common Sense Media review concludes Grok routinely exposes under-18 users to sexual, violent and conspiratorial content while offering weak or bypassable age protections. The findings have already fed cross-border scrutiny — including an EU formal inquiry and a U.S. civil lawsuit alleging nonconsensual explicit image generation — that could trigger enforcement under emerging AI and platform safety rules.

