Parliament is advancing an amendment that forces online platforms to remove non-consensual intimate images within a strict timeframe. Under the proposal, firms must act quickly to take down flagged content or face significant penalties.
The deadline set by lawmakers is 48 hours from notification, and regulators could impose fines of up to 10% of a firm's global revenue. In addition, guidance would empower internet service providers to block access to sites that host illegal material and fall outside existing controls.
Victims would be able to submit a single report that covers all platforms, removing the need to contact each service separately. Platforms would also be required to implement measures that stop removed images being re-uploaded.
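How platforms would meet that re-upload duty is not prescribed, but the common approach is hash matching: a fingerprint of each removed image is kept on a blocklist and every new upload is compared against it. The sketch below is a minimal illustration of that idea rather than any mandated design; it assumes the Pillow imaging library and uses a simple 64-bit average hash with an arbitrary match threshold, whereas production systems rely on far more robust industry hash-matching schemes.

```python
# Minimal sketch of a re-upload blocklist built on a perceptual "average hash".
# Assumes the Pillow library is installed (pip install Pillow); the 64-bit aHash
# and the match threshold are illustrative stand-ins for the far more robust
# hash-matching schemes real platforms use.
from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit fingerprint
MATCH_THRESHOLD = 5    # max Hamming distance still treated as the same image (assumed)


def average_hash(img: Image.Image) -> int:
    """Downscale to 8x8 greyscale and set one bit per pixel at or above the mean."""
    small = img.convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px >= mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


class RemovalBlocklist:
    """Keeps hashes of removed images and flags near-duplicate re-uploads."""

    def __init__(self) -> None:
        self._hashes = set()

    def add_removed(self, img: Image.Image) -> None:
        self._hashes.add(average_hash(img))

    def is_blocked(self, img: Image.Image) -> bool:
        h = average_hash(img)
        return any(hamming(h, known) <= MATCH_THRESHOLD for known in self._hashes)


if __name__ == "__main__":
    blocklist = RemovalBlocklist()

    # Stand-in for a removed image: left half black, right half white.
    removed = Image.new("L", (256, 256), color=0)
    removed.paste(255, (128, 0, 256, 256))
    blocklist.add_removed(removed)

    # A resized copy of the same content should still be caught.
    reupload = removed.resize((128, 128))
    print(blocklist.is_blocked(reupload))   # True
```

A perceptual hash of this kind tolerates resizing and re-encoding that would defeat an exact byte-for-byte checksum, which is why duplicate-detection pipelines generally favour it over plain file hashing.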
The move responds to new threats from automated image tools and so-called nudification technologies, which generate sexualised versions of ordinary photos and amplify the harm to victims. High-profile incidents involving AI-generated or altered imagery have pushed ministers to close perceived gaps in the current regulatory framework, including the Online Safety Act’s reach.
Crucially, ministers are also proposing to use the same amendment to impose duties on generative chat and multimodal AI agents, giving regulators expedited powers to demand technical mitigations and pre-deployment risk assessments for models whose image features can produce sexualised or non-consensual content. That decision follows independent findings that some systems, including xAI’s Grok, produced explicit or sexualised outputs that safety modes failed to block reliably.
Regulators such as Ofcom would be empowered to require audit trails, demonstrable ‘reasonable steps’ and faster remedies than are currently available under the Online Safety Act, and could order platform changes or blocking where necessary. Policy options being discussed include tighter controls on sharing explicit images, limits on feature designs that amplify exposure, and stricter age assurance for younger users.
Parliamentary research recorded a roughly 20.9% rise in reports of intimate-image abuse during 2024, a statistic lawmakers cited to justify the urgency. If enacted, smaller platforms would need to upgrade detection and takedown workflows quickly, while larger firms would face intensified enforcement and potential service disruption in the UK if they did not comply.
Civil society groups welcomed the shift in responsibility toward platforms but warned that practical challenges, such as privacy-preserving age verification, the limits of geofencing and the risk of over-removal by automated systems, will require careful design and oversight. Lawmakers and regulators say the amendment is intended to create faster legal pathways to prevent foreseeable harms from multimodal agents without repeatedly reworking primary statute.

New Delhi has shortened the legal deadline for social platforms to remove unlawful material from 36 hours to three hours under amended IT rules, forcing faster moderation and placing greater operational strain on firms such as Meta and X. The move fits into a broader policy push that could produce a patchwork of aggressive, technically difficult measures, from compressed takedown timelines to potential age-verification experiments, raising costs, privacy risks and legal friction for platforms.

The UK government will amend the Crime and Policing Bill to bind AI conversational agents to duties in the Online Safety Act, creating enforceable obligations and penalties for failing to prevent illegal content. The move, prompted by recent product testing and regulatory probes into services such as xAI’s Grok, equips regulators to impose faster child-safety measures, including a proposed minimum social media age and limits on attention-maximising features.

Australia’s government publicly condemned large technology platforms for failing to stop the spread of child sexual abuse content, pressing for faster detection, clearer reporting and stronger enforcement. Officials signalled tougher oversight and potential regulatory steps that would force platforms to change moderation practices and improve cooperation with law enforcement.

Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social-media age of 16. The step comes amid parallel EU and national scrutiny of generative-AI features, notably a formal Brussels inquiry into X’s Grok and recent French judicial actions, signalling growing cross-border legal pressure on platforms.

Parent company Aylo will make Pornhub inaccessible to new UK visitors after February 2 unless they complete the required age checks; previously verified accounts remain functional. Aylo warns that restrictive verification rules may push users toward unregulated sites, and says the rules raise privacy and enforcement concerns.

The European Commission has launched a formal investigation into X’s deployment of the Grok AI model to determine whether it allowed the creation or spread of sexually explicit synthetic images, including material that may meet the threshold for child sexual abuse images. The probe follows reporting and parallel legal and regulatory action in multiple jurisdictions — including a lawsuit from a woman alleging non-consensual sexualized images, national blocks on the service, and inquiries from UK, French and U.S. authorities — and will test X’s risk controls under the Digital Services Act.

Public and political pressure across Europe, parts of the US, and other democracies is pushing social platforms to rethink how products interact with minors, prompting proposals from parental-consent frameworks to explicit age gates. Technical, legal and behavioural hurdles — from verification limits to circumvention and privacy risks — mean the result will be a fragmented set of rules, experiments and litigation rather than a single global solution.

An industry watchdog identified dozens of AI-powered apps in the Apple and Google app stores that convert ordinary photos into sexualised images, prompting staggered removals, suspensions and conflicting counts from stakeholders. The episode dovetails with separate regulatory scrutiny of large generative systems, including an EU inquiry into xAI’s Grok and nonprofit findings that flagged weak age and safety controls, underscoring rising demands for pre-deployment risk assessments, stronger store admission controls and cross-border data safeguards.