

Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social-media age of 16. The step comes amid parallel EU and national scrutiny of generative-AI features, notably a formal Brussels inquiry into X's Grok and recent French judicial actions, signaling growing cross-border legal pressure on platforms.

Australia's communications minister has formally asked Roblox to explain how it protects children and requested government testing of the platform's safeguards while urging a review of its PG classification. The move reflects a broader Australian push to convert public criticism of platforms into enforceable oversight and could lead to technical mandates or regulatory sanctions if protections are judged insufficient.

Public and political pressure across Europe, parts of the US, and other democracies is pushing social platforms to rethink how their products interact with minors, prompting proposals ranging from parental-consent frameworks to explicit age gates. Technical, legal, and behavioral hurdles, from verification limits to circumvention and privacy risks, mean the result will be a fragmented set of rules, experiments, and litigation rather than a single global solution.

Meta is defending separate, high-profile proceedings in New Mexico and California that together probe whether product design choices across Facebook and Instagram exposed minors to predation and addictive use patterns. Plaintiffs plan to rely on thousands of internal documents and behavioral-science experts, while a bipartisan group of U.S. senators is pressing Meta for records after filings suggested safety changes were discussed internally well before they were implemented.

Separate state suits and a bellwether Los Angeles trial are using internal documents and executive testimony to challenge how product design and encryption choices affect child safety; lawmakers and international regulators are watching, as outcomes could force technical remedies, new disclosure duties, or national policy responses.

Julie Inman Grant has become the public face and enforcement engine behind Australia's controversial under-16 social media ban, navigating legal fights, platform resistance and intense personal abuse. Her office is implementing the law across major services while preparing for court challenges and shifting attention to regulation of AI and platform design.

A California jury will weigh claims that features in major social apps engineered compulsive use and harmed a young plaintiff's mental health. The case pits users' harm allegations against platforms' legal defenses and could reshape liability rules and product design incentives across the industry.

A UK parliamentary amendment would require online platforms to remove non-consensual intimate images within 48 hours of a victim's report and to block further re-uploads, with fines of up to 10% of global revenue for non-compliance. The proposal also seeks to extend duties to generative AI and chat interfaces and would give regulators (notably Ofcom) expedited powers to demand technical mitigations and pre-deployment risk assessments.