Meta is defending separate, high‑profile proceedings in New Mexico and California that together probe whether product design choices across Facebook and Instagram exposed minors to predation and addictive use patterns. Plaintiffs plan to rely on thousands of internal documents and behavioral‑science experts, while a bipartisan group of U.S. senators is pressing Meta for records after filings suggested safety changes were discussed internally well before they were implemented.

Separate state suits and a bellwether Los Angeles trial are using internal documents and executive testimony to challenge how product design and encryption choices affect child safety; lawmakers and international regulators are watching as outcomes could force technical remedies, new disclosure duties, or national policy responses.

Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social‑media age of 16. The step comes amid parallel EU and national scrutiny of generative‑AI features — notably a formal Brussels inquiry into X’s Grok and recent French judicial actions — signaling growing cross‑border legal pressure on platforms.

Public and political pressure across Europe, parts of the US, and other democracies is pushing social platforms to rethink how products interact with minors, prompting proposals from parental-consent frameworks to explicit age gates. Technical, legal and behavioural hurdles — from verification limits to circumvention and privacy risks — mean the result will be a fragmented set of rules, experiments and litigation rather than a single global solution.

Australia’s government publicly condemned large technology platforms for failing to stop the spread of child sexual abuse content, pressing for faster detection, clearer reporting and stronger enforcement. Officials signalled tougher oversight and potential regulatory steps that would force platforms to change their moderation practices and deepen cooperation with law enforcement.

Three Democratic senators asked the Department of Justice and Federal Trade Commission to review recent talent-focused deals by major tech firms that sidestep full acquisitions, arguing they can concentrate personnel and know‑how and harm competition. The letter highlights multimillion- and multibillion-dollar transactions tied to Meta, Google and Nvidia and urges regulators to block or unwind arrangements that violate antitrust law.

The European Commission has formally accused Meta of abusing dominance by restricting third‑party AI chat services on WhatsApp and is preparing temporary measures to keep rivals accessible while it investigates. The move comes amid related national actions — including an Italian arrangement that lets third‑party bots run on the WhatsApp Business API for a fee — and follows broader regulatory pressure globally on how messaging platforms manage AI and data flows.

Sen. Elizabeth Warren has asked Google CEO Sundar Pichai for a detailed explanation of what user signals will be shared with retailers after Google announced a checkout feature for its Gemini chatbot, warning that combining conversational context, search history and merchant data could steer purchases and create opaque preferential treatment. The inquiry comes as reported commercial deals and investor scrutiny over Gemini’s licensing and cloud ties raise the stakes for how data, compute and revenue flows are governed.