
Spain orders prosecutors to probe X, Meta and TikTok over AI-generated child sexual abuse material
Spain has directed prosecutors to launch criminal inquiries into X, Meta and TikTok over allegations that the services hosted sexually exploitative imagery of minors created with generative artificial intelligence. Madrid framed the inquiries as part of a broader domestic agenda to curb online harms, including a proposed law that would require parental permission before users under 16 can access mainstream social platforms.
Officials highlighted how synthetic-media outputs complicate automated detection: many generative systems produce photorealistic or near-photorealistic images that evade signature-based filters built around hashes of known material, and every novel image both adds to reviewer workload and forces classifiers into harder false-positive/false-negative trade-offs. That technical reality pushes platforms toward layered strategies combining automated detectors, provenance metadata, larger human-review pipelines and faster notice-and-action procedures, as in the sketch below.
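To make the layered-strategy point concrete, here is a minimal, hypothetical sketch of such a pipeline. All names and thresholds (KNOWN_HASHES, the classifier score, the review queue) are illustrative assumptions, not any platform's actual system; the point is how signature matching, classifier thresholds and human review interact.

```python
# Illustrative sketch of a layered moderation pipeline. All names and
# thresholds are assumptions for exposition, not a real platform's system.
from dataclasses import dataclass, field
from enum import Enum, auto

class Verdict(Enum):
    BLOCK = auto()          # confirmed or high-confidence illegal material
    HUMAN_REVIEW = auto()   # classifier uncertain: escalate to a reviewer
    ALLOW = auto()

KNOWN_HASHES: set[str] = set()  # signatures of previously confirmed material

@dataclass
class Pipeline:
    block_threshold: float = 0.95   # score above which content is auto-blocked
    review_threshold: float = 0.50  # score above which a human must look
    review_queue: list[str] = field(default_factory=list)

    def check(self, image_id: str, image_hash: str, score: float) -> Verdict:
        # Layer 1: signature matching catches re-uploads of known images,
        # but a freshly generated synthetic image will never match.
        if image_hash in KNOWN_HASHES:
            return Verdict.BLOCK
        # Layer 2: an ML classifier scores novel content. Raising the
        # thresholds misses more harm; lowering them floods reviewers.
        if score >= self.block_threshold:
            KNOWN_HASHES.add(image_hash)  # feed confirmed hits back into layer 1
            return Verdict.BLOCK
        if score >= self.review_threshold:
            self.review_queue.append(image_id)  # layer 3: human review
            return Verdict.HUMAN_REVIEW
        return Verdict.ALLOW
```

Note the feedback loop: hash filters only ever catch repeats, so the first appearance of each generated image must be caught by the classifier or a reviewer, which is why synthetic media multiplies the load on the later, costlier layers.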
The Spanish move arrives alongside related international activity: the European Commission has opened a formal inquiry into Grok, the generative model deployed on X, to assess its pre-deployment risk assessments and mitigation measures, and French prosecutors recently executed searches at X's Paris offices in a separate probe touching on automated recommendation systems and synthesized imagery. These parallel actions create the prospect of coordinated evidence-gathering and overlapping legal obligations across jurisdictions.
Recent reporting and litigation have intensified scrutiny: xAI has said it narrowed Grok's image-generation capabilities after tests showed that sexually explicit and non-consensual outputs could still be elicited, and a civil lawsuit claims Grok produced explicit depictions of an identifiable woman. Regulators in the UK, France and several EU member states, as well as at least one U.S. state office, have opened inquiries or issued warnings, producing a patchwork of enforcement obligations that platforms must navigate.
Operationally, companies named in the Spanish directive now face heightened legal and reputational risk. Expected responses include accelerated investment in synthetic‑content detectors and provenance signals, more rigorous model documentation and audit trails, larger moderation teams, and careful public communications to limit escalation and potential cross‑border enforcement actions.
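To illustrate what "provenance signals" could mean in practice, here is a minimal sketch of inspecting an embedded provenance manifest in the spirit of C2PA-style content credentials. The manifest layout and field names below are hypothetical assumptions for exposition, not the actual C2PA schema or any vendor's API.

```python
# Hypothetical provenance check in the spirit of C2PA-style content
# credentials. The manifest layout and field names are illustrative
# assumptions, not the real C2PA schema or any vendor's API.
from typing import Optional

def provenance_label(manifest: Optional[dict]) -> str:
    """Classify an upload by its (claimed) provenance metadata."""
    if manifest is None:
        # Stripped or never-present metadata proves nothing either way,
        # so the item falls back to detector and review layers.
        return "unknown-provenance"
    if not manifest.get("signature_valid", False):
        # A tampered or unverifiable manifest is itself a risk signal.
        return "invalid-manifest"
    for assertion in manifest.get("assertions", []):
        if assertion.get("action") == "generated" and assertion.get("tool"):
            # Cooperating generators can self-label synthetic output,
            # letting platforms route it to stricter checks.
            return f"ai-generated:{assertion['tool']}"
    return "captured-or-edited"

# Example: a self-labeled synthetic image routed to stricter review.
print(provenance_label({
    "signature_valid": True,
    "assertions": [{"action": "generated", "tool": "example-model"}],
}))  # -> "ai-generated:example-model"
```

The design limitation is the same one regulators face: provenance only helps when generators cooperate and metadata survives re-encoding, so it complements rather than replaces detection.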
Possible outcomes range from remediation orders and injunctions to fines under the EU Digital Services Act or findings of domestic criminal liability. The mix of criminal probes and regulatory reviews increases the chance of urgent compliance demands, including evidence preservation, disclosure to prosecutors, and coordinated takedown requests spanning cloud hosts and jurisdictions.
For policymakers and child-safety advocates, the move signals that synthetic child-abuse imagery is shifting from a moderation challenge to a prosecutable harm, and it is likely to prompt calls for mandatory technical standards, clearer age-verification rules and faster enforcement pathways. For platforms, the trade-offs are stark: stronger age checks and provenance systems impose privacy and operational costs, and may push younger users toward circumvention tools or niche services.
Watch for accelerated regulatory coordination in Europe, more detailed expectations about pre‑deployment testing of generative models, and potential litigation that seeks to define the line between platform liability and developer responsibility for high‑risk AI outputs. How platforms balance detect‑and‑prevent measures with user privacy and product design will shape the near‑term industry response.