
X to Rework EU Verification After €120M DSA Penalty
Context and Chronology
EU regulators have levied a €120 million sanction compelling X to change how it signals identity to users in Europe, and the company has submitted a set of corrective measures to the European Commission for assessment. The enforcement action, the culmination of a multi-year probe, centres on how paid access and verification markers were repurposed in ways that could mislead consumers about authenticity and trust.
The Commission will scrutinize X’s proposed remedies and may accept, reject, or demand modifications; that decision will set the practical compliance requirements for verification badges and paid-status mechanics under the Digital Services Act (DSA). Acceptance is likely to require clearer labeling, tightened eligibility rules, stronger authentication primitives, and technical constraints preventing deceptive status displays.
These regulatory steps take place against a wider enforcement backdrop: separate EU inquiries and national law‑enforcement actions are probing X’s generative AI model (Grok), recommendation systems and automated tools. Brussels has opened a formal DSA investigation into Grok’s pre‑deployment risk assessments and filtering, while French authorities executed searches in Paris as part of an expanding criminal inquiry that includes allegations around ranking controls and certain AI‑generated imagery.
The coexistence of DSA remedies for verification and parallel inquiries into AI and ranking systems amplifies legal and operational stakes for X. Where the DSA process can obligate product fixes and fines, criminal probes could lead to evidence collection, executive interviews and distinct legal exposure — a dual track that complicates both public messaging and technical remediation timelines.
Operationally, product and engineering teams face a twofold task: decoupling revenue features from trust signals in verification, and documenting, testing and demonstrating mitigations for generative AI and recommendation algorithms. Legal teams must prepare audit trails, pre-deployment risk assessments and technical evidence showing that those mitigations operated effectively in practice.
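The decoupling described above can be sketched as a minimal, hypothetical data model in which paid subscription status and verified identity are stored and evaluated independently, so a badge never derives from payment alone. All names here are illustrative assumptions, not X's actual schema:

```python
from dataclasses import dataclass


@dataclass
class AccountStatus:
    # Revenue feature: does the account pay for premium access?
    has_paid_subscription: bool
    # Trust signal: has identity passed an authentication check?
    identity_verified: bool


def badge_label(status: AccountStatus) -> str:
    """Derive the user-facing label strictly from the trust signal.

    Payment status alone never yields a verification badge; at most
    it yields a visually distinct 'subscriber' label.
    """
    if status.identity_verified:
        return "verified"
    if status.has_paid_subscription:
        return "subscriber"  # distinct label, not a trust marker
    return "none"
```

The point of the separation is auditability: a regulator can verify from the data model alone that no code path converts payment into a trust marker.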
For advertisers and publishers, the ruling and parallel probes increase uncertainty over what constitutes verified inventory and which content‑generation features can be safely monetized in regulated markets. Market participants should expect fragmented national responses, temporary service restrictions for some AI features, and higher compliance costs for cross‑border campaigns.
Product timing and adjacent policy changes add complexity: X’s evolving payments roadmap and recent updates to sponsorship rules (including paid crypto promotions with geofencing requirements) create mismatches between monetization options and available in‑app rails, potentially increasing regulatory friction and audit burden.
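The geofencing requirement for sponsored crypto content can be illustrated with a hypothetical eligibility check that gates a paid promotion by the viewer's market; the market codes and restricted-market list below are invented for illustration, not drawn from X's policy:

```python
# Markets where paid crypto promotions are assumed restricted
# (illustrative placeholder list, not an actual policy mapping).
RESTRICTED_CRYPTO_MARKETS = {"GB", "FR"}


def can_show_crypto_promotion(viewer_market: str, has_partnership_label: bool) -> bool:
    """Show a crypto promotion only if it carries the paid-partnership
    label and the viewer is outside restricted markets."""
    if not has_partnership_label:
        return False
    return viewer_market.upper() not in RESTRICTED_CRYPTO_MARKETS
```

In practice such checks multiply audit burden: every restricted market needs its own evidence that the geoblock held, which is part of the compliance cost the article anticipates.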
Civil suits and national inquiries — including allegations that generative tools produced sexually explicit or harmful images — heighten reputational risk and may prompt faster or more conservative remedial steps than the DSA process alone would demand. X has publicly rejected criminal allegations and defended its practices, while also reporting adjustments to image‑generation capabilities.
Market observers expect rivals to reassess paid‑verification models across Europe; if the Commission accepts remedies that restrict paid status as a trust signal, competing platforms are likely to tighten or roll back similar monetization strategies within months to avoid DSA exposure. Smaller, compliance‑focused entrants may gain a market advantage as incumbents re‑engineer product flows.
Ultimately, the enforcement episode is likely to accelerate industry investment in stronger identity verification, clearer disclosure labels and more robust documentation of AI risk‑mitigation — trading short‑term revenue for longer‑term legal certainty and reduced information harms.
Recommended for you

Reddit hit with £14m ICO penalty over age‑verification failings
The UK Information Commissioner's Office has fined Reddit more than £14m for processing children's data without effective age assurance , marking the regulator's largest penalty tied to child privacy. The decision forces platforms to choose between stronger identity checks and privacy-preserving design, and Reddit says it will appeal.

Ofcom Demands Tighter Age Verification from Major Social Platforms
UK regulators Ofcom and the ICO have pressed major social platforms to deploy robust age‑verification measures to block under‑13 registrations, citing high self‑reported child account prevalence and very large suspected‑underage removal figures; firms now face immediate choices between third‑party/device attestations and deeper product redesigns that reshape onboarding and recommendation exposure. The push amplifies privacy, security and market‑structure tensions — from vendor data retention and a recent identity‑image breach to divergent regulatory tools and platform promises about biometric ephemerality.

X reports mass account takedowns after state-backed manipulation campaigns
X says it removed roughly 800M accounts during 2024 to disrupt coordinated, state-linked manipulation and spam. Industry disclosures from AI firms and other platform forensics suggest these campaigns can combine high-volume automated account creation with smaller, human-directed, AI‑assisted operations — complicating attribution and raising calls for cross‑industry telemetry sharing and provenance standards.

French prosecutors raid X’s Paris offices as probe into platform algorithms intensifies
Paris prosecutors, supported by national cybercrime teams and Interpol, searched X’s Paris offices as part of a widening criminal inquiry into the company’s recommendation systems and alleged illicit data access; Elon Musk and Linda Yaccarino have been summoned for voluntary interviews. The enforcement action comes amid parallel regulatory scrutiny in Brussels over Grok, X’s generative AI, and related civil litigation alleging sexually explicit synthetic images were produced without consent.
European Commission Opens Probe of X’s Grok Over AI-Generated Sexual Imagery and Possible CSAM
The European Commission has launched a formal investigation into X’s deployment of the Grok AI model to determine whether it allowed the creation or spread of sexually explicit synthetic images, including material that may meet the threshold for child sexual abuse images. The probe follows reporting and parallel legal and regulatory action in multiple jurisdictions — including a lawsuit from a woman alleging non-consensual sexualized images, national blocks on the service, and inquiries from UK, French and U.S. authorities — and will test X’s risk controls under the Digital Services Act.

EU opens DSA investigation into Shein over illegal sales and addictive design
The European Commission has launched a formal probe of Shein under the Digital Services Act to assess whether the platform allowed illegal goods and used engagement mechanics harmful to users. The review targets product removal systems, the transparency of recommender algorithms, and reward features, with breaches punishable by fines up to 6% of global turnover .

Age-Verification Mandates Force Millions into Mandatory ID Checks
State and platform-level rules are pushing broad age checks that pull ordinary adults into verification flows; vendors and platforms differ on whether inputs are ephemeral, while governments (including Brazil and Apple’s platform controls) accelerate requirements — expect vendor consolidation, single-use age credentials, and intensified legal and privacy battles.

X Allows Paid Crypto Promotions Under Paid Partnership Labels
X has opened paid crypto promotions to creators under a formal paid-partnership label while requiring geoblocks in markets where crypto ads are restricted. The change increases creator monetization but raises compliance and AML exposure, especially as X’s nascent payments layer is slated to roll out fiat-first (via a Visa tie-up) with native crypto rails left for later.