
X reports mass account takedowns after state-backed manipulation campaigns
Context and Chronology
X disclosed an unprecedented moderation surge, saying its teams identified and removed roughly 800 million accounts during 2024 as part of operations to counter coordinated manipulation. The claim was presented to UK parliamentary officials in a briefing led by Wifredo Fernández, who warned that new inauthentic networks are created daily. X framed the effort as an active defence against state-linked campaigns that amplified divisive narratives, and said large-scale suspensions had also occurred in prior cycles (described as additional suspensions in the "hundreds of millions").
Attribution, Tradecraft and AI Augmentation
According to X, the largest share of identified networks traced back to actors tied to Russia, with further activity attributed to entities associated with China and Iran. X characterised the activity as coordinated platform manipulation rather than isolated spam bursts, noting automated and semi-automated account-creation patterns. Complementary disclosures from AI firms and platform investigators point to a parallel threat vector: adversaries can combine model extraction, automated tooling, and human operators to produce high-quality tailored content, forged documents, and coordination artifacts that accelerate targeted suppression and amplification campaigns.
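The "automated and semi-automated account creation patterns" X describes are typically surfaced by volumetric analysis. As a minimal sketch only, with entirely hypothetical data, thresholds, and function names (nothing here is X's actual pipeline), sign-ups can be grouped into short time windows and bursts flagged when their handles are suspiciously self-similar:

```python
# Illustrative volumetric signal for coordinated account creation:
# cluster sign-ups into fixed time windows, then check whether the
# handles inside a burst look templated. All values are hypothetical.
from collections import defaultdict
from difflib import SequenceMatcher

def creation_bursts(accounts, window_s=60):
    """Group (handle, created_at_epoch) records into fixed time windows."""
    buckets = defaultdict(list)
    for handle, created_at in accounts:
        buckets[created_at // window_s].append(handle)
    return [b for b in buckets.values() if len(b) > 1]

def avg_handle_similarity(handles):
    """Mean pairwise string similarity; templated names score high."""
    pairs = [(a, b) for i, a in enumerate(handles) for b in handles[i + 1:]]
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

def flag_suspicious(accounts, min_size=5, min_sim=0.8):
    """Flag bursts that are both large and internally self-similar."""
    return [b for b in creation_bursts(accounts)
            if len(b) >= min_size and avg_handle_similarity(b) >= min_sim]

# Nine templated sign-ups inside one minute look coordinated;
# two organic accounts created hours apart do not.
farm = [(f"newsfan_{i:03d}", 1000 + i) for i in range(9)]
organic = [("alice", 5000), ("bob_b", 9000)]
print(flag_suspicious(farm + organic))  # one burst of nine handles
```

Real detection stacks combine many more signals (IP and device telemetry, behavioural timing), but the shape of the volumetric check is the same.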
Reconciling Forensic Differences
Public accounts of these operations differ in their forensic basis: some firms link internal chat logs and platform posts to suppression campaigns, while others cite telemetry aggregates and exchange volumes to allege model-extraction abuse. These findings are complementary, not mutually exclusive: the volumetric network creation behind X's bulk suspensions can coexist with smaller, human-in-the-loop influence operations that use AI outputs to refine messaging and fabricate supporting documents. The differing traces explain divergences in scope and evidence without negating either class of finding.
Scale, Platform Impact and Cross‑Platform Dynamics
To contextualize the removals, external estimates put X's active user base in the low hundreds of millions; 800 million removals therefore exceeds the visible population, which suggests most suspended accounts were caught at or near creation rather than drawn from established users. Removing large networks changes signal-to-noise ratios and content-discovery dynamics, but it does not eliminate lower-volume, high-impact campaigns that can be amplified across platforms. Industry forensics indicate many operations couple automated account farms with thousands of human operators and cross-platform amplification, a hybrid model that is harder to detect by volume alone.
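The point that hybrid operations are "harder to detect by volume alone" can be made concrete with a content-coordination signal. The sketch below is illustrative only (hypothetical posts, thresholds, and shingle size): a handful of lightly reworded copies of one message scores far higher on mean pairwise similarity than the same volume of organic posts:

```python
# Illustrative content-coordination metric: character-shingle Jaccard
# similarity averaged over all pairs of a group's posts. A small,
# low-volume campaign still stands out if its posts are near-duplicates.

def shingles(text, k=4):
    """Character k-gram set for cheap near-duplicate comparison."""
    t = text.lower()
    return {t[i:i + k] for i in range(max(1, len(t) - k + 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def coordination_score(posts):
    """Mean pairwise Jaccard similarity over a group's posts."""
    sets = [shingles(p) for p in posts]
    pairs = [(s, t) for i, s in enumerate(sets) for t in sets[i + 1:]]
    if not pairs:
        return 0.0
    return sum(jaccard(s, t) for s, t in pairs) / len(pairs)

# Five lightly reworded copies of one message vs. five unrelated posts.
campaign = [
    "Breaking: officials hide the real vote totals",
    "BREAKING - officials hide the real vote totals!",
    "officials hide the real vote totals, breaking news",
    "Breaking!! officials hide real vote totals",
    "The officials hide the real vote totals",
]
organic = [
    "lovely weather for cycling today",
    "anyone tried the new ramen place downtown?",
    "my cat knocked the router off the shelf again",
    "halfway through a great sci-fi novel",
    "training for a 10k next month",
]
print(round(coordination_score(campaign), 2),
      round(coordination_score(organic), 2))
```

Both groups post the same volume; only the similarity metric separates them, which is the gap a purely volumetric filter leaves open.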
Why This Matters Now: Policy and Operational Consequences
The disclosure arrives amid growing regulatory pressure to protect civic processes and curb misuse of AI and platform features. The convergence of better bulk-detection tooling, model-extraction risks, and election cycles has prompted calls for cross‑industry telemetry sharing, watermarking of AI outputs, per-account attestation, and legal frameworks to enable faster attribution and coordinated takedowns. Observers should see X’s disclosure as part of a broader ecosystem response: large-volume removals reduce some risks but increase the strategic importance of provenance, inter‑platform cooperation, and targeted detection methods.
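Of the mitigations listed, provenance of AI outputs is the most mechanically concrete. As a hedged illustration only, using a toy shared-secret scheme that is not any platform's or standard's actual design (real proposals lean on public-key signatures and standards such as C2PA), a generator could attach a signed manifest to each output so a verifier can check both the manifest and the content hash:

```python
# Toy provenance attestation: the generator signs a manifest binding a
# model identifier to a hash of the output; a verifier with the key can
# detect both forged manifests and tampered content. Illustrative only.
import hashlib
import hmac
import json

SECRET = b"demo-shared-key"  # hypothetical; real schemes use PKI, not a shared secret

def sign_output(text, model_id):
    manifest = {"model": model_id,
                "sha256": hashlib.sha256(text.encode()).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest, tag

def verify_output(text, manifest, tag):
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # manifest was forged or altered
    return manifest["sha256"] == hashlib.sha256(text.encode()).hexdigest()

text = "generated caption"
manifest, tag = sign_output(text, "model-x")
print(verify_output(text, manifest, tag))        # True: intact
print(verify_output(text + "!", manifest, tag))  # False: content altered
```

The hard parts in practice are key distribution across platforms and survival of the attestation through screenshots and re-encoding, which is why watermarking and telemetry sharing are discussed alongside signatures rather than instead of them.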
Recommended for you

Russia's synthetic-video campaigns accelerate disinformation reach
Russia-linked networks have weaponised inexpensive, hyperreal synthetic videos to amplify anti-Western narratives; the clips' rapid spread has eroded trust in platforms and pushed regulators toward faster takedown and provenance rules. The campaigns reportedly combine OpenAI toolchains, second-tier generative apps, and organised Kremlin-aligned units.

X’s Premium Subscriptions Appeared to Amplify Iranian State Messaging
A new investigation found that several Iranian government and state-affiliated accounts on X had blue verification marks tied to paid subscriptions while domestic internet access was restricted, raising questions about sanctions compliance and platform moderation. The disclosures came as some checks were removed after media scrutiny, spotlighting legal, reputational, and geopolitical risks for X and its owner.

OpenAI: ChatGPT record exposes transnational suppression network
OpenAI released internal records showing a coordinated campaign that used ChatGPT to run harassment and takedown operations against overseas critics. The disclosure links a large actor network, involving hundreds of operators and thousands of fake accounts, to real-world misinformation and platform abuse, sharpening regulatory and security pressures.

X to Rework EU Verification After €120M DSA Penalty
X will overhaul its European verification model after a €120 million sanction from EU regulators under the Digital Services Act, prompting platform governance changes and higher compliance costs. The European Commission will evaluate X’s proposed remedies, a decision likely to reshape verification practices across major social networks.

Japan Government Condemns China-Linked Influence Operation After OpenAI Report
OpenAI notified authorities after tracing in‑app chat records and cross‑platform activity tied to a campaign targeting Japan’s prime minister; Tokyo publicly condemned the China‑linked operation and pressed for immediate countermeasures, sharpening debates over platform disclosure, forensic standards, and the risk that private detection triggers diplomatic fallout.

X Tightens Creator Monetization for Undisclosed AI War Videos
X will suspend creators from revenue sharing for 90 days if they publish AI‑generated armed‑conflict footage without clear disclosure. The platform links disclosure to monetization eligibility and will act on Community Notes flags, metadata signals, and other generative‑AI indicators — but the company has offered few public details about detection thresholds or an appeals process, raising risks of misclassification and calls for transparent provenance standards.

Binance’s on‑chain reserves remain stable as coordinated account-deletion posts stir reputation risk
CryptoQuant’s on‑chain snapshot shows Binance’s Bitcoin reserves holding near 659,000 BTC, undermining social‑media claims of mass withdrawals. Still, a cluster of near‑identical X posts urging account closures — amplified by prominent figures and vendors — exposed how coordinated messaging can create acute reputational and liquidity‑management pressure even absent ledger outflows.

Russia delists WhatsApp from regulator directory, accelerating shift toward state-backed messenger
Russian regulators have removed Meta-owned WhatsApp from the official regulator directory, a move that narrows the app’s official standing and is likely to precede technical restrictions that push users toward the state‑backed MAX service. The step fits a broader pattern of regulator tactics — from throttling to legal reclassification in other markets — that collectively increase compliance burdens and operational risk for Meta.