
Deezer pulls payouts on most fully AI-generated tracks and will sell its detection tech

Spotify credits generative AI for sidelining top engineers’ hands‑on coding since December
Spotify told investors that senior engineers have largely stopped writing routine code since December after deploying an internal generative-AI pipeline (Honk + Claude Code) that generates, tests and surfaces reviewable commits. Management says the system materially accelerated product delivery, but the company — and the industry more broadly — now faces governance, quality-control, workforce and content-moderation challenges as agentic developer tools and platform-level AI detection scale up.
AI-driven content fears trigger a sharp sell-off in media stocks
Worries that rapidly improving AI tools can flood feeds with low-cost audio and video content prompted a steep intraday sell-off across major media and streaming stocks as investors repriced competitive risk. The move fits a broader, theme-driven market rotation, in which algorithmic trading, credit repricing and platform-level moderation challenges amplify sentiment shifts, and underscores uneven exposure across firms depending on content moats and data advantages.
Grammys’ AI eligibility rule leaves U.S. music industry in limbo
The Recording Academy announced eligibility limits for music that relies heavily on algorithmic generation, but the guidance stops short of defining measurable thresholds. The result is widespread uncertainty for creators, enforcement challenges for the Academy, and early industry countermeasures that already penalize or block AI-originated content.

Studios Move to Block Seedance as Hyper‑Real AI Clips Spread
Major U.S. studios have demanded that ByteDance halt public use of Seedance 2.0 after the tool produced photorealistic short videos that replicate recognizable performers and copyrighted scenes. The episode exposes wider platform and moderation strains as cheap generative tools flood feeds, intensifying calls for provenance, clearer disclosure and cross-platform standards.

Major music publishers sue Anthropic, seek $3B+ over alleged mass copyright copying
A coalition led by Concord and Universal alleges Anthropic copied and used more than 20,000 copyrighted musical works to train its Claude models and is seeking in excess of $3 billion, relying in part on discovery from prior litigation to show patterns of bulk acquisition. The filing is part of a broader wave of creator and publisher suits testing how AI builders source training data and could force licensing, provenance controls, or injunctive limits on dataset procurement.

Court Papers Reveal Anthropic Bought, Scanned and Destroyed Millions of Books to Train Its AI — And Tried to Keep It Quiet
Newly unsealed court documents show Anthropic acquired and digitized vast numbers of used books to refine its Claude models, then destroyed the physical copies. The disclosures sit alongside separate, expanding litigation and publisher actions — including a multi‑billion music‑publishing complaint and publisher blocks on the Internet Archive — that together signal a widening backlash over how training data is sourced.
YouTubers Add Snap to Growing Wave of Copyright Suits Over AI Training
A coalition of YouTube creators has filed a proposed class action accusing Snap of using their videos to train AI features without permission, alleging the company relied on research-only video-language datasets and sidestepped platform restrictions. The case seeks statutory damages and an injunction and joins a string of recent suits that collectively threaten how firms source audiovisual training material for commercial AI products.

Spain orders prosecutors to probe X, Meta and TikTok over AI-generated child sexual abuse material
Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social‑media age of 16. The step comes amid parallel EU and national scrutiny of generative‑AI features — notably a formal Brussels inquiry into X’s Grok and recent French judicial actions — signaling growing cross‑border legal pressure on platforms.