A multinational, expert-compiled assessment published in 2026 synthesizes evidence on the latest capabilities and hazards of advanced general-purpose AI, finding clear and rapid improvements in tasks such as mathematics, coding and autonomous task execution while stressing that these strengths remain inconsistent and brittle in unexpected ways. The assessment links the capability gains to diverse real-world harms: easier production of high-fidelity synthetic media and non-consensual imagery, automated social engineering at scale, and the automation and commodification of cybercrime through off-the-shelf toolkits and agentic workflows.

It also highlights widespread operational security failures observed in deployments, including exposed administrative interfaces, leaked API keys and chat logs, and agent privilege escalation via prompt injection, all of which lower the bar for exploitation and create new high-impact attack surfaces. The report cites concrete incidents that illustrate these pathways: mass exposure of chat transcripts from a misconfigured children's-toy management console, uneven app-store enforcement against sexualized synthetic-image apps, and large-scale AI-generated child-safety reports that, without preserved provenance, risk overwhelming investigative capacity. Crucially, the authors flag biological-risk pathways: in some cases, pre-deployment testing prompted providers to add stronger safeguards after models could not be ruled out as potentially enabling dangerous biological tasks.

The assessment places these technical and operational findings in an economic and governance context: global AI infrastructure spending approached roughly $1.5 trillion in 2025 and is projected to rise markedly, concentrating capability and raising systemic risk, while decentralized alternatives continue to receive only a small fraction of funding. Independent industry and executive surveys complement the report's evidence, showing that buyers broadly trust AI's strategic value even as many executives worry about national control of suppliers; procurement teams increasingly prefer suppliers with shared national ties and demand provenance artifacts (data-flow maps, dataset lineage and attestations) as a precondition for purchase. Vendors are already responding with technical mitigations, among them on-device inference, local compute options, cryptographic attestation and audit-first deployment patterns, aligning commercial supply with the report's recommended priorities.

Recommended responses are practical and operational: more rigorous pre-release and in-situ testing, mandatory security baselines for high-risk devices and agent frameworks, faster vulnerability-disclosure and patch cycles, procurement controls for public agencies, and international cooperation on interoperable evaluation and enforcement. The report's credibility rests on more than 100 independent contributors and an advisory group representing 30+ countries and international organizations, and it was prepared with operational support from a UK-based secretariat to inform upcoming multilateral discussions. Framed as actionable intelligence rather than a single regulatory blueprint, the assessment aims to steer procurement, vendor practice and engineering choices toward resilience; otherwise, it warns, rapid capability gains will outpace safeguards and broaden systemic fragility.
Together with market signals about procurement preferences, these findings lead the authors to conclude that there is a narrow policy window in which to set standards that preserve contestability, protect data subjects and align buyer incentives with demonstrable technical safeguards.
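To make the attestation requirement concrete, here is a minimal sketch of one way a procurement gate could verify a vendor-supplied provenance artifact before purchase: checking a detached Ed25519 signature over a dataset-lineage manifest. The report does not prescribe any particular mechanism; the manifest format, file layout and signing scheme below are illustrative assumptions, implemented with Python's widely used cryptography library.

```python
# Hypothetical procurement-side attestation check (illustrative only; the
# report does not specify a mechanism). Assumes the vendor signs the SHA-256
# digest of a dataset-lineage manifest with an Ed25519 key whose public half
# the buyer obtained out of band.
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def manifest_digest(manifest: Path) -> bytes:
    """SHA-256 digest of the raw manifest bytes (lineage, data-flow map, etc.)."""
    return hashlib.sha256(manifest.read_bytes()).digest()


def attestation_is_valid(manifest: Path, signature: bytes, pubkey_raw: bytes) -> bool:
    """True only if the vendor's signature covers this exact manifest."""
    pubkey = Ed25519PublicKey.from_public_bytes(pubkey_raw)
    try:
        # verify() raises InvalidSignature on any mismatch, so a tampered or
        # regenerated manifest fails closed rather than open.
        pubkey.verify(signature, manifest_digest(manifest))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    import tempfile

    # Self-contained demo: stand in for the vendor by generating a key and
    # signing a toy manifest, then verify it as a buyer would.
    vendor_key = Ed25519PrivateKey.generate()
    with tempfile.TemporaryDirectory() as tmp:
        manifest = Path(tmp) / "lineage_manifest.json"
        manifest.write_bytes(b'{"datasets": ["corpus-v1"], "flows": ["on-prem"]}')
        signature = vendor_key.sign(manifest_digest(manifest))
        pubkey_raw = vendor_key.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw
        )
        print(attestation_is_valid(manifest, signature, pubkey_raw))  # True
        manifest.write_bytes(b'{"datasets": ["corpus-v2"]}')  # tampered copy
        print(attestation_is_valid(manifest, signature, pubkey_raw))  # False
```

Failing closed on any verification error mirrors the audit-first posture the report attributes to responding vendors; a real deployment would add key distribution, revocation and manifest-schema validation on top of a check like this.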
Google DeepMind's Demis Hassabis urges urgent research into AI risks
Demis Hassabis told delegates at the AI Impact Summit in New Delhi that accelerated research into the most consequential AI hazards is urgently needed and called for practical, proportionate regulation. The meeting — attended by more than 100 countries and senior industry figures — exposed sharp divisions over centralized global oversight and highlighted India’s push for enforceable procurement, data‑residency and model‑assurance rules amid concerns about concentrated AI infrastructure.