
Google DeepMind's Demis Hassabis urges urgent research into AI risks
AI summit pushes safety research to the top of the agenda
At the international AI Impact Summit in New Delhi, Google DeepMind chief executive Demis Hassabis made a pointed appeal for expanded scientific work on the highest‑consequence risks posed by advanced artificial intelligence.
The meeting brought together senior figures from more than a hundred countries alongside major industry executives and researchers, including Sundar Pichai, Sam Altman, Dario Amodei, Yann LeCun and Arthur Mensch, underscoring the commercial as well as the diplomatic stakes.
Speakers identified two practical threat vectors that should guide immediate research priorities: malicious use by hostile actors exploiting improved generative capabilities, and autonomous or agentic systems operating beyond reliable human oversight.
New Delhi used the forum to press for concrete instruments — procurement conditions, compute‑scaling plans, data‑residency requirements and formal safety‑verification regimes — arguing that market access and buying power can steer vendor behaviour.
OpenAI representatives at the summit highlighted the platform’s rapid adoption in India — estimated at roughly 100 million weekly ChatGPT users — and recent pricing and access moves that give regulators leverage when negotiating conditional market terms.
A number of delegates, including Mistral’s chief executive Arthur Mensch, linked technical vulnerability to market structure, warning that concentrated control over core tooling and distribution creates gatekeeping incentives and systemic fragility.
Those commercial and security concerns were reinforced by a recent multinational assessment cited in side sessions, which documented rapid capability gains in coding, mathematics and task automation alongside brittle failures, operational security lapses and real incidents that lower the bar for large‑scale abuse.
The scale of global infrastructure investment — estimated around $1.5 trillion in 2025 and projected to grow substantially — was repeatedly referenced as a driver of concentration that could outpace regulatory responses if left unchecked.
Political divisions surfaced quickly: many delegations sought a coordinated communiqué or shared technical standards, but US representatives signalled resistance to top‑down global governance, preferring lighter‑touch or national approaches to regulation.
Operational outcomes under discussion included mandatory pre‑release testing and adversarial red‑teaming, interoperable evaluation frameworks, mandatory provenance and audit trails for high‑risk systems, and procurement rules that favour auditable, non‑exclusive model access.
Industry responses presented at the summit ranged from proposals for local hosting partnerships and on‑device inference to cryptographic attestation, dataset lineage requirements and audit‑first deployment patterns intended to meet buyer and regulator expectations.
Education and workforce themes ran alongside regulation: speakers argued that technical training and human judgement will remain competitive advantages as automation changes routine work, and that public procurement in education should guard against pedagogical risks.
- Operational outcome expected: a shared communiqué is likely when the summit closes, though its binding power and enforcement mechanisms remain uncertain.
- Coordination gap: US resistance to centralized oversight raises the prospect of fragmented, regional rule sets rather than a single global pact.
- Research and market emphasis: more funding and collaborative projects for adversarial testing, control techniques, and interoperable safety tooling are now likely priorities, alongside procurement‑driven vendor concessions.

