
Google DeepMind restricts Antigravity access, cutting OpenClaw integrations
Antigravity access restricted — ecosystem consequences
On February 23, Google DeepMind moved to disable third-party integrations that routed requests to its Antigravity runtime after detecting large-scale, abusive traffic patterns that degraded service for paying customers.
The cut affected developers linking the open-source agent OpenClaw to Antigravity and, in some cases, caused temporary account lockouts for individuals who had connected those agents to core Google accounts.
Google framed the intervention as capacity and misuse control — throttling a specific access pathway that multiplied requests for Gemini tokens through intermediary platforms — rather than a blanket ban on external integrations.
The timing also carries competitive weight: weeks after OpenClaw’s lead developer joined a rival lab, this enforcement severs a convenient bridge between an OpenAI-adjacent toolset and Google’s frontier models.
Developers and users reported disruption to workflows that relied on agentic orchestration, demonstrating how quickly a provider-side policy decision can ripple into identity and productivity friction.
Technically, the episode exposes tension between open-agent extensibility and a provider’s need to protect runtime quality for paid tiers, including high-cost subscriptions such as the $250/month Ultra level.
This is not isolated: other model vendors have introduced fingerprinting and rate controls to block third-party wrappers, signaling a broader industry move toward controlled access enclaves.
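The "fingerprinting and rate controls" pattern described above can be sketched provider-side as a per-client token bucket combined with a crude client-identifier check. The wrapper signatures and thresholds below are illustrative assumptions, not any vendor's actual mechanism:

```python
import time

# Illustrative wrapper signatures a provider might flag; purely hypothetical.
SUSPECT_AGENT_PATTERNS = ("openclaw", "agent-bridge")

class TokenBucket:
    """Classic token-bucket limiter: `capacity` burst tokens,
    refilled at `refill_rate` tokens per second."""
    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity            # start with a full burst budget
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

def gate_request(user_agent: str, bucket: TokenBucket) -> str:
    """Return 'blocked' for fingerprinted wrapper traffic, otherwise
    'ok' or 'throttled' depending on the caller's remaining budget."""
    if any(p in user_agent.lower() for p in SUSPECT_AGENT_PATTERNS):
        return "blocked"
    return "ok" if bucket.allow() else "throttled"
```

A real deployment would key buckets per account or API key and fingerprint on richer signals (TLS characteristics, header ordering, request cadence) rather than a simple client-identifier substring.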
For enterprises, the incident highlights two trade-offs: lower friction when using managed model stacks versus the operational risk of third-party agent dependencies tied to core identity providers.
OpenClaw’s maintainers signaled a formal removal of Google-specific support, increasing the odds that users will migrate to either vendor-aligned agent platforms or fully self-hosted stacks.
Operationally, teams that baked Antigravity into business logic now face remediation choices: re-architect to run agents inside VPCs, negotiate direct API contracts, or accept intermittent access constraints.
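For teams choosing to accept intermittent access constraints, one mitigation is a client-side failover shim that distinguishes a policy revocation (stop immediately, fail over) from a transient fault (retry with backoff). The function and exception names here are a hypothetical sketch, not any vendor's API:

```python
import time

class ProviderRevokedError(Exception):
    """Hypothetical signal that the provider has refused this access pathway."""

def call_with_fallback(primary, fallback, prompt: str,
                       retries: int = 2, base_delay: float = 0.0) -> str:
    """Try the primary model endpoint; on revocation, fail over
    to a self-hosted or directly contracted backend."""
    for attempt in range(retries):
        try:
            return primary(prompt)
        except ProviderRevokedError:
            break                                     # policy change: don't retry
        except ConnectionError:
            time.sleep(base_delay * (2 ** attempt))   # transient: back off, retry
    return fallback(prompt)
```

The design choice worth noting: retrying a revoked pathway only adds to the abusive traffic patterns providers are screening for, so a revocation signal should short-circuit straight to the fallback.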
Strategically, the event crystallizes a shift toward vertically integrated agent ecosystems where telemetry, subscription revenue, and guardrails are centrally controlled by model owners.
In the short term, expect tightened ToS enforcement, more restrictive token issuance, and a wave of orchestration vendors pitching governed, OpenClaw-compatible gateways for enterprises.
In the mid term, organizations must weigh higher API and hosting costs against the fragility of relying on third-party agent wrappers connected to identity-critical services.