
Google: Public GCP API Keys Became Gemini Credentials, Exposing Data
Context and Chronology
Researchers at Truffle Security scanned public web archives and discovered that certain Google Cloud keys, long used merely to meter services, were suddenly accepted by the Gemini API as authentication tokens. The change appears to have been introduced when Google rolled out its generative language endpoints, producing an unannounced escalation in key privileges that left some projects readable by anyone who could view site source. During a November scan, Truffle flagged 2,863 live keys across commercial and public-sector accounts, including internal projects belonging to cloud-first enterprises.
Immediate Risk Profile
An exposed key was no longer limited to benign uses such as embedding maps or metering video; it could be leveraged to query or extract uploaded documents, cached conversation context, and other assets stored via the generative endpoint. Attackers able to scrape client-side HTML could both retrieve confidential material and run API calls that consume quota, creating a dual risk of data disclosure and unexpected billing. Truffle reproduced an abuse scenario that mirrored a billing event of roughly $55,444, illustrating the tangible financial harm that can follow credential misuse.
Broader Threat: Model Extraction and Competitive Abuse
Google has separately reported coordinated campaigns that repeatedly query Gemini at scale to collect outputs for training knockoff models. If an adversary combines mass-query tactics with leaked keys, the same exposed credentials can be used not only to harvest customer data but also to issue hundreds of thousands of prompts that produce material useful for cloning the model. Such combined abuse amplifies commercial and legal risk: incumbents face both customer data breaches and erosion of intellectual property value when model outputs are systematically harvested.
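The mass-query pattern described above is detectable server-side: extraction campaigns concentrate an unusual volume of requests on a single key within a short window. A minimal sketch of such a detector follows; the class name, window size, and threshold are illustrative assumptions, not values published by Google or Truffle Security.

```python
from collections import defaultdict, deque

# Hypothetical sliding-window detector: flag API keys whose request
# rate suggests mass-query extraction. Threshold and window values
# here are illustrative only.
class MassQueryDetector:
    def __init__(self, window_seconds=3600, max_requests=1000):
        self.window = window_seconds
        self.max_requests = max_requests
        self._events = defaultdict(deque)  # api_key -> request timestamps

    def record(self, api_key, timestamp):
        """Record one request; return True once the key exceeds the
        allowed request count within the sliding window."""
        q = self._events[api_key]
        q.append(timestamp)
        # Evict timestamps that have fallen out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests

det = MassQueryDetector(window_seconds=60, max_requests=5)
flagged = [det.record("key-under-test", t) for t in range(10)]
# The first five requests stay under the limit; later ones trip it.
```

A production system would feed this from API gateway logs and combine it with per-key quotas, but the core sliding-window logic is the same.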
Response and Remediation Actions
After disclosure, Google restricted the flagged keys from accessing the generative endpoint and acknowledged the behavior as a bug, while continuing to work on a systemic fix beyond the immediate mitigations. Administrators are advised to enumerate any keys tied to the Generative Language API, rotate any public or unrestricted keys, and apply tighter constraints to usage and HTTP referrers. Defenders should also add telemetry to detect mass-query patterns, enforce stricter rate limits, and consider output watermarking or poison-injection strategies to reduce the utility of harvested responses. Google has indicated future controls that default new AI-created keys to Gemini-only scope and will block detected leaked keys, but those measures do not retroactively guarantee safe posture for legacy keys.
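The enumeration step above can be sketched as a simple audit over key metadata. The dictionary shape below loosely follows the Cloud API Keys v2 resource, but the field names should be treated as assumptions; in practice the metadata would be pulled with `gcloud services api-keys list --format=json` or the apikeys.googleapis.com v2 endpoint before being filtered like this.

```python
# Hedged audit sketch: flag keys that can reach the Generative
# Language API, or that carry no API restrictions at all. The
# metadata shape is an assumption modeled on the API Keys v2 resource.
GENLANG_SERVICE = "generativelanguage.googleapis.com"

def flag_risky_keys(keys):
    """Return names of keys that are unrestricted or Gemini-capable."""
    risky = []
    for key in keys:
        restrictions = key.get("restrictions") or {}
        targets = restrictions.get("apiTargets") or []
        services = {t.get("service") for t in targets}
        # No apiTargets at all means the key can call any enabled API.
        if not services or GENLANG_SERVICE in services:
            risky.append(key["name"])
    return risky

keys = [
    {"name": "maps-embed-key",
     "restrictions": {"apiTargets": [{"service": "maps-backend.googleapis.com"}]}},
    {"name": "legacy-unrestricted-key", "restrictions": {}},
    {"name": "gemini-capable-key",
     "restrictions": {"apiTargets": [{"service": GENLANG_SERVICE}]}},
]
findings = flag_risky_keys(keys)
```

Keys surfaced this way are the candidates for rotation and for tightening HTTP referrer and API-target restrictions.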
Strategic Implications
This event underscores a recurring governance gap: cloud primitives designed for public embedding were repurposed without provider-to-developer notification, producing service-level privilege drift across accounts. Enterprises with mixed client-side integrations must now treat previously benign tokens as sensitive credentials in the same class as service account secrets and API tokens. Security teams should incorporate automated scans for public key leakage into their CI/CD pipelines, augment monitoring to spot mass-query extraction attempts, and extend threat models to include generative-model data exfiltration, IP theft, and billing abuse.
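An automated CI/CD leak scan of the kind recommended above can be as simple as a regex pass over committed source. The `AIza` prefix followed by 35 URL-safe characters is the widely documented shape of Google API keys; the scanner below is a minimal sketch, and the file name and surrounding wiring are assumptions.

```python
import re

# Minimal CI scan sketch for leaked Google API keys: match the
# well-known key shape (literal "AIza" prefix + 35 URL-safe chars)
# and report every occurrence in committed text.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_text(path, text):
    """Yield (path, line_number, matched_key) for each candidate key."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in GOOGLE_KEY_RE.finditer(line):
            yield (path, lineno, match.group())

# Hypothetical file content containing a dummy key of the right shape.
sample = 'const key = "AIza' + 'A' * 35 + '";\n'
findings = list(scan_text("app.js", sample))
```

Wiring this into a pre-commit hook or CI job turns key leakage from a post-incident discovery into a blocked merge.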