
Vertex AI Vulnerability Exposes Google Cloud Data and Private Artifacts - The Hacker News

The Hacker News Archived May 05, 2026 ✓ Full text saved

By Ravie Lakshmanan · Mar 31, 2026 · Cloud Security / AI Security

Cybersecurity researchers have disclosed a security "blind spot" in Google Cloud's Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by an attacker to gain unauthorized access to sensitive data and compromise an organization's cloud environment.

According to Palo Alto Networks Unit 42, the issue relates to how the Vertex AI permission model can be misused by taking advantage of the service agent's excessive default permission scoping.

"A misconfigured or compromised agent can become a 'double agent' that appears to serve its intended purpose, while secretly exfiltrating sensitive data, compromising infrastructure, and creating backdoors into an organization's most critical systems," Unit 42 researcher Ofir Shaty said in a report shared with The Hacker News.

Specifically, the cybersecurity company found that the Per-Project, Per-Product Service Agent (P4SA) associated with a deployed AI agent built using Vertex AI's Agent Development Kit (ADK) had excessive permissions granted by default. This opened the door to a scenario where the P4SA's default permissions could be used to extract the credentials of a service agent and conduct actions on its behalf.

After the Vertex agent is deployed via Agent Engine, any call to the agent invokes Google's metadata service and exposes the credentials of the service agent, along with the Google Cloud Platform (GCP) project that hosts the AI agent, the identity of the AI agent, and the scopes of the machine that hosts it.

Unit 42 said it was able to use the stolen credentials to jump from the AI agent's execution context into the customer project, effectively undermining isolation guarantees and permitting unrestricted read access to the data in all Google Cloud Storage buckets within that project.
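The credential-exposure step described above can be sketched in a few lines. This is a minimal illustration, not code from Unit 42's report: the endpoint and `Metadata-Flavor` header are the documented GCP metadata-server conventions, and the sketch simply shows what any code running inside the agent's container (including attacker-influenced agent logic) can ask for.

```python
import urllib.request

# Standard GCP metadata server, reachable from inside the managed
# container that hosts the deployed agent.
METADATA_HOST = "http://metadata.google.internal"
TOKEN_PATH = "/computeMetadata/v1/instance/service-accounts/default/token"

def metadata_request(path: str) -> urllib.request.Request:
    # Every metadata call must carry this header or the server rejects it.
    return urllib.request.Request(
        METADATA_HOST + path,
        headers={"Metadata-Flavor": "Google"},
    )

def fetch_service_agent_token() -> bytes:
    # Inside a deployed agent this returns JSON along the lines of
    # {"access_token": "...", "expires_in": 3599, "token_type": "Bearer"}
    # -- i.e. the service agent's bearer token, which is not bound to the
    # container it was issued in and can be replayed from anywhere.
    with urllib.request.urlopen(metadata_request(TOKEN_PATH), timeout=5) as resp:
        return resp.read()
```

Nothing here is an exploit in itself; the point Unit 42 makes is that the token handed back is scoped far too broadly for what the agent actually needs.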
"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat," it added.

That's not all. Because the deployed Vertex AI Agent Engine runs within a Google-managed tenant project, the extracted credentials also revealed the Google Cloud Storage buckets within that tenant, offering more details about the platform's internal infrastructure. The credentials, however, lacked the permissions needed to read the contents of those exposed buckets.

To make matters worse, the same P4SA service agent credentials also enabled access to restricted, Google-owned Artifact Registry repositories that were revealed during the deployment of the Agent Engine. An attacker could leverage this behavior to download container images from private repositories that constitute the core of the Vertex AI Reasoning Engine.

What's more, the compromised P4SA credentials not only made it possible to download images that were listed in logs during the Agent Engine deployment, but also exposed the contents of Artifact Registry repositories, including several other restricted images.

"Gaining access to this proprietary code not only exposes Google's intellectual property, but also provides an attacker with a blueprint to find further vulnerabilities," Unit 42 explained. "The misconfigured Artifact Registry highlights a further flaw in access control management for critical infrastructure. An attacker could potentially leverage this unintended visibility to map Google's internal software supply chain, identify deprecated or vulnerable images, and plan further attacks."

Google has since updated its official documentation to clearly spell out how Vertex AI uses resources, accounts, and agents.
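The "jump from the agent's execution context" amounts to replaying the leaked bearer token against ordinary Google Cloud REST APIs. A hedged sketch against the public Cloud Storage JSON API (the endpoint is from Google's API reference; the project name and token are placeholders, and this is an illustration of the access pattern, not the researchers' actual tooling):

```python
import json
import urllib.request

# Public Cloud Storage JSON API (see Google's API reference).
GCS_API = "https://storage.googleapis.com/storage/v1"

def list_buckets_request(project: str, access_token: str) -> urllib.request.Request:
    # A bearer token lifted from the metadata server is all the API needs;
    # nothing ties it to the agent container it was issued in.
    return urllib.request.Request(
        f"{GCS_API}/b?project={project}",
        headers={"Authorization": f"Bearer {access_token}"},
    )

def list_buckets(project: str, access_token: str) -> list[str]:
    # With the P4SA's default permissions, this enumerates every bucket
    # in the customer project -- the isolation failure Unit 42 describes.
    with urllib.request.urlopen(list_buckets_request(project, access_token)) as resp:
        payload = json.load(resp)
    return [b["name"] for b in payload.get("items", [])]
```

The same token can authenticate to other services the P4SA's roles cover, which is how the Artifact Registry image pulls described above become possible.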
The tech giant has also recommended that customers use Bring Your Own Service Account (BYOSA) to replace the default service agent and enforce the principle of least privilege (PoLP), ensuring the agent has only the permissions it needs to perform the task at hand.

"Granting agents broad permissions by default violates the principle of least privilege and is a dangerous security flaw by design," Shaty said. "Organizations should treat AI agent deployment with the same rigor as new production code. Validate permission boundaries, restrict OAuth scopes to least privilege, review source integrity, and conduct controlled security testing before production rollout."
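In practice, the BYOSA guidance boils down to creating a dedicated service account, granting it only the narrow roles the agent needs, and attaching it at deployment time instead of accepting the default P4SA. A minimal configuration sketch with standard gcloud commands; the project, account, and bucket names are placeholders, and the exact deploy-time attachment option should be taken from Google's Vertex AI documentation:

```shell
# Dedicated identity for one agent -- nothing else shares it.
gcloud iam service-accounts create my-agent-sa \
    --project=my-project \
    --display-name="Vertex AI agent (least privilege)"

# Grant only what this agent actually needs: read-only access to a single
# bucket, instead of the broad project-wide scopes the default P4SA carries.
gcloud storage buckets add-iam-policy-binding gs://my-agent-data \
    --member="serviceAccount:my-agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```

With a setup like this, a token leaked through the metadata service is only as powerful as the roles explicitly granted above, which sharply limits the blast radius Unit 42 describes.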
    Article Info
    Source
    The Hacker News
    Category
    ◬ AI & Machine Learning
    Published
    May 05, 2026
    Archived
    May 05, 2026
    Full Text
    ✓ Saved locally