LiteLLM Was Compromised. Here Is What You Need to Know.

The LiteLLM attack exposed a core weakness in many AI gateways: centralized, plain-text credentials. This post explains why that’s risky and how MCPX limits exposure through isolation and controlled access.

Rony Sinai, Solution Engineer

March 25, 2026

MCPX

AI Security Architecture

On March 24, two malicious versions of LiteLLM hit PyPI. For about three hours, anyone who installed or updated the package pulled down a credential-stealing backdoor.

Here is what happened:

A threat group called TeamPCP (linked to LAPSUS$) compromised Trivy, the popular container vulnerability scanner, through a GitHub Actions misconfiguration. From there, they pivoted to KICS and then to LiteLLM. Because LiteLLM's CI/CD pipeline runs Trivy, the compromised scanner exfiltrated LiteLLM's PyPI publishing token. The attackers used it to push two weaponized package versions. Once installed, the malware:

  • Steals credentials across the entire machine: cloud provider keys, API tokens, SSH keys, database passwords, Kubernetes secrets, Slack and Discord tokens, even crypto wallets.
  • Encrypts and exfiltrates everything to an attacker-controlled domain disguised as legitimate LiteLLM infrastructure.
  • Persists and spreads by installing a hidden backdoor that survives reboots and, in Kubernetes environments, by deploying privileged containers across every node in the cluster.

Now What: Rethinking Your AI Gateway

If you are affected, the immediate steps are clear: uninstall, rotate every credential, and audit your clusters.

But remediation is not the end of the conversation. LiteLLM, by design, concentrates every LLM provider API key, every team's access tokens, and every routing policy into a single process. That concentration is what makes an AI gateway useful. It is also what made this breach so damaging: one compromised dependency exposed every credential the gateway manages, with no isolation layer between the secrets and the runtime.

After you rotate your keys, the question becomes: do you put them back in the same architecture?

How MCPX and Lunar Handle This Differently

We have been building Lunar's AI Gateway and MCPX against exactly this threat model. Here is what is architecturally different and why it matters for this specific attack.

1. Secrets by reference, not by value

In most setups, including LiteLLM's, API keys are passed as plain text in environment variables or config files. Any process with access to that environment can read and steal those keys. That is exactly what the malware did.
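The problem is easy to demonstrate. The sketch below (illustrative, not taken from the LiteLLM malware) shows how any code running in a process, including a compromised transitive dependency, can enumerate environment variables and pick out anything that looks like a credential:

```python
import os

# Markers are an illustrative heuristic; real stealers cast a far wider net.
SUSPECT_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def harvestable_credentials() -> dict[str, str]:
    """Return env vars whose names look like plain-text credentials."""
    return {
        name: value
        for name, value in os.environ.items()
        if any(marker in name.upper() for marker in SUSPECT_MARKERS)
    }

# Simulated setup: a key placed in the environment, as most gateways require.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"
print("OPENAI_API_KEY" in harvestable_credentials())  # → True
```

No privilege escalation is needed: the secret and the code share one process, so the secret is simply there for the taking.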

MCPX passes secrets by reference only. The actual secret value never travels through the data flow. A tool or end user receives a reference ID; the underlying credential is resolved server-side and never exposed to the caller.

MCPX integrates natively with a secrets manager. Secrets never need to live in your codebase, config files, or environment variables.

Impact: Even if the malware compromised a tool or intercepted a request, it would retrieve only a reference ID, never a usable credential.
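A minimal sketch of the pass-by-reference model (the names and the `secret-ref://` scheme are illustrative assumptions, not MCPX's actual API). The caller only ever holds an opaque reference; resolution happens on the server side:

```python
import secrets

class SecretStore:
    """Server-side store mapping opaque reference IDs to secret values.
    Illustrative sketch only -- not MCPX's actual implementation."""

    def __init__(self) -> None:
        self._values: dict[str, str] = {}

    def register(self, value: str) -> str:
        ref = f"secret-ref://{secrets.token_hex(8)}"
        self._values[ref] = value
        return ref  # only the opaque reference ever leaves the store

    def resolve(self, ref: str) -> str:
        # Called by the gateway when making the upstream request,
        # never by the end user or tool.
        return self._values[ref]

store = SecretStore()
ref = store.register("sk-live-demo")   # admin registers the real key once
print(ref.startswith("secret-ref://"))  # tools only ever see this → True
```

Because the reference carries no information about the value, intercepting it yields nothing an attacker can replay against the provider.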

2. Plain text data is filtered and labeled

The malware harvested .env files, config files, and any plain text data it could find. That is only possible when sensitive data flows through a system unguarded.

MCPX enforces admin-defined filtering and labeling on data flows. The admin controls what data is visible, how it is classified, and what passes through. Sensitive plain text is filtered before it reaches the end-user context.

Impact: Even if data in transit is intercepted, it has already been sanitized and labeled; attackers cannot extract usable plain-text credentials from the stream.
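Conceptually, this works like a redaction pass over the data flow. The rules below are illustrative assumptions (not MCPX's actual rule set); the point is that classification and redaction happen before anything reaches the end-user context:

```python
import re

# Admin-defined rules: a label plus a pattern for each sensitive type.
# Patterns here are simplified examples, not a complete rule set.
RULES = [
    ("api_key", re.compile(r"sk-[A-Za-z0-9]{16,}")),
    ("aws_key", re.compile(r"AKIA[0-9A-Z]{16}")),
]

def filter_and_label(text: str) -> tuple[str, list[str]]:
    """Redact matches and return the sanitized text plus applied labels."""
    labels = []
    for label, pattern in RULES:
        if pattern.search(text):
            labels.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, labels

sanitized, labels = filter_and_label("auth: sk-abcdef1234567890XYZ")
print(sanitized)  # → auth: [REDACTED:api_key]
```

The labels also give admins an audit trail: they can see that a credential attempted to transit the flow, without the credential itself ever being stored or forwarded.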

3. End users never see admin-managed secrets

The LiteLLM breach was so damaging because the process running LiteLLM had direct access to all credentials in the environment. One compromised dependency meant full credential exposure.

In MCPX, secrets are managed exclusively at the admin layer. End users interact with the system without ever seeing, touching, or having access to the underlying credentials.

Impact: A compromised end-user session or tool cannot escalate to credential access. The blast radius is strictly contained.
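The separation can be pictured as two layers where the user-facing surface simply has no code path that returns a credential. This is a hypothetical sketch (names are illustrative, and Python name mangling is used only as a visual aid, not as a real security boundary):

```python
class AdminLayer:
    """Holds credentials; only exposes the ability to *use* them."""

    def __init__(self) -> None:
        self.__keys = {"openai": "sk-live-demo"}  # admin layer only

    def invoke(self, provider: str, prompt: str) -> str:
        _key = self.__keys[provider]  # attached to the upstream call here,
        return f"{provider}:{prompt}"  # never returned to the caller

class UserSession:
    """Everything an end user (or a compromised tool) can reach."""

    def __init__(self, admin: AdminLayer) -> None:
        self._invoke = admin.invoke  # a capability to call, not to read

    def ask(self, provider: str, prompt: str) -> str:
        return self._invoke(provider, prompt)

session = UserSession(AdminLayer())
print(session.ask("openai", "hi"))  # → openai:hi
```

Compromising `session` yields the ability to make calls the admin already authorized, nothing more; the keys never enter the user-facing object at all.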

4. Key rotation is instant and centralized

One of the most painful realities of the LiteLLM breach is remediation. Every affected team has to hunt down and rotate credentials scattered across machines, .env files, CI/CD pipelines, and developer laptops. Individually. Manually. With no guarantee that nothing was missed.

In MCPX, all credentials are managed in one place: the admin panel. Replacing an API key is a single operation that propagates immediately to all end users. No chasing down environments. No coordinating with individual developers.

Impact: Credential rotation goes from a multi-day firefighting exercise to a single admin action. Response time shrinks from days to minutes.
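Rotation is cheap precisely because of the pass-by-reference model: callers keep the same reference ID, and only the admin-side mapping changes. A minimal sketch (illustrative names, not MCPX's actual API):

```python
class CredentialRegistry:
    """Central registry: one admin action rotates a key for all callers."""

    def __init__(self) -> None:
        self._by_ref: dict[str, str] = {}

    def issue(self, ref: str, value: str) -> None:
        self._by_ref[ref] = value

    def rotate(self, ref: str, new_value: str) -> None:
        # Single operation; every caller's next resolve sees the new key.
        self._by_ref[ref] = new_value

    def resolve(self, ref: str) -> str:
        return self._by_ref[ref]

reg = CredentialRegistry()
reg.issue("secret-ref://openai", "sk-old")
reg.rotate("secret-ref://openai", "sk-new")  # the one admin action
print(reg.resolve("secret-ref://openai"))    # → sk-new
```

Nothing held by end users or tools has to change, so there is no environment-by-environment hunt and no window where stale copies of the old key linger in config files.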

At a glance: why architecture matters

Here’s how the two approaches compare under the exact conditions of this attack:

| Risk Factor | LiteLLM (Breached) | MCPX |
| --- | --- | --- |
| Secrets stored as plain text | ✅ Yes | ❌ No, passed by reference |
| Plain text data exposed in transit | ✅ Yes | ❌ No, filtered and labeled |
| Single compromised tool = full credential access | ✅ Yes | ❌ No, blast radius is contained |
| Credential rotation requires touching every environment | ✅ Yes | ❌ No, one admin action with instant propagation |

What Comes Next

TeamPCP is not done. Trivy, KICS, OpenVSX extensions, and now LiteLLM. The group consistently targets widely deployed tools to pivot into higher-value downstream targets. AI infrastructure, with its concentration of API keys and cloud credentials, is the obvious next frontier.

As agents get more capable and more deeply integrated through MCP, the attack surface grows. Every tool connection, every credential, every agent action is a vector.

The teams that build governance and security into their AI infrastructure now will be the ones that scale with confidence.

Learn more

If you want to go deeper into the challenges around MCP, tool management, and secure AI infrastructure, these are great follow-ups:

Ready to start your journey?

Govern all agentic traffic in real time with enterprise-grade security and control. Deploy safely on-prem, in your VPC, or hybrid cloud.