The LiteLLM Supply Chain Attack: When Trivy Was Just the Beginning

Miguel Martinez
How attackers used the Trivy compromise to poison LiteLLM on PyPI, what you should do if you use LiteLLM or its dependents, and how to defend against credential-based supply chain attacks.

Last week we wrote about the Trivy supply chain attack. And I’m afraid we’re just getting started.

On March 24, attackers used the Trivy compromise to poison LiteLLM, the Python library that half the AI ecosystem uses to talk to language models. Two poisoned versions lived on PyPI for about three hours. For a package downloaded 3.4 million times a day, three hours is a lot. If you’ve built anything with DSPy, CrewAI, OpenHands, MLflow, or dozens of other AI frameworks, there’s a good chance LiteLLM is somewhere in your dependency tree.

The techniques in this attack aren’t new. Neither are the defenses. But supply chain security is genuinely hard, and the ecosystem is moving faster than ever. That makes it worth revisiting the fundamentals.

What Happened

This is the latest step in a month-long campaign by the TeamPCP attacker group across five ecosystems: GitHub Actions, Docker Hub, npm, OpenVSX, and PyPI. All of it stems from a single incomplete incident response back in February.

LiteLLM’s CI/CD pipeline ran Trivy as a security scanner. When Trivy got compromised on March 19, the attacker’s code ran inside LiteLLM’s pipeline and grabbed the PYPI_PUBLISH token from the GitHub Actions runner environment. Five days later, the attacker used it.

At 10:39 UTC on March 24, litellm==1.82.7 appeared on PyPI. Thirteen minutes later, 1.82.8 followed. Both carried a three-stage payload:

  1. Harvest everything. SSH keys, .env files, AWS/GCP/Azure credentials, Kubernetes secrets, Docker registry credentials, crypto wallets.
  2. Encrypt and exfiltrate. AES-256 + RSA-4096, sent to models.litellm.cloud, a domain registered the day before.
  3. Persist and spread. A systemd backdoor polling every 5 minutes. If the script found a Kubernetes service account token, it read every secret across every namespace and deployed privileged pods to every node.

The second version (1.82.8) dropped a .pth file, a Python startup hook that fires every time any Python process starts. Not when you import LiteLLM. Every. Single. Time. Including when pip runs.
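If you want to audit your own machines for this class of persistence, a rough sketch: .pth files live in site-packages, and any line in them that starts with `import` is executed at every interpreter startup. This check (paths and wording are mine, not from the incident report) surfaces the code-executing ones for review:

```shell
# Flag .pth files that execute code at interpreter startup -- the
# mechanism litellm 1.82.8 abused. Legitimate tools (setuptools,
# distutils shims) also use this, so review the output by hand.
python3 - <<'PY'
import glob, os, site

dirs = site.getsitepackages() + [site.getusersitepackages()]
for d in dirs:
    for p in glob.glob(os.path.join(d, "*.pth")):
        with open(p) as f:
            # Only lines beginning with "import" are executed; plain
            # path entries are harmless.
            if any(line.startswith("import") for line in f):
                print("code-executing .pth:", p)
PY
```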

That’s what got it caught. The .pth hook spawns a subprocess, which triggers the hook again, creating an accidental fork bomb. A developer at FutureSearch noticed his laptop eating all its RAM, dug into it, and sounded the alarm. The attacker used the compromised maintainer’s GitHub account to close the issue and flood it with bot comments. The community opened a new tracking issue and took the discussion to Hacker News.

PyPI quarantined the package about three hours after the first malicious publish.

What You Should Do Right Now

If you use Trivy or Checkmarx KICS in your CI/CD

You might be in the same position LiteLLM was. Both Trivy and Checkmarx KICS were compromised as part of this campaign. The attacker used these tools to steal credentials from CI/CD runners. LiteLLM is just the consequence we know about so far.

  • Check for imposter commits in your repository. Look for unsigned commits where you’d expect signatures, or commits outside your normal PR flow.
  • Audit your CI/CD secrets. If Trivy or KICS ran in your pipeline during the compromised windows (Trivy: March 19-21, KICS: March 23), assume every secret accessible to that runner was exposed. Rotate them.
  • Pin to known-good SHAs, not version tags. We covered this in our Trivy post.
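A quick way to spot mutable action references is to scan your workflow files for `uses:` lines that aren’t followed by a full 40-character commit SHA. This sketch assumes the standard `.github/workflows` layout; adjust the path for your repo:

```shell
# Flag GitHub Actions steps referenced by mutable tag (v1, master, ...)
# rather than a pinned commit SHA. Tags can be repointed by whoever
# controls them; a full 40-hex-char SHA cannot.
workflows=".github/workflows"
grep -rhn 'uses:' "$workflows" 2>/dev/null |
  grep -vE '@[0-9a-f]{40}' || echo "no unpinned actions found"
```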

If you use LiteLLM in your projects

You might not use LiteLLM directly, but it could be in your dependency tree. Projects like DSPy, CrewAI, OpenHands, MLflow, langwatch, and others pull it in. Check whether it’s installed:

# Check whether litellm is installed, and which version
pip show litellm 2>/dev/null | grep -E '^(Name|Version):'

# Check which installed packages pulled it in
pip show litellm 2>/dev/null | grep '^Required-by:'

If you find any version of LiteLLM in your environment, rotate your credentials. If you’re specifically on 1.82.7 or 1.82.8, treat that environment as fully compromised. The payload runs at install time, not at application startup.
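As a one-shot triage script (messages are mine; the version numbers are the two poisoned builds named above):

```shell
# Quick triage: is the installed litellm one of the two poisoned builds?
ver=$(python3 -m pip show litellm 2>/dev/null | awk '/^Version:/{print $2}')
case "$ver" in
  1.82.7|1.82.8) echo "POISONED BUILD INSTALLED: $ver -- treat this environment as fully compromised" ;;
  "")            echo "litellm is not installed in this environment" ;;
  *)             echo "litellm $ver installed -- not a known-bad build, but rotate credentials anyway" ;;
esac
```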

  • Rotate everything. SSH keys, cloud credentials, API keys, Docker registry credentials, Kubernetes service account tokens, database passwords.
  • Check for persistence artifacts:
ls -la ~/.config/sysmon/sysmon.py 2>/dev/null && echo "BACKDOOR FOUND"
systemctl --user status sysmon.service 2>/dev/null
ls /tmp/tpcp.tar.gz /tmp/session.key /tmp/payload.enc 2>/dev/null && echo "EXFIL ARTIFACTS FOUND"
  • If you’re running Kubernetes, check for pods named node-setup-* in kube-system.
  • Reinstall on a fresh environment. Pin to litellm<=1.82.6.
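For the Kubernetes check, a sketch (requires cluster access; the pod-name pattern is the one reported above):

```shell
# Look for the payload's privileged pods in kube-system.
suspicious=$(kubectl get pods -n kube-system -o name 2>/dev/null | grep 'node-setup-' || true)
if [ -n "$suspicious" ]; then
  echo "SUSPICIOUS PODS FOUND:"
  echo "$suspicious"
else
  echo "no node-setup-* pods in kube-system (or no cluster access)"
fi
```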

What Could Have Helped

This wasn’t a typosquatting attack. The attacker had legitimate publishing credentials. That changes the calculus.

Pin your dependencies

Users of LiteLLM’s own Docker image were unaffected: its requirements.txt pinned exact versions, so builds never pulled the poisoned release. Aider was also confirmed safe because it pinned to litellm==1.82.3.

And here’s a fun fact about Python: there is no package-lock.json. Vanilla pip with requirements.txt gives you a list of names and versions. No integrity hashes, no lock file. You need poetry, pipenv, uv, or pip-compile to get lock files with hashes. Even then, hashes wouldn’t have caught this. The malicious package was published with legitimate credentials, so PyPI’s hashes matched perfectly. Hash verification tells you “you got what PyPI advertised,” not “what PyPI advertised is safe.”
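Hash pinning is still worth having as a baseline, even though it only proves delivery integrity. For reference, here’s what the workflow looks like with plain pip (the wheel below is a throwaway placeholder I made up, not a real package):

```shell
# Compute the digest you would pin next to a requirement line.
printf 'placeholder wheel contents\n' > demo-0.1-py3-none-any.whl
python3 -m pip hash demo-0.1-py3-none-any.whl

# Once requirements.txt carries --hash entries (pip-compile
# --generate-hashes from pip-tools automates this), enforce them:
#   pip install --require-hashes -r requirements.txt
```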

Stop using static secrets

The LiteLLM attacker stole a PyPI publish token from a CI runner. That token was a static secret. It didn’t expire, wasn’t scoped to a single run, and was sitting in the environment for any process to read.

  • Use OIDC tokens instead of static credentials. GitHub Actions, GitLab CI, and most major CI platforms support OpenID Connect identity federation. For PyPI specifically, Trusted Publishers lets you publish without storing any token at all. The token is scoped to a single workflow run and expires in minutes.
  • Scope your pipeline secrets. Your security scanner doesn’t need your publish token. Each step should see only the secrets it needs.
  • Sandbox your tools. Run third-party tools in containers or sandboxed environments. They don’t need access to ~/.ssh or ~/.aws.
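For GitHub Actions publishing to PyPI, the token-free setup looks roughly like this (a sketch, not a drop-in file: the workflow layout and SHA placeholders are illustrative, and you must first register the Trusted Publisher on PyPI for this repo and workflow):

```yaml
# .github/workflows/release.yml -- sketch only
permissions:
  id-token: write        # lets the job mint a short-lived OIDC token
jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release  # optional: gate publishes behind an environment
    steps:
      - uses: actions/download-artifact@<pinned-sha>   # placeholder SHA
        with:
          name: dist
          path: dist/
      - uses: pypa/gh-action-pypi-publish@<pinned-sha> # placeholder SHA
        # No password/token input: PyPI validates the OIDC token against
        # the Trusted Publisher configured for this repo and workflow.
```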

Lock down Kubernetes

If the payload found a service account token (mounted by default), it read all secrets cluster-wide and deployed privileged pods to every node.

  • Use least-privilege service accounts. A CI runner doesn’t need cluster-admin.
  • Disable automatic token mounting (automountServiceAccountToken: false) for pods that don’t need K8s API access.
  • Enforce Pod Security Standards. Prevent privileged containers and host filesystem mounts.
  • Restrict RBAC for secrets to the namespace and specific secrets each workload needs.
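The token-mounting fix is one line in the pod spec. A minimal sketch (names and image are hypothetical):

```yaml
# A workload that never talks to the Kubernetes API should not carry
# a token that can. Names below are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: ci-runner        # hypothetical workload
spec:
  automountServiceAccountToken: false   # no API token on the filesystem
  containers:
    - name: runner
      image: registry.example.com/ci-runner:latest   # placeholder image
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
```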

Closing

I don’t want to be alarmist, but we’re probably just seeing the tip of the iceberg. The compromised Trivy and Checkmarx KICS actions ran inside CI/CD pipelines across the ecosystem. Every runner they touched was a potential credential harvest. LiteLLM is what surfaced. What about the pipelines nobody has audited yet? The tokens that were exfiltrated but haven’t been used? Five ecosystems in a month. There’s no reason to think they’ve stopped.

This isn’t a new problem. But it’s one that demands renewed attention. We’re shipping faster than ever. A whole generation of developers is entering the field through AI-assisted tooling, and that’s genuinely exciting. But speed without guardrails is how we got here. The ecosystem owes these developers secure defaults, not just documentation. Platforms need to make the safe path the obvious path. And those of us who’ve been through enough supply chain incidents to lose sleep over them? We need to step up. Share what we know. Build the guardrails. Make supply chain security a first-class concern, not a post-mortem talking point.

At Chainloop, we’re doing our part: keyless attestations to kill static secrets, commit and signature verification, compliance gates on releases. We covered the details in our Trivy post. But we know we can do more. And so can the rest of the industry. If you’re working on this problem too, let’s talk.
