LocalLLaMA warns of compromised LiteLLM PyPI releases that ran before import

Original Reddit post title: "Litellm 1.82.7 and 1.82.8 on PyPI are compromised, do not update!"

LLM · Mar 25, 2026 · By Insights AI (Reddit) · 2 min read

A March 24, 2026 LocalLLaMA alert pushed a serious Python supply-chain incident into the open: LiteLLM versions 1.82.7 and 1.82.8 published on PyPI were reported as compromised, with a malicious .pth file that executed automatically when Python started. That detail is what made the warning unusually urgent. Installing the affected wheel could be enough to trigger code execution before an application ever imported LiteLLM.

The clearest technical description comes from the public GitHub issue and FutureSearch's incident write-up. According to those reports, the poisoned wheel dropped a file named litellm_init.pth, which launched a credential-stealing payload on interpreter startup. The payload harvested SSH keys, cloud credentials, .env files, and Git and Docker configs, and attempted to exfiltrate them to models.litellm.cloud. The reporting also described attempts to access Kubernetes credentials and cluster secrets, which widens the blast radius considerably on developer workstations and in CI environments.

  • FutureSearch's timeline says version 1.82.8 was published at 10:52 UTC on March 24, 2026, and later updates added 1.82.7 to the affected set.
  • The attack path mattered because .pth files execute on Python startup, so no import litellm statement was required.
  • FutureSearch later said the compromised versions were yanked from PyPI and the quarantine was lifted as the incident response progressed.
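The startup mechanism is worth understanding: Python's site.py executes any line in a .pth file that begins with `import` when the interpreter starts, which is why no `import litellm` was needed. A minimal audit sketch along those lines (the file name litellm_init.pth comes from the public reporting; everything else here is a generic example, not the attacker's actual code):

```python
"""List every executable line in the .pth files on this interpreter's
site-packages paths. site.py runs lines that start with "import " at
startup, so those lines are where startup code execution can hide."""
import site
from pathlib import Path


def audit_pth_files():
    findings = []
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for directory in dirs:
        root = Path(directory)
        if not root.is_dir():
            continue
        for pth in root.glob("*.pth"):
            text = pth.read_text(errors="replace")
            for lineno, line in enumerate(text.splitlines(), 1):
                # site.py executes these lines; ordinary .pth lines
                # are just paths to append to sys.path.
                if line.strip().startswith("import "):
                    findings.append((str(pth), lineno, line.strip()))
    return findings


if __name__ == "__main__":
    for path, lineno, line in audit_pth_files():
        print(f"{path}:{lineno}: {line}")
```

Legitimate packages (e.g. editable installs) also use `import` lines in .pth files, so any hit needs manual review rather than automatic deletion.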

The Reddit thread mattered because LiteLLM is widely used as glue inside agent stacks, proxy servers, and LLM-routing layers. A compromise here is not just another obscure package incident. It sits on a tool that many AI teams already place in privileged environments, sometimes with access to keys, model credentials, and infrastructure metadata.

The practical takeaway is narrower than the initial panic suggested, but still serious. The public reporting called out versions 1.82.7 and 1.82.8 specifically, not the project's entire release history. Still, any team that installed those builds should treat the environment as potentially exposed: rotate every secret that was present on the host and review downstream systems that may have inherited those credentials.
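A first triage step is simply checking whether an environment has one of the named builds installed. A small sketch using the standard library (the affected-version set is taken from this article's reporting; a match means "rotate secrets and investigate", not that a clean result proves safety):

```python
"""Report whether the installed litellm distribution matches one of
the versions called out in the public reporting."""
from importlib import metadata

# Versions named in the GitHub issue and FutureSearch write-up.
AFFECTED = {"1.82.7", "1.82.8"}


def litellm_status():
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "not installed"
    if installed in AFFECTED:
        return f"AFFECTED ({installed}) - treat host as exposed"
    return f"ok ({installed})"


if __name__ == "__main__":
    print(litellm_status())
```

Running this in every virtualenv and CI image is cheap; checking pip caches and lockfiles for the pinned versions covers environments that were since rebuilt.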

Primary sources: BerriAI GitHub issue and FutureSearch incident write-up. Community source: LocalLLaMA discussion.




© 2026 Insights. All rights reserved.