CVE-2026-1839 flags unsafe checkpoint loading in Hugging Face Transformers Trainer

Original post: "CVE-2026-1839 Arbitrary Code Execution in HuggingFace Transformers Trainer Class via Malicious Checkpoint" https://t.co/fTSNN9Cvjy

LLM · Apr 14, 2026 · By Insights AI · 2 min read

What the X post surfaced

On April 7, 2026, Vulmon Vulnerability Feed used X to flag CVE-2026-1839, describing an arbitrary code execution issue in the Hugging Face Transformers Trainer checkpoint-loading flow. The post itself is short, but the linked CVE record and fix commit provide the important details. According to CVE.org, the vulnerable path sits in Trainer._load_rng_state() inside src/transformers/trainer.py.

The issue is specific but serious: the method called torch.load() without weights_only=True, which means the file is deserialized with Python's pickle machinery. According to the CVE record, when Transformers runs on PyTorch versions below 2.6, the surrounding safe_globals() context manager does not provide effective protection, so a malicious checkpoint artifact such as rng_state.pth can execute arbitrary code when loaded. The CNA for the record is Protect AI, and the vulnerability is classified as CWE-502, deserialization of untrusted data.
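To see why unrestricted deserialization is dangerous, here is a minimal, stdlib-only sketch of the underlying mechanism (torch.load without weights_only=True ultimately goes through pickle). The class and payload names are illustrative and harmless, not taken from the CVE:

```python
import pickle

executed = []

def record(msg):
    # Stand-in for attacker-controlled code; a real payload might call os.system.
    executed.append(msg)

class MaliciousRNGState:
    # pickle captures this call at dump time and replays it at load time.
    def __reduce__(self):
        return (record, ("code ran during load",))

blob = pickle.dumps(MaliciousRNGState())  # what a tampered rng_state.pth could hold
pickle.loads(blob)                        # "loading the checkpoint" invokes record()
print(executed)  # → ['code ran during load']
```

The victim never calls record() explicitly; merely deserializing the artifact triggers it, which is exactly the CWE-502 pattern the CVE describes.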

Scope and fix

CVE.org says the issue affects huggingface/transformers versions before v5.0.0rc3. The published CVSS score is 6.5 Medium with a local attack vector and user interaction required. That means this is not a wormable internet-facing server bug, but it would be a mistake to dismiss it as niche. In ML workflows, checkpoint files move between experiments, repos, collaborators, model hubs, and CI systems. If teams load artifacts they did not produce themselves, “local” still sits squarely inside the software supply chain.
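The version thresholds above can be turned into a quick pre-flight check. This is a hedged, stdlib-only sketch; the helper names are made up, and a production check should use packaging.version rather than this crude parser:

```python
def version_tuple(v: str) -> tuple:
    # Crude parse: "2.5.1+cu121" -> (2, 5, 1). Handles simple numeric
    # releases only; use packaging.version for anything real.
    parts = []
    for piece in v.split("+")[0].split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def torch_safe_globals_effective(torch_version: str) -> bool:
    # Per the CVE record, safe_globals() is not effective below PyTorch 2.6.
    return version_tuple(torch_version) >= (2, 6)

print(torch_safe_globals_effective("2.5.1"))  # False
print(torch_safe_globals_effective("2.6.0"))  # True
```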

The referenced Hugging Face fix commit is direct and narrow. The patch adds a safety check and changes the vulnerable call to torch.load(rng_file, weights_only=True). That is a small diff, but it closes an execution path embedded in a piece of infrastructure many teams treat as routine training plumbing.
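The patched call, torch.load(rng_file, weights_only=True), works by restricting the unpickler so arbitrary globals cannot be resolved. The same idea can be sketched with the standard library; this illustrates the defense in miniature and is not the actual PyTorch implementation:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Illustrative allow-list; weights_only=True maintains its own, larger one.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(blob):
    return RestrictedUnpickler(io.BytesIO(blob)).load()

class Payload:
    # A tampered artifact smuggling a callable via __reduce__.
    def __reduce__(self):
        return (eval, ("1 + 1",))

blob = pickle.dumps(Payload())
try:
    restricted_loads(blob)
except pickle.UnpicklingError as e:
    print(e)  # the payload is rejected instead of executed
```

The key design point is that find_class runs before any attacker-supplied callable can be resolved, so the load fails closed.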

Why it matters

  • The bug sits in Trainer checkpoint handling, which many developers assume is internal or low-risk.
  • The exploit path depends on malicious checkpoint artifacts, matching how ML teams actually share experiment state.
  • The fix reinforces a broader lesson: checkpoint files and trainer internals are now part of the AI software supply chain.

The larger takeaway is that ML infrastructure keeps inheriting classic security problems under new names. Unsafe deserialization is not new, but in this case it appears inside a widely used Transformers workflow. The X post matters because it points to a precise operational lesson: teams using shared checkpoints should treat artifact trust policy and dependency upgrades as baseline ML hygiene, not as optional hardening.
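As one concrete piece of that hygiene, teams can record a digest when an artifact is produced and verify it before any load call. A minimal sketch with hypothetical helper names:

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    # Digest recorded when the artifact is produced, stored out-of-band.
    return hashlib.sha256(data).hexdigest()

def artifact_is_trusted(data: bytes, pinned_digest: str) -> bool:
    # Constant-time compare; refuse to load on any mismatch.
    return hmac.compare_digest(sha256_of(data), pinned_digest)

checkpoint = b"fake rng_state bytes"                   # stand-in for rng_state.pth
pinned = sha256_of(checkpoint)                         # recorded at save time
print(artifact_is_trusted(checkpoint, pinned))         # True
print(artifact_is_trusted(checkpoint + b"x", pinned))  # False
```

Digest pinning does not replace upgrading past the vulnerable versions, but it stops silently loading an artifact that was swapped in transit.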

Sources: Vulmon X post · CVE record · Hugging Face fix commit
