PyTorch Foundation adds Safetensors and Helion to its hosted project stack

Original: Safetensors and Helion have joined PyTorch Foundation as foundation-hosted projects to secure model distribution for trusted agentic solutions and simplify kernel development across the open source AI ecosystem. PyTorch Foundation CTO Matt White to Noah Bovenizer at The Stack: “Having portable formats that work across different frameworks is extremely important to be able to ship and move models around. And then Helion makes things more accessible for folks that want to do custom kernel development.” Safetensors and Helion join PyTorch, @vllm_project, @DeepSpeedAI, and @raydistributed as foundation hosted projects. Read Noah Bovenizer’s coverage at The Stack here: https://www.thestack.technology/hugging-faces-safetensors-metas-helion-join-pytorch-foundation/ #PyTorch #OpenSource #AI #Safetensors #Helion

AI · Apr 9, 2026 · By Insights AI · 1 min read

On April 9, 2026, PyTorch announced in an X post that Safetensors and Helion have joined the PyTorch Foundation as foundation-hosted projects. PyTorch framed the move as a way to secure model distribution for trusted agentic solutions and to simplify kernel development across the open-source AI ecosystem. The foundation added that the two projects now sit alongside PyTorch, vLLM, DeepSpeed, and Ray under its hosted-project structure.

The two additions cover different layers of the stack. Safetensors is Hugging Face’s format for serializing tensors with a safer loading model than pickle-based approaches, which has made it relevant wherever teams want to move model weights without taking on arbitrary code-execution risk. Helion is Meta’s domain-specific language for machine-learning kernels, aimed at making custom kernel work more accessible. In The Stack’s coverage, quoted by PyTorch, Foundation CTO Matt White said portable formats that work across frameworks are critical for shipping and moving models around, and that Helion lowers the barrier to custom kernel development.

Governance is the real headline

The broader significance is governance, not just project count. By moving core pieces of model transport and kernel tooling into a neutral foundation structure, the PyTorch ecosystem is trying to make important infrastructure look less like vendor-specific add-ons and more like shared public rails. For teams building agent systems, that could matter in two ways: safer defaults for model exchange and a wider contributor base around performance-critical kernels. The announcement is a reminder that open-source AI competition increasingly plays out in formats, runtimes, and governance models as much as in the models themselves.




© 2026 Insights. All rights reserved.