LocalLLaMA Flags a Nemotron License Update That Reduces Friction for Derivative Use
Original: Nvidia updated the Nemotron Super 3 122B A12B license to remove the rug-pull clauses
A licensing change that the open-model community immediately noticed
On March 15, 2026, r/LocalLLaMA surfaced a licensing update for the Nemotron Super 3 120B A12B family. At crawl time the post had 121 upvotes and 44 comments. The useful part of the thread was not the poster’s AI-generated summary, but the linked primary sources: NVIDIA’s old and new license pages plus Hugging Face commits for the BF16, FP8, and NVFP4 variants.
The clearest evidence is the BF16 model-card commit on Hugging Face, which changes license_name from nvidia-open-model-license to nvidia-nemotron-open-model-license and swaps the linked license URL accordingly. That does not by itself answer every legal question, but it does establish that the Nemotron Super 3 release metadata now points at a different NVIDIA license text, rather than the change being a cosmetic README edit.
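For readers who want to check this kind of metadata change themselves rather than rely on a screenshot, the Hugging Face Hub client can read a model card's YAML front matter directly. The sketch below is a minimal illustration, not part of the thread: the repository id is a placeholder, and the real Nemotron Super 3 repo names may differ.

```python
# Minimal sketch: read a model card's license metadata with huggingface_hub.
# The repo id below is hypothetical; substitute the actual Nemotron Super 3 BF16 repo.
from huggingface_hub import ModelCard

REPO_ID = "nvidia/Nemotron-Super-3-120B-A12B-BF16"  # placeholder, not a confirmed name

card = ModelCard.load(REPO_ID)   # fetches README.md and parses its YAML front matter
meta = card.data.to_dict()

# After the commit described above, these fields would be expected to show
# "nvidia-nemotron-open-model-license" and the new license URL.
print(meta.get("license_name"))
print(meta.get("license_link"))
```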
What changed in the license text
The older NVIDIA Open Model License Agreement, last modified on October 24, 2025, included language stating that rights would automatically terminate if a user bypassed, disabled, reduced the efficacy of, or circumvented a safety guardrail without a substantially similar guardrail for the use case. It also referenced NVIDIA’s separate Trustworthy AI terms and carried additional language around Special-Purpose Models. By contrast, the NVIDIA Nemotron Open Model License, last modified on December 15, 2025, presents a shorter, self-contained grant: works are commercially usable, derivative works may be created and distributed, outputs are not claimed by NVIDIA, and redistribution centers on carrying the license, retaining notices, and including the specified Nemotron notice text when a NOTICE file is present.
Importantly, the new Nemotron license text shown on NVIDIA’s site does not include the older automatic-termination clause tied to modifying or bypassing guardrails, and it does not point to a separate Trustworthy AI document. The license grant is described as perpetual and irrevocable, with termination centered instead on patent or copyright litigation against the work. That is why the change was immediately relevant to LocalLLaMA users who care about fine-tuning, redistribution, and operational clarity around open-weight models.
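To make the redistribution terms concrete, here is one minimal way a derivative release could carry the upstream license and notice files alongside fine-tuned weights. This is a sketch under assumptions, not guidance drawn from the NVIDIA text: the directory and file names are placeholders, and whether any given packaging satisfies the license is a question for the actual text and, where needed, legal review.

```python
# Illustrative sketch: copy upstream LICENSE and NOTICE files into a derivative
# release directory so redistribution carries them. Paths and file names are
# assumptions, not terms quoted from the NVIDIA license.
import shutil
from pathlib import Path

UPSTREAM = Path("nemotron-super-3-base")   # local checkout of the original model repo
DERIVATIVE = Path("my-finetune-release")   # directory being packaged for redistribution

DERIVATIVE.mkdir(exist_ok=True)
for name in ("LICENSE", "NOTICE"):
    src = UPSTREAM / name
    if src.exists():  # the notice obligation applies when a NOTICE file is present
        shutil.copy2(src, DERIVATIVE / name)
```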
Why this matters, with one caveat
The practical significance is that the license surface becomes easier to reason about from the model card outward. Community operators no longer have to read the old Open Model text and a separate ethics URL to understand the baseline terms attached to the published Nemotron weights. That said, this is still a licensing update, not a blanket legal clearance. Teams using Nemotron in products or redistribution flows still need to read the actual NVIDIA text, especially if compliance review or commercial deployment is involved.
Primary sources: Hugging Face BF16 commit, older NVIDIA Open Model License, NVIDIA Nemotron Open Model License. Community discussion: r/LocalLLaMA.
Related Articles
A high-signal LocalLLaMA thread on March 15, 2026 focused on a license swap for NVIDIA’s Nemotron model family. Comparing the current NVIDIA Nemotron Open Model License with the older Open Model License shows why the community reacted: the old guardrail-termination clause and Trustworthy AI cross-reference are no longer present, while the newer text leans on a simpler NOTICE-style attribution structure.
NVIDIA AI Developer introduced Nemotron 3 Super on March 11, 2026 as an open 120B-parameter hybrid MoE model with 12B active parameters and a native 1M-token context window. NVIDIA says the model targets agentic workloads with up to 5x higher throughput than the previous Nemotron Super model.
NVIDIA introduced Nemotron 3 Super on March 11, 2026 as an open 120B-parameter model built for agentic AI systems. The company says the model tackles long-context cost and reasoning overhead with a 1M-token window, hybrid MoE design and up to 5x higher throughput.