Meta releases SAM 3.1 with object multiplexing for faster multi-object video tracking

Original: We’re releasing SAM 3.1: a drop-in update to SAM 3 that introduces object multiplexing to significantly improve video processing efficiency without sacrificing accuracy. We’re sharing this update with the community to help make high-performance applications feasible on smaller, more accessible hardware. 🔗 Model Checkpoint: https://go.meta.me/8dd321 🔗 Codebase: https://go.meta.me/b0a9fb

AI · Mar 31, 2026 · By Insights AI · 2 min read

What Meta posted on X

On March 27, 2026, Meta announced SAM 3.1 as a drop-in update to SAM 3. The X post highlights object multiplexing as the key change, saying it significantly improves video processing efficiency without sacrificing accuracy. Meta also framed the release as a way to make high-performance applications more practical on smaller and more accessible hardware.

That framing matters because video segmentation and tracking workloads become expensive quickly as the number of objects rises. A system that works well for one or two objects can slow down sharply when it has to follow many items across long clips. Meta is explicitly arguing that SAM 3.1 is not just a checkpoint refresh, but an efficiency upgrade aimed at real-world deployment constraints.

What the release notes add

The SAM 3.1 release notes on GitHub say the update introduces Object Multiplex, a shared-memory approach for joint multi-object tracking. In the earlier SAM 3 pipeline, each tracked object was processed independently, so the compute cost scaled roughly linearly with object count. SAM 3.1 instead groups objects into fixed-capacity buckets and processes them jointly, which reduces redundant computation.
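The fixed-capacity bucketing idea can be sketched in a few lines. This is an illustration of the concept as described in the release notes, not Meta's actual implementation; the function name and the capacity value are invented for the example.

```python
from math import ceil

def bucket_tracks(track_ids, capacity):
    """Group object tracks into fixed-capacity buckets so each bucket
    can be handled with one joint forward pass per frame, instead of
    one pass per object."""
    return [track_ids[i:i + capacity]
            for i in range(0, len(track_ids), capacity)]

tracks = list(range(10))            # 10 objects being tracked
buckets = bucket_tracks(tracks, capacity=4)

# Per-object pipeline: 10 passes per frame (cost scales linearly).
# Bucketed pipeline: ceil(10 / 4) = 3 joint passes per frame.
assert len(buckets) == ceil(len(tracks) / 4)
```

The payoff is that per-frame compute scales with the number of buckets rather than the number of objects, which is what "joint multi-object tracking" buys.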

Meta reports several concrete improvements. The company says SAM 3.1 delivers roughly a 7x speedup when tracking 128 objects on a single H100 GPU, relative to the November 2025 SAM 3 release. The release also includes inference optimizations such as reduced CPU-GPU synchronization, improved torch.compile support, and more batching in postprocessing and the vision encoder.
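A back-of-the-envelope cost model shows why joint passes should beat per-object passes as object count grows. The bucket capacities and the joint-pass overhead factor below are hypothetical values chosen for illustration, not numbers from the release notes.

```python
from math import ceil

def per_object_passes(n_objects):
    # SAM 3-style pipeline: one forward pass per object per frame
    return n_objects

def bucketed_cost(n_objects, capacity, joint_factor=1.5):
    # SAM 3.1-style pipeline: one joint pass per bucket per frame.
    # joint_factor > 1 models a joint pass being somewhat more
    # expensive than a single-object pass (assumed value).
    return ceil(n_objects / capacity) * joint_factor

n = 128  # the object count Meta quotes for its ~7x figure
for capacity in (16, 32, 64):
    speedup = per_object_passes(n) / bucketed_cost(n, capacity)
    print(f"capacity={capacity:3d}: ~{speedup:.1f}x faster")
```

Real-world speedups depend on how efficiently a joint pass uses the GPU, so the model only explains the direction of the claim, not the exact 7x number.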

The benchmark picture is not presented as a clean win on every metric, but Meta does claim meaningful gains where scaling pressure is highest. The release notes highlight a +2.1 cgF1 improvement on YT-Temporal-1B and better VOS performance on 6 of 7 benchmarks, including +2.0 on MOSEv2. Meta also published new SAM 3.1 checkpoints on Hugging Face and notes that developers need the latest repository code to use them.

Why this matters

The larger signal is that open computer vision models are being pushed toward deployment efficiency, not just raw accuracy. In many production settings such as robotics, video analytics, sports analysis, and editing tools, the bottleneck is often cost per frame or the number of simultaneous tracks, not whether a model edges out another one on a single benchmark.

If Meta’s efficiency claims hold in practice, SAM 3.1 should make dense multi-object video workflows easier to run on more constrained hardware budgets. Because Meta released both updated checkpoints and code, the announcement is immediately actionable for developers and researchers rather than just a preview of future work.

Sources: Meta AI X post · SAM 3.1 release notes




© 2026 Insights. All rights reserved.