Microsoft Research announced the 15-billion-parameter open-weight model Phi-4-reasoning-vision-15B on March 4, 2026. The lab says the release is designed to deliver stronger multimodal reasoning, math and science performance, and computer-use ability without the compute profile of much larger systems.
#open-weight
LLM Hacker News Mar 7, 2026 2 min read
A Hacker News thread surfaced OBLITERATUS, an open-source project that studies and alters refusal behavior in open-weight LLMs without retraining. The interesting part is not just the capability claim but the project’s framing as a shared, telemetry-backed research pipeline for comparing safety-editing methods across models and hardware.
LLM Reddit Feb 17, 2026 2 min read
An r/LocalLLaMA post on Qwen3.5 gained 123 upvotes and linked directly to the public weights and model documentation. The linked card confirms key specs: 397B total parameters, 17B activated per token, and a 262,144-token native context window.
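The card's headline numbers imply that only a small slice of the mixture-of-experts weights runs on any given token. A quick sketch of that arithmetic (the per-parameter byte counts for each precision are standard assumptions, not figures from the card):

```python
# Rough arithmetic on the Qwen3.5 card's published specs:
# 397B total parameters, 17B activated per token.
total_params = 397e9
active_params = 17e9

# Fraction of the MoE's weights that participate in a single forward pass.
active_fraction = active_params / total_params
print(f"active fraction: {active_fraction:.1%}")  # ~4.3%

# Approximate weight-only memory at common precisions
# (ignores KV cache, activations, and runtime overhead).
for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: {total_params * bytes_per_param / 1e9:.0f} GB")
```

The gap between total and activated parameters is why such a model can be cheap per token to run yet still demand hundreds of gigabytes just to hold the weights.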