HN did not greet GPT-5.5 with applause. The thread went straight to pricing, context tiers, and whether the model actually behaves better once real coding work starts.
Why it matters: this is one of the first external benchmark reads to land right after the GPT-5.5 launch. Artificial Analysis said GPT-5.5 moved 3 points clear on its Intelligence Index, while the full index run became roughly 20% more expensive.
AI at Meta said on April 8, 2026 that Muse Spark is a natively multimodal reasoning model with tool use, visual chain of thought, and multi-agent orchestration. Meta's official announcement says it already powers the Meta AI app and meta.ai, is rolling out across WhatsApp, Instagram, Facebook, Messenger and AI glasses, and is entering private-preview API access for selected partners.
A Hacker News thread amplified Meta's launch of Muse Spark, a multimodal reasoning model with tool use, visual chain of thought, and a parallel-agent Contemplating mode.
Right after ARC Prize released ARC-AGI 3, r/singularity focused on the benchmark’s shift toward interactive environments and action-efficient scoring. The core message is that frontier AI still lags badly when it must generalize, explore, and plan under tight interaction budgets.
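The "action-efficient scoring" idea can be made concrete with a toy metric: solving a task earns less credit the more interactions the agent burns. This is an illustrative sketch only; the formula and names below are our own construction, not ARC Prize's published scoring rule.

```python
# Toy action-efficiency metric: full credit for solving in one action,
# linearly less credit the more of the interaction budget is spent.
# Purely illustrative; not ARC-AGI-3's actual scoring.

def efficiency_score(solved: bool, actions_used: int, action_budget: int) -> float:
    """Return 0 for a failure; otherwise scale credit by actions left unused."""
    if not solved or actions_used > action_budget:
        return 0.0
    return 1.0 - (actions_used - 1) / action_budget  # 1 action -> full credit

print(efficiency_score(True, actions_used=1, action_budget=100))   # -> 1.0
print(efficiency_score(True, actions_used=80, action_budget=100))  # solved, but wasteful
print(efficiency_score(False, actions_used=5, action_budget=100))  # -> 0.0
```

Under a metric like this, brute-force exploration that eventually stumbles onto a solution scores far below a plan that solves the task in a few moves, which is the gap the subreddit discussion highlighted.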
Mistral announced Mistral Small 4 on March 16, 2026 as a single open model that combines reasoning, multimodal input, and agentic coding. Key specs include 119B total parameters, 6B active parameters per token, a 256k context window, Apache 2.0 licensing, and configurable reasoning effort.
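The 119B-total / 6B-active split is the interesting part of those specs: a sparse MoE is cheap per token but heavy to store. A back-of-envelope sketch, using the announced figures but assuming bf16 weights (the precision assumption and the arithmetic are ours, not Mistral's):

```python
# Rough memory-vs-compute profile for a mixture-of-experts model.
# Parameter counts are from the Mistral Small 4 announcement; the
# bf16 assumption is ours.

BYTES_PER_PARAM = 2  # assuming bf16/fp16 weights

def moe_footprint(total_params: float, active_params: float) -> dict:
    return {
        "weights_gb": total_params * BYTES_PER_PARAM / 1e9,  # must fit in memory
        "active_fraction": active_params / total_params,     # drives per-token FLOPs
    }

profile = moe_footprint(total_params=119e9, active_params=6e9)
print(profile)  # ~238 GB of weights, ~5% of parameters active per token
```

So the model needs large-model memory to hold, while per-token compute looks closer to a dense ~6B model, which is the usual MoE trade.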
ARC Prize introduced ARC-AGI-3 on March 24, 2026 as a benchmark for frontier agentic intelligence in novel environments. On Hacker News it reached 238 points and 163 comments, signaling strong interest in evaluation methods that go beyond static tasks.
Microsoft Research announced the 15 billion parameter open-weight model Phi-4-reasoning-vision-15B on March 4, 2026. The lab says the release is designed to deliver stronger multimodal reasoning, math and science performance, and computer-use ability without the compute profile of much larger systems.
Together AI said on March 19, 2026 that its fine-tuning service now supports tool-call, reasoning, and vision-language workflows. The linked Together AI blog adds 100B+ parameter model support, datasets up to 100GB, up to 6x higher throughput on large MoE models, and upfront cost plus ETA estimates.
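The upfront cost-plus-ETA estimate the update describes amounts to simple arithmetic over token counts. A minimal sketch of that kind of calculator; the per-token price and throughput numbers below are placeholders we invented, and only the "estimate before you launch" idea comes from the announcement:

```python
# Hypothetical pre-run estimator: total training tokens drive both the
# bill and the wall-clock ETA. All pricing/throughput figures are made up.

def estimate_run(dataset_tokens: int, epochs: int,
                 usd_per_million_tokens: float, tokens_per_second: float) -> dict:
    total = dataset_tokens * epochs
    return {
        "cost_usd": total / 1e6 * usd_per_million_tokens,
        "eta_hours": total / tokens_per_second / 3600,
    }

est = estimate_run(dataset_tokens=500_000_000, epochs=2,
                   usd_per_million_tokens=3.0, tokens_per_second=50_000)
print(est)
```

The live-ETA part of the feature would just re-run the time half of this with observed throughput instead of a projected one.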
On Feb. 19, 2026, Google introduced Gemini 3.1 Pro and began rolling it out across AI Studio, Gemini CLI, Antigravity, Android Studio, Vertex AI, Gemini Enterprise, the Gemini app, and NotebookLM. Google says the model reached 77.1% on ARC-AGI-2, more than doubling Gemini 3 Pro’s reasoning performance on that benchmark.
A Show HN post points to llm-circuit-finder, a toolkit that duplicates selected transformer layers inside GGUF models and claims sizable reasoning gains without changing weights or running fine-tuning. The strongest benchmark numbers come from the project author’s own evaluations rather than independent validation.
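The core trick, re-running selected transformer blocks without touching their weights, can be shown with a toy model where each "layer" is just a function. This is a sketch of the general layer-duplication idea, not llm-circuit-finder's code; real GGUF surgery is far more involved, and the repeated-layer index below is made up.

```python
# Toy depth-upscaling sketch: schedule selected layers to execute twice.
# Weights are never modified; only the execution order changes.

def make_stacked_forward(layers, repeat_indices):
    """Build a forward pass where layers in `repeat_indices` run twice."""
    schedule = []
    for i, layer in enumerate(layers):
        schedule.append(layer)
        if i in repeat_indices:
            schedule.append(layer)  # same weights, executed again
    def forward(x):
        for layer in schedule:
            x = layer(x)
        return x
    return forward

layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
forward = make_stacked_forward(layers, repeat_indices={1})  # duplicate layer 1
print(forward(5))  # -> 21: (5+1) -> *2 -> *2 again -> -3
```

Whether re-running a block actually helps depends entirely on the model, which is why the post's self-reported benchmark gains warrant independent validation.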