On 2026-02-19, Google announced Gemini 3.1 Pro and began rolling it out across developer, enterprise, and consumer surfaces. The post reports a verified ARC-AGI-2 score of 77.1% and lists immediate access via Gemini API, Gemini CLI, Vertex AI, Gemini app, and NotebookLM.
A technical r/LocalLLaMA thread pointed to llama.cpp PR #19765, merged on February 20, 2026. The patch unifies parser paths as a stop-gap for Qwen3-Coder-Next issues and adds parallel tool-calling plus JSON schema fixes.
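To make the "parallel tool-calling plus JSON schema" combination concrete, here is a minimal sketch of the kind of check such a parser performs: a model emits several tool calls in one response, and each call's arguments are validated against a per-tool schema. The tool names, schemas, and helper below are hypothetical illustrations, not the actual PR #19765 code.

```python
import json

# Hypothetical per-tool schemas: required argument names and expected types.
TOOL_SCHEMAS = {
    "get_weather": {"required": ["city"], "types": {"city": str}},
    "run_tests":   {"required": ["path"], "types": {"path": str}},
}

def validate_tool_calls(raw: str) -> list[dict]:
    """Parse a JSON array of tool calls and keep those matching their schema."""
    valid = []
    for call in json.loads(raw):
        schema = TOOL_SCHEMAS.get(call.get("name"))
        if schema is None:
            continue  # unknown tool: drop it
        args = call.get("arguments", {})
        if all(k in args and isinstance(args[k], schema["types"][k])
               for k in schema["required"]):
            valid.append(call)
    return valid

# Two calls in a single response -- the "parallel tool-calling" shape.
raw = ('[{"name": "get_weather", "arguments": {"city": "Oslo"}},'
       ' {"name": "run_tests", "arguments": {"path": "tests/"}}]')
print([c["name"] for c in validate_tool_calls(raw)])  # → ['get_weather', 'run_tests']
```

A unified parser path means this same validation logic runs regardless of which model family produced the tool-call text.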
On February 20, 2026, Anthropic introduced Claude Code Security in a limited research preview. The feature scans codebases for vulnerabilities and proposes patches, while keeping final remediation decisions under human review and approval.
A high-engagement r/singularity post pointed to arXiv 2602.15322, which reports that masked adaptive updates and the proposed Magma optimizer can improve 1B-model perplexity versus Adam and Muon with minimal overhead.
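The core idea of masked adaptive updates can be illustrated with a toy optimizer step: apply an Adam-style adaptive update only on a subset of coordinates (here, the top half by gradient magnitude) and plain SGD elsewhere. This is a simplified analogue for intuition, not the Magma algorithm from arXiv 2602.15322; the mask rule and hyperparameters are made up.

```python
import math

def masked_adaptive_step(w, g, m, v, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
    """Toy masked-adaptive step: Adam-like update on masked-in coordinates,
    plain SGD on the rest. Moments m, v are updated everywhere."""
    k = len(g) // 2
    top = set(sorted(range(len(g)), key=lambda i: -abs(g[i]))[:k])
    for i in range(len(w)):
        m[i] = b1 * m[i] + (1 - b1) * g[i]
        v[i] = b2 * v[i] + (1 - b2) * g[i] ** 2
        if i in top:                      # adaptive (Adam-like) update
            w[i] -= lr * m[i] / (math.sqrt(v[i]) + eps)
        else:                             # plain SGD on masked-out coords
            w[i] -= lr * g[i]
    return w

w = masked_adaptive_step([0.0] * 4, [0.5, -2.0, 0.1, 1.5], [0.0] * 4, [0.0] * 4)
print([round(x, 4) for x in w])  # → [-0.005, 0.01, -0.001, -0.01]
```

The appeal of such schemes is that the adaptive machinery's per-coordinate normalization is spent only where gradients are informative, keeping overhead low.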
A high-score Hacker News discussion surfaced Together AI's CDLM post, which claims up to 14.5x latency improvements for diffusion language models by combining trajectory-consistent step reduction with exact block-wise KV caching.
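The caching half of that claim is easy to illustrate with a toy: in an iterative (diffusion-style) decoder, blocks whose tokens are finalized can have their K/V computed once and reused on every later denoising step, so only still-active blocks recompute. This is an illustrative sketch of block-wise KV caching in general, not Together AI's CDLM implementation; the block/step schedule is invented.

```python
class BlockKVCache:
    """Cache K/V per block; reuse once a block's tokens are finalized."""
    def __init__(self):
        self.cache = {}
        self.computes = 0  # how many K/V computations actually ran

    def kv(self, block_id, tokens, finalized):
        if finalized and block_id in self.cache:
            return self.cache[block_id]          # reuse cached K/V
        self.computes += 1
        kv = tuple(t * 2 for t in tokens)        # stand-in for real K/V math
        if finalized:
            self.cache[block_id] = kv
        return kv

cache = BlockKVCache()
blocks = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
# Four denoising steps; block 0 finalizes after step 1, block 1 after step 2.
for step in range(4):
    for bid, toks in blocks.items():
        finalized = (bid == 0 and step >= 1) or (bid == 1 and step >= 2)
        cache.kv(bid, toks, finalized)
print(cache.computes)  # → 9 (vs. 12 with no caching)
```

"Exact" caching means the reused K/V is bit-identical to a recompute, so the speedup comes with no approximation error on finalized blocks.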
A high-scoring Hacker News thread highlighted announcement #19759 in ggml-org/llama.cpp: the ggml.ai founding team is joining Hugging Face, while maintainers state ggml/llama.cpp will remain open-source and community-driven.
OpenAI published five model-generated submissions to the First Proof math challenge. None were accepted as valid solutions, but the release gives researchers direct evidence of where frontier reasoning systems succeed and fail.
A widely discussed LocalLLaMA post introduces the open Kitten TTS v0.8 model family (80M/40M/14M parameters), emphasizing CPU-friendly deployment and a sub-25MB footprint for the smallest variant.
A high-engagement Hacker News thread spotlights Taalas’ claim that model-specific silicon can cut inference latency and cost, including a hard-wired Llama 3.1 8B deployment reportedly reaching 17K tokens/sec per user.
In a February 4, 2026 post, Anthropic said Claude conversations will remain ad-free and not include unsolicited product placements. The company argues that conversational AI requires clearer trust incentives than ad-supported feed or search models.
A top Hacker News discussion tracked Google’s Gemini 3.1 Pro rollout. Google positions it as a stronger reasoning baseline, highlighting a 77.1% ARC-AGI-2 score and broad preview availability across developer, enterprise, and consumer channels.
A popular LocalLLaMA post highlights draft PR #19726, where a contributor proposes porting IQ*_K quantization work from ik_llama.cpp into mainline llama.cpp with initial CPU backend support and early KLD checks.