On March 16, 2026, a r/LocalLLaMA post questioning OpenCode’s local behavior reached 389 points and 154 comments. The post argued that the `opencode serve` web UI path proxies to app.opencode.ai and backed that claim with a linked code path plus related GitHub issues and PRs.
A March 16, 2026 Hacker News post on a Cursor study reached 110 points and 61 comments. The paper says Cursor adoption raises project-level development velocity in the short run, but also produces a substantial and persistent rise in static analysis warnings and code complexity.
A March 16, 2026 Show HN post about Godogen reached 247 points and 153 comments. The project drew attention by showing an agent pipeline that goes from a text prompt to a full Godot 4 project, generated assets, and screenshot-based visual QA.
Google says Gemini CLI now includes a read-only Plan mode that analyzes requests, codebases, and dependencies before any edits happen. The update also adds an ask_user tool and read-only MCP access so teams can clarify requirements and pull in outside context without risking accidental changes.
OpenAI says Codex Security deliberately does not start from a SAST report because many real vulnerabilities come from broken validation order, canonicalization, and other behavioral flaws rather than simple dataflow patterns. Instead, the system starts from repository behavior and validates hypotheses with focused tests in a sandbox.
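A hypothetical illustration of the validation-order and canonicalization flaws the post describes (this is not code from Codex Security, and `resolve_unsafe`/`resolve_safe` are invented names): a prefix check applied before path canonicalization passes on a `..` payload, while the canonicalize-then-validate version rejects it. Both functions have the same dataflow from input to filesystem path, which is why a pattern-based SAST pass can struggle to tell them apart.

```python
import os

BASE = "/srv/app/uploads"

def resolve_unsafe(name: str) -> str:
    # Wrong order: validate first, canonicalize second. The prefix
    # check sees the un-resolved string, so ".." segments slip through.
    path = os.path.join(BASE, name)
    if not path.startswith(BASE):
        raise ValueError("outside upload dir")
    return os.path.realpath(path)

def resolve_safe(name: str) -> str:
    # Right order: canonicalize first, then validate the resolved path.
    real = os.path.realpath(os.path.join(BASE, name))
    if os.path.commonpath([real, BASE]) != BASE:
        raise ValueError("outside upload dir")
    return real
```

A behavioral test with the payload `"../../etc/passwd"` distinguishes the two immediately: the unsafe variant returns a path outside `BASE`, the safe variant raises.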
A March 16, 2026 Hacker News thread, at 310 points and 92 comments, resurfaced a detailed Home Assistant community write-up showing how a local-first voice assistant stack can combine llama.cpp, Parakeet V2 STT, Kokoro TTS, and prompt tuning into a usable system.
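The glue logic of such a stack can be sketched as a three-stage pipeline. The class and callable names below are hypothetical placeholders, not code from the write-up; the real backends (Parakeet for speech-to-text, a llama.cpp server for completion, Kokoro for text-to-speech) would be plugged in behind the same interface.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoicePipeline:
    # Each stage is a plain callable, so any local backend can be
    # swapped in without touching the control flow.
    transcribe: Callable[[bytes], str]   # audio -> text (STT)
    complete: Callable[[str], str]       # prompt -> reply (LLM)
    synthesize: Callable[[str], bytes]   # text -> audio (TTS)
    system_prompt: str = "You are a terse home assistant."

    def handle(self, audio: bytes) -> bytes:
        # One voice turn: transcribe, build a prompt, reply, speak.
        text = self.transcribe(audio)
        prompt = f"{self.system_prompt}\nUser: {text}\nAssistant:"
        reply = self.complete(prompt)
        return self.synthesize(reply)
```

Keeping the stages as independent callables mirrors the write-up's theme: each component runs locally and can be tuned or replaced on its own.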
A March 16, 2026 Hacker News thread pushed Mistral's Leanstral launch to 277 points and 49 comments, focusing attention on an Apache 2.0 Lean 4 model built for proof engineering rather than generic code generation.
GitHub announced a major JetBrains Copilot update on March 11, 2026. Custom agents, sub-agents, and the plan agent are now generally available, while agent hooks, MCP auto-approve, and project instruction file support push the IDE further toward full agent workflows.
On March 11, 2026, NVIDIA introduced Nemotron 3 Super, an open 120-billion-parameter hybrid MoE model with 12 billion active parameters. NVIDIA says the model combines a 1-million-token context window, high-accuracy tool calling, and up to 5x higher throughput for agentic AI workloads.
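NVIDIA's announcement does not include routing code, but top-k expert gating, sketched minimally below, is the standard mechanism by which an MoE model activates only a fraction of its weights per token (12B active of 120B total is roughly a 10% activation ratio). The function name and shapes here are illustrative, not Nemotron internals.

```python
import math

def topk_gate(logits, k):
    """Per-token expert weights: softmax over only the top-k
    scoring experts; all other experts get weight 0 and their
    parameters are never touched for this token."""
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in idx)                      # for numerical stability
    exps = {i: math.exp(logits[i] - m) for i in idx}
    z = sum(exps.values())
    return [exps.get(i, 0.0) / z for i in range(len(logits))]
```

With, say, 10 experts and k=1, each token exercises only one expert's parameters, which is how total parameter count and active parameter count diverge by an order of magnitude.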
On March 5, 2026, OpenAI introduced GPT-5.4 as a flagship model focused on relevance, contextual understanding, and instruction following. In the API, it pairs a 1M-token context window with stronger tool search for long, multi-tool workflows.
OpenAIDevs said on March 16, 2026 that subagents are now available in Codex. The feature lets developers keep the main context clean, split work across specialized agents, and steer individual threads as they run, while the official docs already describe PR review and CSV batch fan-out patterns.
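The CSV batch fan-out pattern mentioned in the docs can be sketched generically. The snippet below is not the Codex API; `fan_out`, `process_batch`, and `batch_size` are hypothetical names, and the thread pool stands in for a set of subagents. The point it illustrates is structural: workers each see only their batch, and the parent keeps only the merged results in its context.

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

def fan_out(csv_text: str, process_batch, batch_size: int = 50):
    # Parse rows, split into fixed-size batches, hand each batch to
    # its own worker, then merge results back in input order.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    batches = [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]
    with ThreadPoolExecutor() as pool:
        results = pool.map(process_batch, batches)   # preserves batch order
    return [item for batch in results for item in batch]
```

Because each worker receives only its own slice of rows, no single context has to hold the whole dataset, which is the same motivation the announcement gives for subagents keeping the main context clean.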
An r/singularity post on March 13, 2026 highlighted Anthropic’s move to make 1M context generally available for Opus 4.6 and Sonnet 4.6, with standard per-token pricing, higher media limits, and automatic support in Claude Code tiers.