Mistral announced Mistral Small 4 on March 16, 2026 as a single open model that combines reasoning, multimodal input, and agentic coding. Key specs include 119B total parameters, 6B active parameters per token, a 256k context window, Apache 2.0 licensing, and configurable reasoning effort.
Mistral introduced Leanstral on March 16, 2026 as an open-source code agent built specifically for Lean 4. The release combines 6B active parameters, an Apache 2.0 license, a new FLTEval benchmark, and immediate availability in Mistral Vibe, via the API, and as downloadable weights.
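For readers who have not touched Lean 4, the kind of work such an agent automates is closing proof goals like the toy examples below; this is purely illustrative and not drawn from FLTEval or the Leanstral release.

```lean
-- Toy Lean 4 goals of the sort a proof/code agent is asked to close.
-- Illustrative only; not taken from FLTEval or the Leanstral release.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

theorem sum_bound (n : Nat) (h : n ≤ 3) : 2 * n + 1 ≤ 7 := by
  omega
```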
Cursor said on March 26, 2026 that real-time reinforcement learning lets it ship improved Composer checkpoints as often as every five hours. Cursor's research post says the loop trains on billions of production tokens from real user interactions, runs evals including CursorBench before deployment, and has already improved edit persistence, reduced dissatisfied follow-ups, and lowered latency.
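In the abstract, what the post describes is a gated continuous-deployment cycle: train on fresh production data, evaluate, and only ship if the candidate beats the serving checkpoint. The runnable sketch below shows that shape only; every function is a stand-in, and none of it comes from Cursor's actual pipeline.

```python
import random

# Every name here is a placeholder; Cursor's post does not describe its pipeline
# at this level. The sketch only shows the train -> eval gate -> ship shape.
def collect_production_batch():
    """Stand-in for fresh user-interaction data."""
    return [random.random() for _ in range(1_000)]

def rl_update(checkpoint, batch):
    """Stand-in for a reinforcement-learning update; nudges a scalar 'skill'."""
    return {"skill": checkpoint["skill"] + 0.01 * len(batch) / 1_000}

def run_evals(checkpoint):
    """Stand-in for pre-deployment evals such as CursorBench."""
    return checkpoint["skill"] + random.gauss(0, 0.005)

def maybe_ship(serving, candidate):
    """Deploy the candidate only if it beats the serving checkpoint on evals."""
    return candidate if run_evals(candidate) > run_evals(serving) else serving

serving = {"skill": 1.0}
for cycle in range(3):  # in production this would repeat every few hours
    candidate = rl_update(serving, collect_production_batch())
    serving = maybe_ship(serving, candidate)
    print(f"cycle {cycle}: serving skill {serving['skill']:.3f}")
```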
OpenAIDevs pointed developers to Codex Security on March 29, 2026, positioning it as a way to find, validate, and remediate likely vulnerabilities in connected GitHub repositories. OpenAI's docs say the system scans commit by commit, uses repo-specific threat models, validates high-signal findings in an isolated environment, and can move reviewed findings toward GitHub pull requests.
A new r/MachineLearning post pushes TurboQuant beyond KV-cache talk and into weight compression, with a GitHub implementation that targets drop-in low-bit LLM inference.
A LocalLLaMA post points to IBM's Granite-4.0-3B-Vision, a compact VLM built for charts, tables, and document key-value extraction rather than generic multimodal chat.
A high-scoring r/LocalLLaMA post explains TurboQuant not as a polar-coordinates trick but as random rotation before quantization. The linked arXiv paper claims near-optimal distortion rates, a residual QJL stage for inner products, and quality-neutral KV cache quantization at 3.5 bits per channel.
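The rotate-then-quantize idea itself is easy to sketch: multiply by a shared random orthogonal matrix so every coordinate looks roughly Gaussian, quantize each coordinate to a few bits, then undo the rotation after dequantizing. The snippet below is a minimal illustration of that pattern only, not the paper's algorithm; TurboQuant's method adds the residual QJL stage and targets KV-cache tensors rather than a single vector.

```python
import numpy as np

def random_rotation(d, seed=0):
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))  # sign fix so the rotation is Haar-uniform

def quantize(x, bits):
    """Uniform scalar quantization to 2**bits levels; returns codes plus scale info."""
    levels = 2**bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((x - lo) / scale).astype(np.int32), lo, scale

def dequantize(codes, lo, scale):
    return codes * scale + lo

d = 128
rng = np.random.default_rng(1)
x = rng.standard_normal(d)
x[:4] *= 50.0  # a few outlier channels, the hard case for low-bit quantization

rot = random_rotation(d)
direct = dequantize(*quantize(x, bits=4))
codes, lo, scale = quantize(rot @ x, bits=4)
rotated = rot.T @ dequantize(codes, lo, scale)  # undo the rotation after dequantizing

rel_err = lambda y: np.linalg.norm(x - y) / np.linalg.norm(x)
print(f"4-bit, no rotation:   {rel_err(direct):.3f}")
print(f"4-bit, with rotation: {rel_err(rotated):.3f}")
```

With outlier-heavy inputs, the rotated version lands at a noticeably lower relative error than quantizing the raw coordinates, which is the intuition behind doing the rotation before quantization.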
GitHub said on March 28, 2026 that Copilot CLI can create a robust test suite from the terminal by combining plan mode, /fleet, and autopilot. The linked GitHub docs describe /fleet as parallel subagent execution and autopilot as autonomous multi-step completion, making the post a concrete example of multi-agent testing workflows in the CLI.
A March 28, 2026 r/LocalLLaMA post turned TurboQuant from a paper topic into an MLX implementation story with custom Metal kernels, code, and an upstream PR. The author reports 4.6x KV cache compression at 0.98x FP16 speed on Qwen2.5-32B, but the repository's 7B README numbers are more conservative, underscoring how model choice and integration details shape the real payoff.
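As a quick sanity check, 3.5 bits per channel against an FP16 baseline predicts a ratio very close to the reported figure:

```python
# Back-of-the-envelope: FP16 KV cache entries vs ~3.5-bit quantized channels.
fp16_bits = 16
quant_bits = 3.5
print(f"ideal compression: {fp16_bits / quant_bits:.2f}x")  # ~4.57x, near the reported 4.6x
```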
OpenAI announced plans to acquire Promptfoo on March 9, 2026. The company says Promptfoo’s security testing and evaluation technology will be integrated into OpenAI Frontier so enterprises can test and document risks such as prompt injection, jailbreaks, data leaks, and tool misuse earlier in the development cycle.
OpenAI announced GPT-5.4 mini and nano on March 17, 2026. The company says mini is more than 2x faster than GPT-5 mini while improving coding, reasoning, multimodal understanding, and tool use; nano targets low-cost classification, extraction, ranking, and simpler coding subagents.
GoogleCloudTech posted a demo on March 27, 2026 showing Gemini CLI using Model Context Protocol (MCP) servers to migrate and deploy a full-stack application. Google's September 11, 2025 Gemini CLI extensions post and December 11, 2025 MCP support announcement show that the demo is built on /deploy for Cloud Run, managed MCP endpoints for Google services, and enterprise controls such as IAM, audit logs, and Model Armor.
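For context on what "using MCP servers" means mechanically, Gemini CLI picks up an mcpServers block from a settings.json file (project-level .gemini/settings.json or the user-level equivalent). The sketch below simply writes such a block from Python; the key names follow Gemini CLI's published MCP configuration as I understand it, and the server names, package, and URL are placeholders rather than anything from the demo.

```python
import json
from pathlib import Path

# Minimal sketch: register one local (stdio) and one remote (HTTP) MCP server
# for Gemini CLI. Key names (mcpServers, command, args, httpUrl) follow the
# CLI's settings.json format; verify against current docs before relying on it.
settings = {
    "mcpServers": {
        "local-db-tools": {                                   # placeholder server name
            "command": "npx",
            "args": ["-y", "@example/db-mcp-server"],         # hypothetical package
        },
        "cloud-run-tools": {                                  # placeholder server name
            "httpUrl": "https://example.googleapis.com/mcp",  # placeholder URL
        },
    }
}

path = Path(".gemini/settings.json")               # project-level settings
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(settings, indent=2))    # note: overwrites existing settings in this sketch
print(f"wrote {path}")
```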