A LocalLLaMA thread amplified Phoronix coverage of GreenBoost, an experimental GPLv2 Linux module that adds a multi-tier memory path for NVIDIA GPUs. The design pairs a kernel module with a CUDA shim so large allocations can spill from limited on-card vRAM into pinned system RAM and NVMe-backed storage without modifying CUDA applications.
LLM
On March 11, 2026, OpenAI published new guidance on designing AI agents to resist prompt injection, framing untrusted emails, web pages, and other inputs as a core security boundary. The company says robust agents separate data from instructions, minimize privileges, and require monitoring and user confirmation before taking consequential actions.
GitHub said on March 10, 2026 that GitHub Copilot, VS Code, and Figma now form a continuous loop through the bidirectional Figma MCP server. GitHub’s March 6 changelog says users can pull design context into code and send rendered UI back to Figma as editable frames.
Perplexity said on March 13, 2026 that Perplexity Computer is now available on mobile, starting with iOS inside the Perplexity app. Coming one day after the company opened Computer to Pro subscribers, the update turns the product into a more explicit cross-device agent workflow rather than a desktop-only experience.
A March 15, 2026 LocalLLaMA post pointed to Hugging Face model-card commits and NVIDIA license pages showing Nemotron Super 3 models moving from the older NVIDIA Open Model License text to the newer NVIDIA Nemotron Open Model License.
A March 14, 2026 Hacker News discussion highlighted a more nuanced MCP argument: local stdio MCP can be unnecessary overhead for bespoke tools, while remote HTTP MCP still solves auth, telemetry, and shared tooling at team scale.
On March 9, 2026, OpenAI said it plans to acquire Promptfoo and integrate its AI security tooling into OpenAI Frontier. The move pushes security testing, red-teaming, and governance closer to the default workflow for enterprise agents.
OpenAI said on March 5, 2026 that GPT-5.4 Thinking shows low Chain-of-Thought controllability, which for now strengthens CoT monitoring as a safety signal. The release pairs an X post with a new open-source evaluation suite and research paper.
Community discussion in LocalLLaMA pointed to a March 11, 2026 FastFlowLM and Lemonade update that brings Linux support to AMD XDNA 2 NPUs, including setup guidance for Ubuntu and Arch systems.
A March 14, 2026 LocalLLaMA post outlined a CUTLASS and FlashInfer patch for SM120 Blackwell workstations, claiming major gains for Qwen3.5-397B NVFP4 inference and linking the work to FlashInfer PR #2786.
StepFun opened more than a model card by releasing the Step-3.5-Flash-SFT dataset on Hugging Face. The repo bundles raw JSON data, tokenizer snapshots, and StepTronOSS-oriented compiled shards, while the Reddit discussion focused on reproducibility, reasoning traces, and the implications of the dual-license setup.
A r/LocalLLaMA field report showed how a very specific local inference workload was tuned for throughput. The author reported about 2,000 tokens per second while classifying markdown documents with Qwen 3.5 27B, and the comment thread turned the post into a practical optimization discussion.