OpenAI said Codex Security is rolling out in research preview via Codex web. The company positioned it as a context-aware application security agent that reduces noise while surfacing higher-confidence findings and patches.
#open-source
A popular r/LocalLLaMA thread points to karpathy/autoresearch, a small open-source setup where an agent edits one training file, runs 5-minute experiments, and iterates toward lower validation bits per byte.
A high-signal Hacker News thread surfaced an essay arguing that AI-assisted clean-room rewrites may be legal without being socially legitimate, using the chardet 7.0 relicensing fight as the case study.
Andrej Karpathy has published autoresearch, a minimal repo that lets AI agents iterate on a stripped-down nanochat training loop overnight. The project turns agent evaluation into a closed-loop research workflow with fixed 5-minute runs, Git branches, and validation-loss-based selection.
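The closed loop is simple enough to sketch: propose an edit, run a fixed-budget experiment, and keep the change only if validation bits-per-byte improves. The sketch below is illustrative, not autoresearch's code; `run_experiment` is a stub standing in for the real 5-minute nanochat run, and the config dicts stand in for Git branches holding edited training files.

```python
# Hedged sketch of the closed-loop selection idea behind autoresearch:
# an agent proposes an edit, a fixed-budget run reports a validation
# metric, and the edit is merged only if the metric improves.

def run_experiment(config: dict) -> float:
    """Stub metric: pretend val bits-per-byte is minimized at lr=3e-4."""
    return 1.0 + abs(config["lr"] - 3e-4) * 100

def research_loop(base: dict, proposals: list[dict], budget: int):
    """Greedy selection over agent-proposed configs, like overnight iteration."""
    best_cfg, best_bpb = base, run_experiment(base)
    for cfg in proposals[:budget]:
        bpb = run_experiment(cfg)          # fixed-length run on a "branch"
        if bpb < best_bpb:                 # validation-loss-based selection
            best_cfg, best_bpb = cfg, bpb  # keep (merge) the winning edit
    return best_cfg, best_bpb

base = {"lr": 1e-3}
candidates = [{"lr": 5e-4}, {"lr": 3e-4}, {"lr": 1e-4}]
best, bpb = research_loop(base, candidates, budget=3)
```

The real repo replaces the stub with an actual training run and uses Git branches to isolate each proposed edit, but the selection logic is the same greedy loop.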
A satirical HN hit called RFC 406i turns maintainer frustration with low-effort AI submissions into a standard rejection link and a sharper statement about review burden.
A high-engagement r/MachineLearning discussion introduced IronClaw, a Rust-based AI agent runtime designed around sandboxed tool execution, encrypted credential handling, and database-backed policy controls. The post landed because it treats agent security as a systems problem instead of a prompt-only problem.
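IronClaw itself is Rust, but the policy idea translates to a few lines in any language: every tool call passes through a default-deny policy table before it executes. The tool names and rule fields below are illustrative, not IronClaw's actual API.

```python
# Minimal sketch of policy-gated tool execution: unknown tools are denied,
# and allowed tools can carry extra constraints (here, a path sandbox).
# In IronClaw-style systems this table would live in a database, not code.

POLICY = {
    "read_file": {"allowed": True, "path_prefix": "/workspace/"},
    "shell":     {"allowed": False},  # deny arbitrary shell by default
}

def check_tool_call(tool: str, args: dict) -> bool:
    rule = POLICY.get(tool)
    if rule is None or not rule["allowed"]:
        return False                      # default-deny unknown or blocked tools
    prefix = rule.get("path_prefix")
    if prefix and not str(args.get("path", "")).startswith(prefix):
        return False                      # confine file access to the sandbox
    return True
```

Treating this as a systems problem means the check runs in the runtime, outside the model's reach, rather than as a prompt instruction the model is trusted to follow.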
A well-received HN post highlighted Sarvam AI’s decision to open-source Sarvam 30B and 105B, two reasoning-focused MoE models trained in India under the IndiaAI mission. The announcement matters because it pairs open weights with concrete product deployment, inference optimization, and unusually strong Indian-language benchmarks.
OpenAI announced Codex for Open Source on March 6, 2026, pitching the program as practical support for maintainers who review code, manage large repositories, and handle security work. The program combines API credits, six months of ChatGPT Pro with Codex, and conditional Codex Security access for eligible projects.
A high-scoring r/LocalLLaMA post details a practical move from Ollama/LM Studio-centric flows to llama-swap for multi-model operations. The key value discussed is operational control: backend flexibility, policy filters, and low-friction service management.
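The operational model is a YAML file mapping model names to the backend commands that serve them, with the proxy starting and stopping backends on demand. The fragment below is a sketch from memory of the llama-swap README; field names and macros may differ in current versions, so treat it as illustrative.

```yaml
# Illustrative llama-swap config: each entry maps a model name exposed by
# the OpenAI-compatible proxy to the backend command that serves it.
models:
  "qwen-14b":
    cmd: llama-server --model /models/qwen-14b.gguf --port ${PORT}
    ttl: 300        # unload after 5 minutes idle (assumed field)
  "llama-8b":
    cmd: llama-server --model /models/llama-8b.gguf --port ${PORT}
```

Because the `cmd` is arbitrary, the same proxy can front llama.cpp, vLLM, or any other server, which is the backend flexibility the post highlights.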
Google detailed new global conservation outcomes from SpeciesNet on March 6, 2026. The open-source model identifies nearly 2,500 animal categories from camera-trap imagery and is now being adapted by field teams across multiple regions.
A LocalLLaMA post details recurring Whisper hallucinations during silence and proposes a layered mitigation stack including Silero VAD gating, prompt-history reset, and exact-string blocking.
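The layered stack the post describes can be sketched as a single per-chunk pipeline: gate on voice activity, reset the rolling prompt after silence, and drop known hallucination strings. In this sketch a crude energy threshold stands in for Silero VAD, and `transcribe` is a stub for the real Whisper call; the blocklist entries are illustrative.

```python
# Hedged sketch of layered Whisper hallucination mitigation: VAD gating,
# prompt-history reset on silence, and exact-string blocking.

HALLUCINATION_BLOCKLIST = {"thank you.", "thanks for watching!"}  # illustrative

def has_speech(samples, threshold=0.01) -> bool:
    """Crude energy-based gate standing in for Silero VAD."""
    return sum(s * s for s in samples) / max(len(samples), 1) > threshold

def transcribe(samples, prompt=""):
    """Stub for Whisper: low-energy voiced audio yields a canned hallucination."""
    return "Thank you." if max(samples) < 0.3 else "hello world"

def process_chunk(samples, state):
    if not has_speech(samples):
        state["prompt"] = ""          # reset prompt history on silence
        return None                   # never send silent audio to Whisper
    text = transcribe(samples, prompt=state["prompt"])
    if text.strip().lower() in HALLUCINATION_BLOCKLIST:
        return None                   # exact-string blocking of known artifacts
    state["prompt"] = text            # carry context forward between chunks
    return text
```

The ordering matters: VAD gating removes most silence-triggered hallucinations before Whisper ever runs, and the string blocklist catches the residue that slips through.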