Cohere announced Transcribe on March 26, 2026 as an open-source speech recognition model. Cohere says the 2B Conformer-based system supports 14 languages, tops the Hugging Face Open ASR Leaderboard with 5.42 average WER, ships under Apache 2.0, and is available for download, API use, and Model Vault deployment.
The GitHub repo and arXiv paper drew attention because they present self-improvement as editable code rather than a slogan. A task agent and a meta agent live inside one program, and the improvement procedure itself can be rewritten.
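The one-program idea can be sketched as follows. This is a minimal illustration, not the repo's actual code: all names (`Program`, `task_agent`, `meta_agent`, the toy solver and improver) are hypothetical. The point it shows is that the improvement procedure is an ordinary value inside the program, so the meta agent can replace it like any other function.

```python
# Hypothetical sketch of "self-improvement as editable code": a task agent
# does work, a meta agent scores it, and the improvement procedure itself is
# just another callable the meta step is allowed to rewrite.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Program:
    # Task policy and improvement procedure live in one program; both are
    # plain callables and both can be replaced at runtime.
    solve: Callable[[str], str]
    improve: Callable[["Program", float], "Program"]

def task_agent(prog: Program, task: str) -> str:
    return prog.solve(task)

def meta_agent(prog: Program, score: float) -> Program:
    # The meta step invokes the (editable) improvement procedure.
    return prog.improve(prog, score)

# Toy instantiation: when the score is low, the improver swaps in a new solver.
def naive_solve(task: str) -> str:
    return task.upper()

def naive_improve(prog: Program, score: float) -> Program:
    if score < 0.5:
        return Program(solve=lambda t: t.upper() + "!", improve=prog.improve)
    return prog

prog = Program(solve=naive_solve, improve=naive_improve)
print(task_agent(prog, "hello"))    # HELLO
prog = meta_agent(prog, score=0.2)  # low score triggers a rewrite
print(task_agent(prog, "hello"))    # HELLO!
```

Because `improve` is itself a field of `Program`, a more ambitious improver could return a program with a *different* `improve`, which is the self-referential step the paper emphasizes.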
George Larson's post stood out on Hacker News less as a demo and more as a deliberate agent architecture: tiny runtime, public/private separation, tiered inference, and explicit blast-radius control.
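The tiered-inference and blast-radius ideas from the post can be sketched as below. This is a hypothetical illustration under assumed names (`tiered_infer`, the toy `weak`/`strong` tiers, the confidence threshold), not Larson's implementation: a cheap model answers first, a larger one is consulted only when the answer looks low-confidence, and an escalation cap bounds how much work a single request can trigger.

```python
# Hedged sketch of tiered inference with explicit blast-radius control.
from typing import Callable, Tuple

def tiered_infer(prompt: str,
                 tiers: list,                      # callables: prompt -> (answer, confidence)
                 confident: Callable[[float], bool],
                 max_escalations: int = 2) -> Tuple[str, int]:
    """Try tiers in order; stop once confident or the escalation cap is hit."""
    escalations = 0
    answer, conf = tiers[0](prompt)
    for model in tiers[1:]:
        if confident(conf) or escalations >= max_escalations:
            break
        answer, conf = model(prompt)  # escalate to the next, costlier tier
        escalations += 1
    return answer, escalations

# Toy tiers: a weak model that reports low confidence, and a strong one.
weak = lambda p: ("maybe: " + p, 0.3)
strong = lambda p: ("answer: " + p, 0.9)

ans, hops = tiered_infer("2+2?", [weak, strong], confident=lambda c: c >= 0.7)
print(ans, hops)  # answer: 2+2? 1
```

The `max_escalations` cap is the blast-radius idea in miniature: a misbehaving request can consume at most a bounded number of expensive calls.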
Anthropic said on Mar 11, 2026 that it is launching The Anthropic Institute to study the biggest economic, security, legal, and societal questions raised by frontier AI. The effort is meant to turn observations from inside a model builder into public research and external dialogue.
GitHub updated its Privacy Statement and Terms of Service on March 25, 2026 to allow training and product improvement on Copilot Free, Pro, and Pro+ interaction data. The changes take effect on April 24, while Copilot Business and Enterprise accounts are excluded.
On March 25, 2026, OpenAI launched a public Safety Bug Bounty focused on AI abuse and safety risks. The new track complements its security program by accepting AI-specific failures such as prompt injection, data exfiltration, and harmful agent behavior.
A high-signal LocalLLaMA thread formed around Voxtral TTS because Mistral paired low latency, multilingual support, and open weights in a part of the stack many teams still keep closed.
Thinking Machines Lab said it signed a multi-year strategic partnership with NVIDIA to deploy at least one gigawatt of next-generation Vera Rubin systems. The companies also plan to co-design training and serving systems and widen access to frontier AI and open models for enterprises, research institutions, and the scientific community.
Google DeepMind said on March 26, 2026 that it is releasing research on how conversational AI might exploit emotions or manipulate people into harmful choices. The company says it built the first empirically validated toolkit to measure harmful AI manipulation, based on nine studies with more than 10,000 participants across the UK, the US, and India.
ARC Prize says ARC-AGI-3 is an interactive reasoning benchmark that measures planning, memory compression, and belief updating inside novel environments rather than static puzzle answers. The launch gained traction on Hacker News because it gives agent builders a more behavior-first way to compare systems against humans.
Anthropic says its dispute with the Department of War centers on two requested exceptions: mass domestic surveillance of Americans and fully autonomous weapons. The company also says any formal designation should not affect commercial customers or non-DoW work.
Anthropic and Rwanda say a new three-year agreement will expand Claude into national health, education, and public-sector workflows. The deal pairs deployment with training, API credits, and local capacity building rather than simple tool access.