NVIDIA CEO Jensen Huang promised chips "the world has never seen" at GTC 2026. Industry reports point to the Feynman architecture on a TSMC A16 1.6nm-class process with silicon photonics interconnects.
Meta announced a multi-year infrastructure partnership with AMD, targeting up to 6 GW of AMD Instinct GPU capacity for AI workloads. The agreement also aligns roadmaps across silicon, systems, and software, with first deployments expected in the second half of 2026.
Anthropic announced Responsible Scaling Policy (RSP) 3.0 on February 24, 2026. The update keeps the original threshold-based safety logic while formalizing ASL-3 warning thresholds and adding clearer unilateral commitments, a Frontier Safety Roadmap, and structured Risk Reports to improve transparency and accountability around high-consequence misuse risks.
An r/singularity thread drew attention to an arXiv paper studying hallucination-associated neurons in LLMs. The authors report that a very small subset of neurons predicts hallucination behavior and may be causally involved.
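The core idea, that a handful of units out of many can carry most of the predictive signal for a behavior, is easy to demonstrate on synthetic data. The sketch below is illustrative only and is not the paper's method: it ranks neurons by how well each one separates "hallucinated" from "faithful" examples, then predicts the label from the top few alone.

```python
# Illustrative sketch (NOT the paper's method): rank neurons by class
# separation on a binary "hallucinated" label, then predict using only a
# tiny subset. All data here is synthetic; neurons 0-4 carry the signal.
import random

random.seed(0)
N_NEURONS, N_EXAMPLES, K = 200, 500, 5

examples = []
for _ in range(N_EXAMPLES):
    label = random.random() < 0.5          # True = "hallucinated"
    acts = [random.gauss(0.0, 1.0) for _ in range(N_NEURONS)]
    for i in range(K):                     # inject signal into first K neurons
        acts[i] += 2.0 if label else -2.0
    examples.append((acts, label))

def mean_gap(neuron):
    """Absolute gap between the neuron's mean activation on the two classes."""
    pos = [a[neuron] for a, y in examples if y]
    neg = [a[neuron] for a, y in examples if not y]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

# Rank all neurons by class separation and keep the top K candidates.
top = sorted(range(N_NEURONS), key=mean_gap, reverse=True)[:K]

# Predict with just those K neurons: sign of their summed activation.
correct = sum((sum(a[i] for i in top) > 0) == y for a, y in examples)
print(f"top neurons: {sorted(top)}  accuracy: {correct / N_EXAMPLES:.2f}")
```

On this synthetic set the ranking recovers exactly the planted neurons and the K-neuron predictor is near-perfect, which mirrors the paper's headline claim in toy form; establishing causal involvement would additionally require interventions such as ablating or clamping those neurons.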
A Show HN post spotlighted Moonshine Voice, an open-source speech toolkit claiming strong accuracy and low latency across edge and desktop devices. The project positions itself as a practical alternative to larger Whisper deployments for real-time voice apps.
Microsoft announced on February 17, 2026 that it will invest an additional $50 billion over five years to reduce AI inequality across the Global South. The plan combines infrastructure expansion, ecosystem development, and workforce skilling.
OpenAI announced Frontier Alliances on February 23, 2026, a partner-led model for enterprise AI transformation. The program formalizes collaboration across strategy, implementation, and domain workflows.
Anthropic analyzed millions of real Claude interactions and found that the 99.9th-percentile session duration nearly doubled to more than 45 minutes over three months, with software engineering accounting for nearly half of all agentic use.
Anthropic published a new theory explaining why AI assistants like Claude express emotions and use anthropomorphic language, proposing that models select from personas inherited from fictional characters encountered during training.
OpenAI introduced EVMbench, a new benchmark measuring how well AI agents can detect, exploit, and patch high-severity smart contract vulnerabilities in EVM-based blockchains.
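The benchmark's internals aren't described here, but the canonical high-severity bug class it targets, reentrancy, is easy to illustrate. The sketch below models it in plain Python rather than EVM bytecode: the vault pays out before updating its balance record, so a malicious receive callback can re-enter and drain more than it deposited.

```python
# Illustrative Python model of reentrancy, a classic high-severity EVM
# vulnerability class (this is NOT EVMbench code). The vault makes the
# external call *before* zeroing the caller's balance.

class Vault:
    def __init__(self):
        self.balances = {}   # depositor -> credited amount
        self.pot = 0         # total funds actually held

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.pot += amount

    def withdraw(self, who, on_receive):
        amount = self.balances.get(who, 0)
        if amount and self.pot >= amount:
            self.pot -= amount
            on_receive(amount)        # external call happens first...
            self.balances[who] = 0    # ...state is cleared too late

class Attacker:
    def __init__(self, vault):
        self.vault, self.stolen, self.depth = vault, 0, 0

    def on_receive(self, amount):
        self.stolen += amount
        if self.depth < 2:            # re-enter while the balance is stale
            self.depth += 1
            self.vault.withdraw("attacker", self.on_receive)

vault = Vault()
vault.deposit("victim", 100)
vault.deposit("attacker", 10)

atk = Attacker(vault)
vault.withdraw("attacker", atk.on_receive)
print(f"attacker deposited 10, drained {atk.stolen}")  # drains 30
```

The standard patch follows the checks-effects-interactions pattern: zero `self.balances[who]` before calling `on_receive`, so the re-entrant call sees a zero balance. Detecting, exploiting, and then patching exactly this kind of ordering flaw is the detect/exploit/patch loop the benchmark scores.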
Anthropic revealed that Chinese AI labs DeepSeek, Moonshot AI, and MiniMax created over 24,000 fraudulent accounts and generated more than 16 million Claude exchanges to extract its capabilities and improve their own competing models.