Alphabet's robotics software company Intrinsic moved from 'Other Bets' into Google proper on February 25, gaining direct access to Gemini models, Google Cloud, and DeepMind research. The strategy mirrors Android: one platform running across FANUC, KUKA, and Universal Robots hardware.
AI
Meta struck a multiyear agreement to purchase up to $100B in AMD MI450 AI chips covering ~6GW of data center power demand. The deal includes warrants for 160M AMD shares and accelerates Meta's 'personal superintelligence' AI roadmap.
MLU-Explain's interactive visualization demonstrates why decision trees remain one of the most powerful and interpretable tools in ML, showing how simple nested if-else rules form the foundation of modern ensemble methods.
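The "nested if-else" framing can be made concrete with a toy classifier. This is a minimal illustrative sketch, not taken from MLU-Explain: the feature names and thresholds below are hypothetical, chosen only to show that a fitted decision tree is literally a cascade of threshold tests.

```python
# A tiny hand-written "decision tree": each split is one threshold test.
# Thresholds and labels are hypothetical, loosely iris-flavored.
def predict(petal_length: float, petal_width: float) -> str:
    if petal_length < 2.5:     # root split: short petals
        return "setosa"
    if petal_width < 1.75:     # second split: narrow petals
        return "versicolor"
    return "virginica"         # leaf reached when both tests fail

print(predict(1.4, 0.2))   # -> setosa
print(predict(4.5, 1.3))   # -> versicolor
print(predict(5.5, 2.0))   # -> virginica
```

Ensemble methods such as random forests and gradient boosting build on exactly this structure, averaging or summing the outputs of many such trees.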
A developer built a demo simulating ad-supported AI chat, inserting ads mid-response to illustrate the likely UX of a truly free AI model. The demo sparked a 248-point discussion on Hacker News about AI monetization models.
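The mechanics of mid-response ad insertion can be sketched as a wrapper around a streaming response. This is a hypothetical illustration of the general idea, not the demo's actual code: every `ad_interval` chunks of model output, an ad break is spliced into the stream.

```python
# Hypothetical sketch: interleave ads into a streamed chat response.
def ad_supported_stream(chunks, ads, ad_interval=3):
    ad_iter = iter(ads)
    for i, chunk in enumerate(chunks, start=1):
        yield chunk
        if i % ad_interval == 0:        # ad break every N chunks
            try:
                yield f"[AD] {next(ad_iter)}"
            except StopIteration:
                pass                    # out of ad inventory; keep streaming

response = list(ad_supported_stream(
    ["Paris ", "is the ", "capital ", "of ", "France."],
    ["Try SkyMiles Plus"],
))
# The ad lands after the third chunk, mid-sentence: exactly the jarring
# UX the demo set out to illustrate.
```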
While AI tools have accelerated code production, they have simultaneously expanded engineering responsibilities and raised unspoken expectations, driving burnout and an identity crisis among developers.
Anthropic has launched a memory import feature that lets users bring their preferences and context from other AI providers to Claude with a single copy-paste, available on all paid plans.
A developer has implemented a UEFI application that runs LLM inference directly from boot without any operating system or kernel, using zero-dependency C code for the entire stack from tokenizer to inference engine.
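The control flow such a bare-metal runner must implement, from prompt token to generated text, can be sketched in a few lines. The real project does this in freestanding, zero-dependency C with actual transformer weights; here a hard-coded bigram lookup table stands in for the model, purely to show the tokenize → forward pass → sample → stop loop.

```python
# Toy stand-in for an LLM decode loop. A real implementation replaces the
# dict lookup with a transformer forward pass over model weights.
BIGRAM = {"<s>": "hello", "hello": "world", "world": "<eos>"}

def generate(max_tokens: int = 8) -> str:
    tok, out = "<s>", []
    for _ in range(max_tokens):
        tok = BIGRAM.get(tok, "<eos>")  # "forward pass": next-token lookup
        if tok == "<eos>":              # stop condition
            break
        out.append(tok)
    return " ".join(out)

print(generate())  # -> hello world
```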
Google has launched Nano Banana 2, a new AI image generation model combining Pro-level capabilities with Flash-level speed, featuring advanced world knowledge and subject consistency.
Anthropic CEO Dario Amodei confirmed in a CBS interview that the company has built custom Claude models for the military that are one to two generations ahead of consumer versions, deployed on classified cloud infrastructure.
NVIDIA revealed detailed specs for Vera Rubin NVL72. Each Rubin GPU delivers 50 PFLOPS of inference compute (5x Blackwell GB200) and 22 TB/s of HBM4 bandwidth (2.8x Blackwell), and the platform cuts inference cost per million tokens by 10x. Ships H2 2026.
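As a back-of-envelope sanity check, the quoted multipliers imply the following per-GPU Blackwell GB200 baselines (derived from NVIDIA's marketing figures above, not independently measured specs):

```python
# Baselines implied by the quoted Rubin-vs-Blackwell multipliers.
rubin_pflops, rubin_hbm_tbs = 50.0, 22.0

blackwell_pflops = rubin_pflops / 5.0      # 5x claim -> 10 PFLOPS
blackwell_hbm_tbs = rubin_hbm_tbs / 2.8    # 2.8x claim -> ~7.9 TB/s

print(blackwell_pflops, round(blackwell_hbm_tbs, 1))  # 10.0 7.9
```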
The India AI Impact Summit drew pledges exceeding $250 billion, with Adani Group committing $100B for renewable-powered AI data centers and Reliance Industries pledging $110B over seven years. Microsoft also committed $50B for Global South AI infrastructure.
OpenAI closed a $110B round led by Amazon ($50B), Nvidia ($30B), and SoftBank ($30B). ChatGPT now reaches 900 million weekly active users and 50 million paying subscribers.