Valve used its March 6, 2026 Steam Year In Review 2025 post to reaffirm that Steam Machine, Steam Frame, and a redesigned Steam Controller are still planned for 2026. The company acknowledged memory and storage shortages, but said all three products will ship this year.
Apple launched new MacBook Pro models featuring the M5 Pro and M5 Max chips, delivering up to a 4x improvement in AI performance over the previous generation. The M5 Max packs 614 GB/s of memory bandwidth and Neural Accelerators built into every GPU core.
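Memory bandwidth is the headline figure for local LLM inference, since decode speed is roughly bandwidth-bound: each generated token reads every weight once. A minimal back-of-envelope sketch (the 8B/4-bit model choice is a hypothetical example, not from the article):

```python
def est_tokens_per_sec(bandwidth_gb_s: float, params_b: float, bits_per_weight: int) -> float:
    """Theoretical ceiling on decode tokens/s: bandwidth divided by bytes read per token."""
    model_gb = params_b * bits_per_weight / 8  # weight footprint in GB
    return bandwidth_gb_s / model_gb

# Hypothetical: an 8B-parameter model quantized to 4 bits on the M5 Max's 614 GB/s
rate = est_tokens_per_sec(614, 8, 4)  # ~153 t/s upper bound; real throughput is lower
```

Actual throughput lands well below this ceiling once KV-cache reads, activation traffic, and compute overhead are accounted for.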
Researchers reverse-engineered Apple's M4 Neural Engine, finding that it is a graph-execution engine rather than a traditional processor, that hidden APIs can bypass CoreML, and that Apple's '38 TOPS' spec is misleading.
AI data centers are absorbing global DRAM and HBM supply, a squeeze projected to push average smartphone selling prices up 14% to a record $523 in 2026, eliminate sub-$100 handsets entirely, and trigger the sharpest shipment decline on record.
Microsoft's Shader Execution Reordering (SER) technology is delivering dramatic performance gains on modern GPUs, achieving up to 90% improvement on Intel Arc B-Series and 80% on NVIDIA Blackwell GPUs, according to TechPowerUp.
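The intuition behind SER can be illustrated without the D3D12/HLSL API itself (this sketch is a conceptual analogy, not Microsoft's implementation): GPUs shade rays in lockstep groups, so sorting rays by the material they hit turns divergent shading into coherent batches.

```python
from itertools import groupby

# Each ray records (material hit, ray id); unsorted, a warp mixes shaders.
rays = [("glass", 0), ("metal", 1), ("glass", 2), ("metal", 3)]

# Reorder so rays hitting the same material shade together
reordered = sorted(rays, key=lambda r: r[0])
batches = [(mat, [rid for _, rid in grp]) for mat, grp in groupby(reordered, key=lambda r: r[0])]
# batches -> [("glass", [0, 2]), ("metal", [1, 3])]: each batch runs one shader coherently
```

On real hardware the reorder happens on-chip between ray traversal and shading, which is why the gains vary so much by GPU architecture.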
Analyst firm Gartner predicts that the sub-$500 entry-level PC segment will vanish entirely by 2028. The firm also forecasts a 10.4% decline in worldwide PC shipments in 2026, signaling a major shift toward higher-end, AI-capable hardware.
A remarkable 13-month comparison: running frontier-level DeepSeek R1 at ~5 tokens/second cost $6,000 in early 2025. Today, a significantly stronger model runs at the same speed on a $600 mini PC, and even more capable setups reach 17-20 t/s.
NVIDIA revealed detailed specs for Vera Rubin NVL72. Each Rubin GPU delivers 50 PFLOPS inference (5x Blackwell GB200), 22 TB/s HBM4 bandwidth (2.8x Blackwell), and cuts inference cost per million tokens by 10x. Ships H2 2026.
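Dividing out the quoted multipliers gives a quick sanity check on the implied Blackwell GB200 per-GPU baseline (a derivation from the article's numbers, not independently confirmed specs):

```python
rubin_pflops, rubin_bw_tb_s = 50, 22          # per-GPU inference PFLOPS, HBM4 TB/s

blackwell_pflops = rubin_pflops / 5           # 5x claim implies 10 PFLOPS on GB200
blackwell_bw_tb_s = rubin_bw_tb_s / 2.8       # 2.8x claim implies ~7.9 TB/s HBM
```

Both implied figures are in the ballpark of publicly quoted GB200 numbers, so the multipliers are internally consistent.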
Andrej Karpathy highlights the fundamental memory-versus-compute trade-off in LLMs: fast but small on-chip SRAM versus large but slow off-chip DRAM. He calls optimizing this the most intellectually rewarding puzzle in AI infrastructure today, pointing to NVIDIA's $4.6T market cap as evidence of what solving it is worth.
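The trade-off Karpathy describes is often framed with the roofline model: a kernel is memory-bound whenever its arithmetic intensity (FLOPs per byte moved from DRAM) falls below the hardware's compute-to-bandwidth ratio. A minimal sketch with hypothetical hardware numbers (not any specific GPU):

```python
def attainable_flops(peak_flops: float, bandwidth_b_s: float, intensity: float) -> float:
    """Roofline: achieved FLOP/s is capped by compute or by bandwidth * intensity."""
    return min(peak_flops, bandwidth_b_s * intensity)

# Hypothetical accelerator: 1e15 FLOP/s peak, 3e12 B/s DRAM bandwidth
# -> ridge point at ~333 FLOP/byte; kernels below that are memory-bound.
# Batch-1 LLM decode sits near 2 FLOP/byte (one multiply-add per weight byte pair),
# so it achieves only a tiny fraction of peak:
decode = attainable_flops(1e15, 3e12, 2)  # 6e12 FLOP/s, under 1% of peak
```

This is why keeping hot data in SRAM (fusing kernels, batching, caching KV in faster tiers) dominates LLM-serving optimization.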
A Reddit thread spotlighted LLmFit, a CLI/TUI tool that recommends runnable models for a given hardware profile, though commenters questioned its data quality and the validity of its recommendations.
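The core question such a tool answers can be sketched as a simple fit check (a hypothetical re-implementation for illustration, not LLmFit's actual logic): do the quantized weights plus KV cache fit in available memory, with headroom?

```python
def fits(params_b: float, bits: int, kv_cache_gb: float, mem_gb: float,
         overhead: float = 1.1) -> bool:
    """True if quantized weights + KV cache (plus ~10% runtime overhead) fit in memory."""
    weights_gb = params_b * bits / 8
    return (weights_gb + kv_cache_gb) * overhead <= mem_gb

fits(8, 4, 1.0, 16)   # 8B @ 4-bit (~4 GB) + 1 GB KV cache in 16 GB -> True
fits(70, 4, 4.0, 24)  # 70B @ 4-bit needs ~35 GB -> False on a 24 GB card
```

The commenters' validity concerns make sense here: real fit depends on context length, quantization format, and runtime, which a static table can easily get wrong.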
At GTC 2026, NVIDIA CEO Jensen Huang promised chips the world has never seen. Industry reports point to the Feynman architecture, built on TSMC's A16 1.6nm-class process with silicon photonics interconnects.