NVIDIA Teases "Surprising" Feynman AI Chip Reveal at GTC 2026
NVIDIA CEO Jensen Huang has promised that the tech world will be "surprised" at GTC 2026, scheduled for March 16-19 in San Jose. Industry analysts believe the next-generation Feynman architecture will be the headline reveal.
According to a TrendForce report dated February 25, 2026, Feynman will be NVIDIA's first chip built on TSMC's A16 (1.6nm-class) node with Super Power Rail technology. Key expected specs:
- Process: TSMC A16 (1.6nm-class, Super Power Rail)
- Memory: HBM4 / HBM4E
- Interconnects: Silicon photonics for optical rack-scale links
- I/O die: Intel 14A or 18A + EMIB packaging (rumored)
GTC will also bring updates on the Vera Rubin platform (R100), which is already in mass production on a 3nm node. Samsung shipped its first HBM4 on February 12, while SK hynix is expected to supply about two-thirds of NVIDIA's HBM4 demand in 2026.
Feynman production is projected to begin in 2028, with customer shipments following in 2029-2030. GTC 2026 is therefore a roadmap preview rather than an immediate launch, but the architecture it previews is expected to define the AI accelerator landscape for the next decade.
Source: TrendForce
Related Articles
Hacker News latched onto the RAM shortage because the uncomfortable link is physical: HBM demand from AI data centers is now shaping memory prices for phones, laptops, and handhelds.
NVIDIA released Nemotron-Personas-Korea on Hugging Face, a dataset of 7 million synthetic personas grounded in Korean public statistics. The dataset matters because agent localization is no longer just translation; it requires regional context, honorifics, occupations, and public-service knowledge.
Google has redesigned its TPU roadmap around agent workloads instead of one-size-fits-all acceleration. TPU 8t targets giant training runs with nearly 3x per-pod compute and 121 exaflops, while TPU 8i focuses on low-latency inference with 19.2 Tb/s of interconnect bandwidth and up to 5x lower on-chip latency for collective operations.