Google is no longer treating AI infrastructure as a one-chip compromise. By splitting its eighth-generation TPU family into a training-focused 8t and an inference-focused 8i, it is redesigning the stack around the latency, memory, and power pressures created by AI agents.
#chips
AMD ($AMD) climbed more than 12% after Intel's Q1 beat and above-consensus Q2 outlook pushed investors to reprice CPU demand in the AI buildout. CNBC said the move also followed a D.A. Davidson upgrade to buy with a $375 target, implying about 22% upside from the prior close.
TNW reports that Google is discussing two AI chips with Marvell: a memory processing unit and an inference-focused TPU. No contract is signed yet, but the talks show how serving models, not just training them, is driving custom silicon strategy.
Meta and Arm say they will co-develop multiple generations of AI-focused data center CPUs, starting with the Arm AGI CPU. Meta says the program is meant to raise performance per rack, improve efficiency, and extend its custom silicon stack beyond accelerators alone.
Meta said on March 11, 2026 that it is accelerating its in-house MTIA roadmap, developing and deploying four new generations of custom chips, from MTIA 300 through MTIA 500, within the next two years. The company is positioning MTIA as a central part of its AI infrastructure strategy, using custom silicon to push harder on ranking, recommendation, and especially GenAI inference economics at Meta scale.
ASML has announced a breakthrough in EUV light source technology that it says could yield 50% more semiconductor chips from the same wafer by 2030.