Sam Altman Sets New AGI Timeline: 'Most of Humanity's Intellectual Capacity Inside Data Centers by End of 2028'
Original: Sam sets a new date for AGI; "by the end of 2028, most of humanity's intellectual capacity could reside inside data centers rather than outside them"
Altman's New AGI Prediction
OpenAI CEO Sam Altman has issued his most specific AGI timeline yet: by the end of 2028, he suggests, "most of humanity's intellectual capacity could reside inside data centers rather than outside them." The statement sparked massive discussion on r/singularity, garnering over 530 upvotes and hundreds of comments.
Parsing the Language
Altman's phrasing is carefully constructed. The qualifiers — "most," "could," "rather than" — give him rhetorical flexibility. But taking the statement at face value, he's suggesting AI systems may, within approximately three years, exceed the aggregate cognitive output of the human species. This is a remarkable claim even by the ambitious standards of Silicon Valley AI executives.
History of AGI Predictions
AGI timelines have been famously unreliable, repeatedly predicted and repeatedly missed over the past 70 years of AI research. Altman himself has shifted his own timelines more than once. What makes this claim notable is its specificity — a named year (2028), a defined threshold (most of humanity's intellectual capacity), and a concrete mechanism (data centers). It's no longer vague "soon" territory.
Community Reactions
The r/singularity community responded with a mix of excitement, skepticism, and critique. Supporters cite the rapid progress of GPT-4, Claude 3, and emerging multimodal reasoning as evidence that the timeline is plausible. Critics counter that comparing AI to "human intellectual capacity" is a categorical error — humans don't just compute; they embody, experience, and act in the physical world. Several commenters drew unfavorable comparisons to Elon Musk's pattern of bold predictions that rarely materialize on schedule.
What This Signals
Whether or not Altman's prediction proves accurate, it reveals the internal confidence at OpenAI and is likely to accelerate both investment and regulatory attention. With 2028 less than three years away, AGI is no longer a distant philosophical abstraction — it's a near-term business and policy question.
Related Articles
DeepMind CEO Demis Hassabis proposed a concrete AGI benchmark: train an AI with a knowledge cutoff of 1911, then see if it can independently derive general relativity as Einstein did in 1915. This test targets genuine scientific discovery rather than pattern matching.
OpenAI announced on X that Codex Security has entered research preview. The company positions it as an application security agent that can detect, validate, and patch complex vulnerabilities with more context and less noise.
OpenAI said on X on March 9 that it plans to acquire Promptfoo, an AI security platform, and keep the project open source. The deal strengthens OpenAI Frontier’s agentic testing and evaluation stack.