Sam Altman Compares AI Training Energy to the Cost of Raising a Human
Original: SAM ALTMAN: "People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart."
Altman on AI Energy: A Surprising Comparison
OpenAI CEO Sam Altman responded to growing criticism of AI's energy footprint with an unexpected analogy, in a remark whose post earned over 3,100 upvotes on Reddit's r/singularity.
What Altman Said
"People talk about how much energy it takes to train an AI model… But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart."
Context
The remark comes amid intensifying scrutiny of AI data centers' energy consumption and carbon footprints. With major AI companies building enormous new data centers and striking deals with nuclear energy providers, the energy question has become a central debate in the industry.
Community Reaction
Responses were divided. Some found the analogy a novel framing that puts AI energy costs in perspective. Critics countered that the comparison breaks down for several reasons: a trained AI model can be copied infinitely, total energy consumption is orders of magnitude larger once millions of users are served, and AI training often draws on fossil-fuel-heavy grids. The analogy also sidesteps whether the energy spent is justified by the benefit delivered.
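The "orders of magnitude" claim is easy to sanity-check with rough numbers. A minimal back-of-envelope sketch, assuming a 2,000 kcal/day diet over Altman's 20 years and the widely cited ~1,287 MWh estimate for a single GPT-3 training run (both are round external figures, not measurements from this article):

```python
# Back-of-envelope: food energy to "train" a human vs. one LLM training run.
KCAL_PER_DAY = 2000        # assumed average adult diet
J_PER_KCAL = 4184          # exact thermochemical conversion
DAYS = 20 * 365            # Altman's "20 years"
J_PER_MWH = 3.6e9          # joules per megawatt-hour

human_mwh = KCAL_PER_DAY * J_PER_KCAL * DAYS / J_PER_MWH  # ~17 MWh

GPT3_TRAINING_MWH = 1287   # commonly cited external estimate, not official

ratio = GPT3_TRAINING_MWH / human_mwh
print(f"Human, 20 years of food: {human_mwh:.1f} MWh")
print(f"One GPT-3 training run:  ~{ratio:.0f}x that")
```

On these assumptions a single training run costs roughly 75 humans' worth of 20-year food energy, and the gap widens further once inference for millions of users is counted, which is precisely the critics' scale point.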
Whether or not you find the comparison apt, Altman's remarks highlight the growing tension between AI's ambitions and its environmental costs — a conversation that will only intensify as models continue to scale.