Thermodynamic Computer Generates Images Using Far Less Energy Than AI Models
Original: 'Thermodynamic computer' can mimic AI neural networks — using orders of magnitude less energy to generate images
Overview
A team of researchers has demonstrated that a thermodynamic computer — one that leverages the physics of noise and energy minimization — can generate images in a manner analogous to AI neural networks, but using dramatically less energy. The system generates images from random noise, mirroring diffusion-based AI models without relying on conventional GPU computation.
How It Works
Traditional generative AI models, such as diffusion models, start from pure noise and iteratively refine it into a coherent image through many computational steps. The thermodynamic computer performs a physically analogous process: it starts in a high-noise thermal state and naturally evolves toward a lower-energy configuration that corresponds to a meaningful image. This process is driven by thermodynamic physics rather than programmed computation, resulting in vastly lower energy requirements.
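The relaxation described above can be sketched in software with annealed Langevin dynamics: a state starts as pure noise and drifts down the gradient of an energy function while the injected thermal noise is gradually reduced. The quadratic energy well, the linear cooling schedule, and all parameter values below are illustrative assumptions, not details of the researchers' hardware, which performs this evolution physically rather than by iterated arithmetic.

```python
import numpy as np

def grad_energy(x, target):
    # Gradient of a quadratic energy well E(x) = 0.5 * ||x - target||^2,
    # whose minimum encodes a "stored" pattern (a hypothetical stand-in
    # for whatever configuration corresponds to a meaningful image).
    return x - target

def langevin_relax(target, steps=500, dt=0.01, t0=1.0, seed=0):
    """Relax a high-noise state toward a low-energy configuration.

    Discretized overdamped Langevin dynamics:
        x_{k+1} = x_k - dt * dE/dx + sqrt(2 * dt * T_k) * noise
    with the temperature T_k annealed toward zero, loosely analogous to
    the thermal relaxation a thermodynamic computer performs in hardware.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)   # start from pure noise
    for k in range(steps):
        temp = t0 * (1.0 - k / steps)       # simple linear cooling schedule
        noise = rng.standard_normal(x.shape)
        x = x - dt * grad_energy(x, target) + np.sqrt(2 * dt * temp) * noise
    return x

# A toy "image" of three values, encoded as the energy minimum.
target = np.array([1.0, -1.0, 0.5])
out = langevin_relax(target)
print(out)  # lands near the target, up to residual thermal noise
```

The key difference from a diffusion model is where the iteration happens: here each step is a programmed floating-point update, whereas in the thermodynamic device the noisy descent is the hardware's natural physical evolution, which is where the claimed energy savings come from.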
The Energy Problem
The rapid growth of generative AI has come with significant energy costs. Training and running large AI models — whether for language, images, or video — requires enormous amounts of electricity, contributing to growing data center carbon footprints. Thermodynamic computing represents one of the more radical approaches being explored to address this challenge at the hardware level.
Current Limitations
The technology is still early-stage, and the images produced are not yet comparable in quality or complexity to GPU-based generative models. However, the demonstrated energy difference of orders of magnitude suggests substantial long-term potential for reducing the infrastructure costs and environmental impact of AI at scale.
Related Articles
ByteDance has officially launched Seedance 2.0, its AI video-generation model. Game Science's CEO called it 'the strongest video-generation model on the planet', while ByteDance imposed strict restrictions on real-person content.
EPFL researchers have developed a method that essentially eliminates drift in generative video, enabling stable, high-quality videos lasting several minutes without increased computational demands. The work will be presented at ICLR 2026.
NVIDIA CEO Jensen Huang promised chips the world has never seen at GTC 2026. Industry reports point to the Feynman architecture on TSMC A16 1.6nm-class process with silicon photonics interconnects.