r/singularity loved the spectacle, but the useful tension was in the caveats: autonomous navigation, race rules, battery and cooling support, and whether endurance is the right comparison.
r/singularity reacted because the clip made humanoid robotics feel less like a polished demo and more like motors, heat, and maintenance. A pit stop at a Beijing robot half-marathon showed ice packed on the battery and lubricant applied to the joints, turning the thread into a mix of jokes and genuine hardware curiosity.
AGIBOT used APC 2026 to put a full embodied AI stack on the table: 5 robot platforms, 8 AI products, and a data system for training physical AI models. The notable part is the scale context: the company says it rolled out its 10,000th robot in March 2026.
r/singularity reacted because the video made humanoid progress feel physical, not just benchmarked. A Unitree H1 test run for the April 19 Beijing humanoid robot half-marathon showed a visible transition from a jog into a faster run.
Physical Intelligence says π0.7 shows early compositional generalization, following new language commands and performing tasks not seen in training. In laundry folding, it matched expert teleoperators' success rate zero-shot on a UR5e setup, with no task data collected for that robot.
r/singularity latched onto two things at once: the claim of one humanoid robot every 30 minutes, and the visible question of how automated the factory actually is. The Leju Robotics clip fed the robots-building-robots imagination, while the top comment immediately pointed at human hands in the assembly flow.
r/singularity did not read an 88% fail rate as pure failure; many users saw the same number as a 12% foothold, while others warned that benchmark age and missing robot platforms matter.
r/singularity reacted less to another humanoid walking clip and more to the fault-tolerance angle. The Figure 03 balance-policy demo asks whether a robot can stay useful, or at least safe, after partial hardware failure.
Google DeepMind and Boston Dynamics are showing a clearer bridge between foundation models and robot APIs. The demo exposes Spot's locomotion, camera, and grasping capabilities to the model as tools, then lets Gemini Robotics plan from plain-English tasks.
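A minimal sketch of what that tool interface could look like, using the public google-genai Python SDK's automatic function calling. The tool names, their stub bodies, and the model id are assumptions for illustration, not the actual DeepMind or Boston Dynamics interface:

```python
# Hypothetical sketch: exposing robot primitives to a Gemini model as tools.
# move_to / take_photo / grasp are invented stubs, not the real Spot SDK.
from google import genai
from google.genai import types

def move_to(x: float, y: float) -> str:
    """Walk the robot to (x, y) in meters in the map frame (stub)."""
    # A real system would call the robot's locomotion API here.
    return f"arrived at ({x}, {y})"

def take_photo(camera: str = "front") -> str:
    """Capture an image from a named camera and return a handle (stub)."""
    return f"photo:{camera}:0001"

def grasp(object_description: str) -> str:
    """Attempt to grasp the described object with the arm (stub)."""
    return f"grasped {object_description}"

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Passing Python callables as tools makes the SDK run an automatic
# plan -> call tool -> observe result -> continue loop for us.
response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # assumed id; check the current model list
    contents="Find the red valve in the pump room and photograph it.",
    config=types.GenerateContentConfig(tools=[move_to, take_photo, grasp]),
)
print(response.text)
```

The design point the demo makes is that the robot side stays a thin set of typed primitives while the language model owns the sequencing.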
HN focused less on the model drop and more on the hard robotics question: how fast does reasoning need to be before it is useful in the physical world? Google DeepMind frames Gemini Robotics-ER 1.6 around spatial reasoning, multi-view understanding, success detection, and instrument reading, while commenters zoomed in on gauge-reading demos, latency, and deployment reality.
Google DeepMind's latest robotics model pushes a hard industrial task from 23% to 93% accuracy when agentic vision is enabled, putting a concrete number on embodied reasoning progress. The April 14 release also puts Gemini Robotics-ER 1.6 into the Gemini API and Google AI Studio, so developers can test the upgrade immediately.
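Since the release puts the model in the Gemini API, a first test could look like the sketch below, again via the google-genai Python SDK. The model id and the local gauge image are assumptions, so check the model list in Google AI Studio before running:

```python
# Minimal sketch of an instrument-reading query against the Gemini API.
from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment
gauge = Image.open("pressure_gauge.jpg")  # hypothetical local test image

response = client.models.generate_content(
    model="gemini-robotics-er-1.6",  # assumed id; verify in Google AI Studio
    contents=[gauge, "Read the pressure gauge. Report the value and units."],
)
print(response.text)
```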
Google DeepMind is pushing embodied reasoning closer to deployable robotics, not just lab demos. In the linked thread and blog post, Gemini Robotics-ER 1.6 reaches 93% on instrument reading with agentic vision and improves injury-risk detection in video by 10% over Gemini 3.0 Flash.