r/singularity loved the spectacle, but the useful tension was in the caveats: autonomous navigation, race rules, battery and cooling support, and whether endurance is the right comparison.
#robotics
r/singularity reacted because the clip made humanoid robotics feel less like a polished demo and more like motors, heat, and maintenance. A Beijing robot half-marathon pit stop showed ice cooling the battery and lubricant going onto joints, turning the thread into jokes plus real hardware curiosity.
AGIBOT used APC 2026 to put a full embodied AI stack on the table: 5 robot platforms, 8 AI products, and a data system for training physical-world models. The notable context is scale: the company says it rolled out its 10,000th robot in March 2026.
r/singularity reacted because the video made humanoid progress feel physical, not just benchmarked. A Unitree H1 test run for the April 19 Beijing humanoid robot half-marathon showed a visible transition from jogging into faster running.
HN liked the duct-tape energy of AutoProber, but the thread quickly moved from demo awe to safety and precision. A CNC, microscope, oscilloscope, and agent workflow can be compelling; it also makes every millimeter and stop condition matter.
Physical Intelligence says π0.7 shows early compositional generalization, following new language commands and performing tasks not seen in training. In laundry folding, it matched expert teleoperators’ zero-shot success on a UR5e setup without task data for that robot.
r/singularity latched onto two things at once: the claim of one humanoid robot every 30 minutes, and the visible question of how automated the factory actually is. The Leju Robotics clip fed the robots-building-robots imagination, while the top comment immediately pointed at human hands in the assembly flow.
Why it matters: NVIDIA is aiming generative video research at simulation-ready 3D environments rather than short clips. The tweet says Lyra 2.0 maintains per-frame 3D geometry and uses self-augmented training, while the project page shows outputs as Gaussian splats and meshes that can be exported to Isaac Sim.
r/singularity did not read an 88% fail rate as pure failure; many users saw the same number as a 12% foothold, while others warned that benchmark age and missing robot platforms matter.
r/singularity reacted less to another humanoid walking clip and more to the fault-tolerance angle. The Figure 03 balance-policy demo asks whether a robot can stay useful, or at least safe, after partial hardware failure.
HN focused less on the model drop and more on the hard robotics question: how fast does reasoning need to be before it is useful in the physical world? Google DeepMind frames Gemini Robotics-ER 1.6 around spatial reasoning, multi-view understanding, success detection, and instrument reading, while commenters zoomed in on gauge-reading demos, latency, and deployment reality.
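The latency question can be made concrete with a back-of-envelope budget. The numbers below (control-loop rate, model round-trip time) are hypothetical illustrations, not figures from Google DeepMind; the sketch just shows why round-trip latency decides whether cloud reasoning can sit in a control loop or must plan above it.

```python
# Back-of-envelope budget for putting a remote reasoning model in a robot
# control loop. All numbers are illustrative assumptions, not measured values.

def reasoning_budget(control_hz: float, model_latency_s: float) -> dict:
    """How many control ticks elapse while one model call is in flight."""
    tick_s = 1.0 / control_hz
    return {
        "tick_ms": tick_s * 1000,                      # one control tick
        "ticks_per_call": model_latency_s / tick_s,    # ticks blocked per call
        "max_calls_per_s": 1.0 / model_latency_s,      # serial query ceiling
    }

# A hypothetical 100 Hz joint controller with a 2 s cloud reasoning call:
budget = reasoning_budget(control_hz=100, model_latency_s=2.0)
print(budget)
```

At those assumed numbers, roughly 200 control ticks pass per call, which is why commenters frame slow models as planners that issue goals while a fast local policy handles servoing.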
Google DeepMind's latest robotics model pushes a hard industrial task from 23% to 93% accuracy when agentic vision is enabled, putting a concrete number on embodied reasoning progress. The April 14 release also puts Gemini Robotics-ER 1.6 into the Gemini API and Google AI Studio, so developers can test the upgrade immediately.
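Read as error rates rather than accuracies, the jump is starker still; a short calculation makes the comparison explicit (the 23% and 93% figures are the reported numbers above, nothing else is assumed).

```python
# Convert the reported accuracies into error rates to size the improvement.
baseline_acc, agentic_acc = 0.23, 0.93
baseline_err = 1 - baseline_acc  # 77% error without agentic vision
agentic_err = 1 - agentic_acc    # 7% error with agentic vision
reduction = baseline_err / agentic_err
print(f"error falls from {baseline_err:.0%} to {agentic_err:.0%}, "
      f"~{reduction:.0f}x fewer failures")
```

Framing the gain as an ~11x drop in failures, rather than a 70-point accuracy bump, is the cleaner way to compare against future releases.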