Hacker News latched onto this paper because it was not selling a new benchmark or model but making a bigger claim: deep learning may finally be mature enough for a real scientific theory. That mix of excitement and skepticism kept the thread moving.
HN did not treat the Erdős headline as proof of autonomous math genius; the thread kept circling back to expert cleanup, problem selection, and whether the new method generalizes.
r/MachineLearning pushed this paper up because it did not promise a miracle. It argued that deep learning theory is finally accumulating enough converging evidence to resemble a genuine scientific program, and commenters liked the paper's concrete framing more than another grand AI manifesto.
r/MachineLearning found the 1,200-paper list useful, but the thread immediately separated “has a link” from “can reproduce the result.” Comments pointed to missing papers, 404s, and the gap between public code and runnable research.
The paper drew attention because it challenges today's data-hungry training paradigm, but commenters quickly probed the comparison to how children learn.
OpenAI put GPT-Rosalind into research preview for qualified life-science teams, pairing a domain model with a Codex plugin that connects to more than 50 tools and data sources. The strongest signal is not the branding: OpenAI says best-of-ten submissions ranked above the 95th percentile of human experts on one Dyno Therapeutics RNA prediction task.
r/MachineLearning reacted because the sample was small but painfully familiar: one user said 4 of 7 paper claims they checked this year did not reproduce, with 2 still sitting as unresolved GitHub issues. The comments moved from resignation about reviewers not running code to concrete demands for submission-time reproducibility reports.
OpenAI is moving model specialization into scientific work rather than generic chat. GPT-Rosalind is framed for protein reasoning, chemical reasoning, genomics, biochemistry, and tool use, with access starting as a research preview for qualified customers including Amgen and Moderna.
Why it matters: NVIDIA is turning quantum calibration and error correction into an open model-and-tooling stack instead of a lab-only workflow. The April 14 tweet framed Ising as an open suite, and NVIDIA’s technical post says Ising Calibration 1 scored 14.5% above GPT-5.4 and 3.27% above Gemini 3.1 Pro on QCalEval.
NVIDIA is turning quantum chip calibration and error correction into an open AI stack, with one model family that beats GPT-5.4 on QCalEval and another that speeds decoding by 2.25x. If those gains travel outside NVIDIA's own workflow, one of quantum computing's nastiest software bottlenecks just moved closer to something teams can actually deploy.
On April 3, 2026, JAMA highlighted a multisite study finding that AI scribe adoption across 5 academic centers was associated with 13.4 fewer EHR minutes, 16.0 fewer documentation minutes, and 0.49 more visits per week. The effect was modest overall but larger for primary care, advanced practice clinicians, women, and heavier users.
OpenAI says ChatGPT is already being used at research scale across science and mathematics. In its January 2026 report, the company says advanced science and math usage reached nearly 8.4 million weekly messages from roughly 1.3 million weekly users, with early evidence that GPT-5.2 is contributing to serious mathematical work.