HN cares less that ChatGPT cracked an Erdős problem than about how it got there
Original: Amateur armed with ChatGPT solves an Erdős problem
The Hacker News thread around this story was not really a victory lap for AI beating mathematicians. It was more interested in why working mathematicians were taking the case seriously in the first place. According to Scientific American, 23-year-old Liam Price, who does not have advanced mathematics training, entered an Erdős problem into GPT-5.4 Pro and got back a line of attack that helped crack a question that had been stuck for decades. That framing mattered to HN because it turned the story from spectacle into method.
The problem concerns primitive sets, collections of whole numbers where no member divides another, and the behavior of the Erdős sum on those sets. Scientific American reports that Price posted the solution after a single prompt, and Terence Tao said the interesting part was not brute force but the route: humans had collectively made the same wrong turn early, while the model drew on a formula known in a neighboring area of mathematics that nobody had applied to this question. That is the sort of detail HN notices fast. A different search path is more interesting than another claim that a model can imitate expertise on a benchmark.
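The two objects named above are easy to make concrete. A minimal sketch, in Python, of the standard definitions (the helper names are mine; the "Erdős sum" here follows the usual normalization, the sum of 1/(n log n) over the set's members):

```python
import math

def is_primitive(s):
    """True if no member of s divides another, distinct member."""
    items = sorted(set(s))
    return not any(b % a == 0
                   for i, a in enumerate(items)
                   for b in items[i + 1:])

def erdos_sum(s):
    """Sum of 1/(n log n) over the set; members must be integers >= 2."""
    return sum(1.0 / (n * math.log(n)) for n in s)

primes = [2, 3, 5, 7, 11, 13]
assert is_primitive(primes)         # primes never divide one another
assert not is_primitive([2, 3, 6])  # 2 and 3 both divide 6
print(f"{erdos_sum(primes):.4f}")
```

The primes are the canonical primitive set, and Lichtman's earlier result (which this thread's participants would recognize) is that they maximize this sum among all primitive sets; the snippet only illustrates the definitions, not the proof.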
The thread also moved quickly to the limits. One prominent comment pulled in the actual prompt. Others highlighted the article's most important caveat: the raw output was poor, and experts had to sift through it to understand what the model was trying to say. Jared Lichtman and Tao then shortened and clarified the proof. Community discussion treated that as the real story. Not autonomous theorem proving, but expert-guided extraction of a useful idea from a messy model output. That distinction is why the thread felt more grounded than many previous AI-and-math headlines.
Nobody in the thread seemed eager to overclaim. Tao said the long-term significance is still unclear, and the article is explicit that the jury is still out on how broadly the method will transfer. HN liked that restraint. The strongest takeaway was that a model may have surfaced a promising connection at the right moment, which is a different and more believable claim than saying it has replaced mathematical research. For people watching how AI might fit into science, that is the interesting shift: not magic, but a new way to poke at a problem when human intuition has stalled.
Related Articles
HN did not treat the Erdős headline as proof of autonomous math genius; the thread kept circling back to expert cleanup, problem selection, and whether the new method generalizes.
A heavily discussed HN post focused on Epoch AI’s confirmation that GPT-5.4 Pro helped solve one FrontierMath Open Problems combinatorics challenge, shifting attention from benchmark scores toward expert-verified research workflows.
A March 28 essay on the Hamilton-Jacobi-Bellman equation drew Hacker News attention by showing how continuous-time control theory connects reinforcement learning, optimal control, and diffusion models.