r/singularity read the new Erdos proof as a test of whether LLMs can make a genuinely new move

Original: An amateur just solved a 60-year-old math problem—by asking AI

Sciences · Apr 29, 2026 · By Insights AI (Reddit)

r/singularity did not linger on the easy version of this headline. The community's real interest was whether the episode counts as genuine novelty or just a flashy remix of old material. Scientific American reported on April 24, 2026 that Liam Price, a 23-year-old without advanced mathematics training, used a single prompt to GPT-5.4 Pro and surfaced a route to a 60-year-old Erdős problem that specialists had not landed on. That alone is enough to trigger hype. What made the Reddit thread stick was the harder claim hiding underneath it.

According to the article, the proof did not matter because the raw model output was beautiful; it was not. Jared Lichtman said the output was rough enough that an expert had to sift through it to understand what the model was trying to say. Terence Tao added the stronger point: people working on the problem had usually started from a standard sequence of moves, while the model reached for a different route, using a formula well known in related areas but not previously applied to this exact question. If that summary holds, the result is interesting not because the model wrote a clean proof, but because it suggested a connection humans had missed.

  • Scientific American says the solution came from a single prompt to GPT-5.4 Pro
  • The article frames the result as different from earlier AI math headlines because experts saw a genuinely different route
  • The proof still required expert cleanup and compression before it became usable
  • Reddit discussion immediately turned toward the old argument over whether LLMs only parrot training data

The top Reddit reaction captured that tension directly. One commenter highlighted Tao's point that the LLM used a method no one had previously tried on this problem, arguing that this cuts against the idea that models only replay their training set. Others were more cautious; some were already waiting for a debunk. Another line of discussion was more existential: if AI starts contributing useful moves in active math research, where exactly does that leave working mathematicians a few years from now?

The strongest reading is still the careful one. This is not a story about AI replacing proof culture overnight. It is a story about a messy model output that may have contained one new mathematical lever, and about experts being willing to extract and verify it. That is a narrower claim, but also a more serious one. Source links: r/singularity thread, Scientific American article.



© 2026 Insights. All rights reserved.