
The Proof That ML Cannot Reach Human-Level Performance Has Been Debunked

Original post: "Human-level performance via ML was *not* proven impossible with complexity theory"

AI · May 14, 2026 · By Insights AI (Reddit) · 1 min read

The Claim That Made Waves

In 2024, van Rooij et al. published a paper in Computational Brain & Behavior claiming to prove that learning a human-level classifier via machine learning is computationally intractable. The argument, dubbed the "Ingenia Theorem," reduced a known NP-hard problem to the ML learning problem, and it made a notable splash in AI research circles.
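For readers unfamiliar with this proof style, the argument follows the standard polynomial-time reduction pattern. The schema below is a generic sketch of that pattern, not the paper's specific construction:

```latex
% Generic NP-hardness-by-reduction schema (illustrative, not the
% Ingenia Theorem's actual construction): if a known NP-hard problem A
% polynomial-time reduces to the learning problem B, then B inherits
% A's hardness.
A \le_p B \;\wedge\; A \text{ is NP-hard} \;\Longrightarrow\; B \text{ is NP-hard}
```

A rebuttal of such a proof therefore only needs to show that the reduction itself is invalid; it says nothing about whether the learning problem is actually tractable.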

The Rebuttal

A new paper, now published in the same journal, argues that the Ingenia Theorem's proof is broken beyond repair: the reduction construction contains fundamental flaws that cannot be patched.

What This Means for AGI Research

The debunking removes one of the few formal theoretical arguments that ML cannot achieve human-level performance. This doesn't prove AGI is achievable — that remains an open question — but it does close off one specific theoretical roadblock that some cited as a hard ceiling.

The paper is available open access via Springer. The r/MachineLearning community gave the post 133 upvotes, reflecting genuine interest from researchers who had followed the original Ingenia Theorem controversy.
