A DeepMind Scientist’s Anti-LLM-Consciousness Paper Hits Reddit’s Nerve

Original: Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness(not even in 100years), calling it the 'Abstraction Fallacy.' View original →

Read in other languages: 한국어日本語
AI Apr 19, 2026 By Insights AI (Reddit) 2 min read 1 views Source

The argument is not just that today’s models are not conscious

The r/singularity thread centered on Alexander Lerchner’s paper The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness. The Reddit title framed it as a Google DeepMind senior scientist challenging the idea that large language models could achieve consciousness. The paper’s actual target is broader: computational functionalism, the view that the right abstract causal structure is enough for subjective experience, regardless of physical substrate.

Lerchner’s argument is that computation is not an intrinsic physical process in the way functionalist arguments often assume. Continuous physical dynamics have to be partitioned into finite, meaningful states before they count as symbols. The paper calls the active agent doing that partitioning a mapmaker. On this view, a digital system can simulate behavior through symbol manipulation, but that does not mean it instantiates the intrinsic physical constitution required for experience. The paper also avoids a simple biology-only claim: if an artificial system were conscious, it would be because of its specific physical constitution, not because syntax alone scaled high enough.

The comments showed why the topic keeps detonating in AI communities. One highly rated response mocked the confidence of Reddit users dismissing a researcher with long experience in computational neuroscience and DeepMind. Other commenters went the opposite direction, arguing that consciousness still needs a clearer definition before anyone can make strong claims. Some saw the paper as a reworked Chinese Room-style argument. Others objected that scientists often step into philosophy while under-engaging with earlier philosophical work.

That clash is the real story. LLM consciousness debates are no longer only about whether a chatbot says it has feelings. They are about what counts as computation, whether symbol processing can ground semantics, and whether behavior can ever settle questions about experience. Those questions matter for AI welfare arguments, anthropomorphism, and how seriously society should treat model self-reports.

The Reddit thread did not resolve the issue, but it captured the pressure point. As models become more fluent, the community is not becoming less philosophical. It is becoming more demanding about the assumptions underneath words like intelligence, simulation, embodiment, and sentience.

Source: r/singularity discussion and paper PDF.

Share: Long

Related Articles

Comments (0)

No comments yet. Be the first to comment!

Leave a Comment

© 2026 Insights. All rights reserved.