A DeepMind Scientist’s Anti-LLM-Consciousness Paper Hits a Nerve on Reddit
Original: Google DeepMind senior scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness (not even in 100 years), calling the assumption behind it the 'Abstraction Fallacy.'
The argument is not just that today’s models are not conscious
The r/singularity thread centered on Alexander Lerchner’s paper, 'The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness.' The Reddit title framed it as a Google DeepMind senior scientist challenging the idea that large language models could achieve consciousness. The paper’s actual target is broader: computational functionalism, the view that the right abstract causal structure is enough for subjective experience, regardless of physical substrate.
Lerchner’s argument is that computation is not an intrinsic physical process in the way functionalist arguments often assume. Continuous physical dynamics have to be partitioned into finite, meaningful states before they count as symbols, and the paper calls the active agent doing that partitioning a 'mapmaker.' On this view, a digital system can simulate behavior through symbol manipulation, but that does not mean it instantiates the intrinsic physical constitution required for experience. The paper also avoids a simple biology-only claim: if an artificial system were conscious, it would be because of its specific physical constitution, not because syntax alone scaled far enough.
The comments showed why the topic keeps flaring up in AI communities. One highly rated response mocked the confidence of Reddit users dismissing a researcher with years of experience in computational neuroscience and at DeepMind. Other commenters went the opposite direction, arguing that consciousness needs a clearer definition before anyone can make strong claims either way. Some saw the paper as a reworked Chinese Room-style argument. Others objected that scientists often step into philosophy while under-engaging with the earlier philosophical literature.
That clash is the real story. LLM consciousness debates are no longer only about whether a chatbot says it has feelings. They are about what counts as computation, whether symbol processing can ground semantics, and whether behavior can ever settle questions about experience. Those questions matter for AI welfare arguments, for debates over anthropomorphism, and for how seriously society should treat model self-reports.
The Reddit thread did not resolve the issue, but it captured the pressure point. As models become more fluent, the community is not becoming less philosophical. It is becoming more demanding about the assumptions underneath words like intelligence, simulation, embodiment, and sentience.
Source: r/singularity discussion and paper PDF.