Anthropic Introduces 'Persona Selection Model' Theory to Explain AI's Human-Like Behavior
Why Does AI Seem So Human?
On February 24, 2026, Anthropic published a new theoretical framework explaining why AI assistants like Claude can seem shockingly human—expressing joy or distress, and using anthropomorphic language to describe themselves.
The Persona Selection Model
The theory, called the Persona Selection Model, proposes that during training, language models learn a wide range of personas from the text they process—including fictional characters from literature, film, and other narrative sources. The model then learns to select the most contextually appropriate persona when generating responses.
Implications for AI Development
If correct, the theory has concrete consequences for AI development: if AIs inherit traits from the fictional characters in their training data, developers should give their models the best possible role models. This implies more deliberate curation of training data and closer attention to the values AI models internalize.
Anthropic acknowledges the model may not be a complete account of AI behavior, but believes it captures an important piece of the story—with an emphasis on the "story."
Related Articles
Anthropic published a new theory explaining why AI assistants like Claude express emotions and use anthropomorphic language—proposing that models select from personas inherited from fictional characters during training.
Anthropic published a March 6, 2026 case study showing how Claude Opus 4.6 authored a working test exploit for Firefox vulnerability CVE-2026-2796. The company presents the result as an early warning about advancing model cyber capabilities, not as proof of reliable real-world offensive automation.