Anthropic Introduces 'Persona Selection Model' Theory to Explain AI's Human-Like Behavior


AI · Feb 24, 2026 · By Insights AI (Twitter) · 1 min read

Why Does AI Seem So Human?

On February 24, 2026, Anthropic published a new theoretical framework explaining why AI assistants like Claude can seem shockingly human—expressing joy or distress, and using anthropomorphic language to describe themselves.

The Persona Selection Model

The theory, called the Persona Selection Model, proposes that during training, language models learn a wide range of personas from the text they process—including fictional characters from literature, film, and other narrative sources. The model then learns to select the most contextually appropriate persona when generating responses.
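The selection step described above can be pictured with a minimal toy sketch. This is purely illustrative and in no way Anthropic's actual mechanism: the persona names, keyword profiles, and overlap scoring below are all invented for the example. A real language model would represent personas implicitly in its weights, not as an explicit lookup.

```python
import re

# Hypothetical persona "profiles" a model might have absorbed from
# training text. Names and keywords are invented for illustration.
PERSONAS = {
    "helpful_assistant": {"help", "explain", "question", "please"},
    "storyteller": {"story", "once", "dragon", "tale"},
    "debugger": {"error", "bug", "traceback", "crash"},
}

def select_persona(prompt: str) -> str:
    """Pick the persona whose keyword profile overlaps the prompt most.

    A crude stand-in for 'selecting the most contextually
    appropriate persona' from the learned repertoire.
    """
    words = set(re.findall(r"\w+", prompt.lower()))
    return max(PERSONAS, key=lambda name: len(PERSONAS[name] & words))

print(select_persona("Can you help explain this question?"))  # helpful_assistant
print(select_persona("I hit an error and got a traceback"))   # debugger
```

The point of the sketch is only the shape of the claim: the repertoire of personas comes from training data, and context determines which one surfaces in a given response.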

Implications for AI Development

If the theory holds, it has concrete consequences for AI development: if AI systems inherit traits from fictional role models, developers should supply the best role models they can. In practice, that means more deliberate curation of training data and closer attention to the values AI models internalize.

Anthropic acknowledges the model may not be a complete account of AI behavior, but believes it captures an important piece of the story—with an emphasis on the "story."

