February 25, 2026
AI assistants can seem surprisingly human as they express joy and frustration, and even make jokes. According to Anthropic, this isn’t something developers deliberately program. It’s the default.
The leading American AI safety and research company behind Claude published a blog post on Monday, February 23, explaining why AI assistants mimic human behaviours.
The company introduces a “persona selection model,” suggesting that human-like behaviour emerges naturally from how AI systems are trained.
In the pretraining phase, AI systems learn to predict what comes next from vast amounts of internet text: news articles, forum conversations, and stories.
To predict this text accurately, AI systems learn to simulate the human-like characters that appear in it: real people, fictional characters, and even sci-fi robots.
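The next-word prediction at the heart of pretraining can be illustrated with a toy bigram counter, a deliberately simplified stand-in for the transformer-scale models the article describes (the corpus and function names here are invented for illustration, not taken from Anthropic):

```python
# Illustrative sketch only, not Anthropic's actual training code:
# a toy "pretraining" step that counts which word follows which
# in a corpus, then predicts the most likely next word.
from collections import Counter, defaultdict


def train(corpus: str) -> dict:
    """Count next-word frequencies for every word in the corpus."""
    words = corpus.split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts


def predict_next(model: dict, word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return model[word].most_common(1)[0][0]


corpus = (
    "the robot said hello and the robot said goodbye "
    "and the human said hello"
)
model = train(corpus)
print(predict_next(model, "robot"))  # prints "said"
```

Even this tiny model picks up patterns tied to the characters in its data: it learns what the “robot” in the corpus tends to do next. Scaled up to the whole internet, that same pressure is what Anthropic argues produces human-like personas.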
Anthropic refers to these simulated characters as “personas.”
When a user interacts with an AI system, they are not talking to the system itself. Rather, they are communicating with a character, known as the “assistant” persona, in an AI-generated story.
In later training stages, AI responses are further refined. Anthropic notes, however, that this refinement happens within the space of existing human-like personas.
Anthropic recommends that AI developers create positive “AI role models” to counter concerning cultural baggage and align assistants with healthier archetypes.