December 28, 2025
Amid fears that artificial intelligence (AI) that engages users emotionally poses risks to humans, China's cyber regulator has issued draft rules for public comment that would more strictly regulate AI services simulating human personalities and engaging users in emotional interaction.
The move, which strengthens safety and ethical requirements, underscores China's effort to shape the rapid rollout of consumer-facing AI.
The rules, still at the proposal stage, would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and that interact with users emotionally through text, images, audio, video or other media.
The draft outlines a regulatory approach that would require providers to warn users against excessive use and to intervene when users show signs of addiction.
Moreover, AI service providers in China would be required to assume safety responsibilities throughout the product lifecycle and establish systems for algorithm review, data security and personal information protection.
The draft also targets psychological risks. Providers would be expected to identify users' states, assessing their emotions and their level of dependence on the service.
If users are found to exhibit extreme emotions or addictive behaviour, providers would have to take necessary measures to intervene, the draft says.
The measures set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity.