China Seeks Public Input on Draft Rules Governing Human-Like AI Chat and Emotional Companion Services

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 12/31/2025
In News

China’s cyberspace regulator has opened a public consultation on sweeping draft rules that would impose new obligations on artificial intelligence services designed to simulate human personalities and emotional interaction, marking the country’s most detailed attempt yet to regulate so-called anthropomorphic AI.


In a notice released on December 27, the Cyberspace Administration of China (CAC) said it is seeking public feedback on the “Interim Measures for the Administration of Humanized Interactive Services Based on Artificial Intelligence,” with comments accepted through January 25, 2026. The draft measures apply to AI products that engage users through text, images, audio, or video while simulating human traits, thinking patterns, or emotional responses.


The proposal frames regulation as a balance between innovation and control, encouraging the development of human-like AI services while introducing tiered, risk-based supervision to prevent abuse, loss of control, and social harm. The CAC would oversee national coordination, while local cyberspace authorities and other government departments would share enforcement responsibilities.


Under the draft, providers would be barred from generating content that threatens national security, spreads misinformation, promotes crime, manipulates users emotionally, or encourages self-harm or addiction. Companies would be required to embed safeguards across the full lifecycle of AI services, including algorithm reviews, ethics assessments, data security controls, and emergency response mechanisms.


The measures place particular emphasis on protecting vulnerable groups. Providers would need to assess user emotional states, intervene when extreme dependence or distress is detected, and require human takeover in high-risk situations such as explicit threats of suicide. Dedicated “minor modes” would be mandatory, with guardian consent, usage limits, and real-time safety alerts. Special protections would also apply to elderly users, including emergency contact requirements and bans on simulating relatives or personal relationships.


Data governance is another central pillar. The draft restricts the use of interaction data for model training without explicit consent, mandates encryption and access controls, and grants users the right to delete interaction histories.


If adopted, the rules would significantly raise compliance expectations for AI chatbot and companion services operating in China, reinforcing the country’s broader push to align AI development with social stability, user safety, and state-defined ethical standards.



Need Help?


If you have questions or concerns about any global guidelines, regulations, or laws, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.


Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI auditing, and AI governance news by subscribing to our newsletter.