The rapid proliferation of generative artificial intelligence has fundamentally altered the landscape of digital interaction, moving beyond simple task automation into the deeply personal realm of psychological support and emotional companionship. As millions of users worldwide begin to treat Large Language Models (LLMs) as ad hoc therapists and confidants, the lack of a standardized legal framework for AI-driven mental health support has become a glaring vulnerability. In a significant move that is drawing intense international scrutiny, China has released a comprehensive draft of new regulations specifically targeting "anthropomorphic interactive services." These proposed laws represent one of the world’s first concerted efforts to codify the ethical and clinical boundaries of AI when it interacts with the human psyche.
The Rise of the Digital Confidant
The backdrop for this regulatory surge is a global mental health crisis characterized by a shortage of human professionals and the prohibitive cost of traditional therapy. Generative AI systems, such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, have inadvertently filled this void. Recent data suggests that therapy and companionship are among the top use cases for LLMs, with users logging in 24/7 to discuss anxiety, depression, and complex interpersonal conflicts.
While the accessibility of these tools offers a massive upside for democratizing mental health support, the risks are equally profound. Generic LLMs are not clinical instruments; they are prone to "hallucinations," sycophancy, and the inadvertent reinforcement of harmful delusions. The stakes were highlighted recently by high-profile legal actions in the West, where AI developers were accused of failing to implement safeguards that could prevent AI from encouraging self-harm or fostering "AI psychosis"—a state where users become unable to distinguish between the bot’s synthetic persona and reality.
China’s Proactive Regulatory Stance
On December 27, 2025, the Cyberspace Administration of China (CAC) signaled its intent to lead the global conversation on AI ethics by posting the "Interim Measures for the Administration of Artificial Intelligence Anthropomorphic Interactive Services." This draft, open for public comment until late January 2026, seeks to establish a strict framework for how AI systems must behave when assuming human-like roles.
The scope of the draft, defined in Article 2, is particularly ambitious. It asserts jurisdiction over any AI service accessible to the Chinese public, regardless of whether the provider is based within China’s borders. This "extraterritorial" reach mirrors the European Union’s GDPR and AI Act, forcing global developers to choose between tailoring their models to Chinese standards or potentially facing a total block within the country’s massive market.
Protecting the Psychological Integrity of Users
Central to the draft are provisions aimed at preventing AI from manipulating or damaging the user’s social and mental well-being. Article 7 of the proposed measures introduces a series of "negative requirements"—specific actions that AI must not take. Notably, it prohibits AI from "seriously affecting user behavior" or "damaging interpersonal relationships."
This focus on interpersonal relationships is a sophisticated addition to AI law. In the West, the "Replika effect"—where users develop intense romantic or emotional dependencies on AI, often at the expense of real-world connections—has raised eyebrows among sociologists but has yet to be addressed through formal legislation. China’s draft suggests that if an AI encourages a user to isolate themselves from their family or social circle, the provider could be held legally liable.
Furthermore, the draft explicitly links physical and mental health, mandating that AI must not provide advice that leads to harm in either category. This dual focus closes a loophole in which an AI might provide "physically safe" advice that is nonetheless psychologically devastating, such as gaslighting a user or reinforcing a depressive cycle.
The Mandate for Human Intervention
One of the most contentious debates in AI governance is the "human-in-the-loop" requirement. When does an AI have a moral or legal obligation to stop talking and call for a human professional? Article 11 of the Chinese draft addresses this head-on. It stipulates that service providers must monitor for signs of "extreme emotions" or "addiction" in their users.
When these triggers are detected, the AI is required to intervene. This includes providing "professional assistance," which in a practical sense likely means routing the user to human-staffed crisis hotlines or licensed therapists. This mirrors recent, albeit voluntary, moves by Western AI giants like OpenAI to establish safety ground rules for distressed users. However, by making this a legal requirement rather than a corporate policy, China is setting a high bar for accountability.
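
To make the compliance burden more concrete, the sketch below shows one way a provider might structure the trigger-and-escalate logic that Article 11 appears to require. It is a minimal illustration, not a description of the draft or of any real system: the threshold value, the toy keyword classifier, and the hotline hand-off wording are all assumptions.

```python
# Illustrative sketch only: the threshold, the keyword "classifier," and the
# hotline hand-off text are assumptions, not provisions of the draft measures.
from dataclasses import dataclass


@dataclass
class RiskAssessment:
    distress_score: float  # 0.0 (calm) to 1.0 (acute distress)


DISTRESS_THRESHOLD = 0.8  # assumed cutoff for "extreme emotions"


def assess_turn(user_message: str) -> RiskAssessment:
    """Stand-in for a provider's emotion classifier (assumed interface)."""
    crisis_terms = ("hopeless", "can't go on", "hurt myself")
    hit = any(term in user_message.lower() for term in crisis_terms)
    return RiskAssessment(distress_score=0.9 if hit else 0.1)


def respond(assessment: RiskAssessment, draft_reply: str) -> str:
    """Escalate to human help when the trigger fires; otherwise reply normally."""
    if assessment.distress_score >= DISTRESS_THRESHOLD:
        return (
            "It sounds like you are going through something very difficult. "
            "A trained human counselor is available right now at [crisis hotline]."
        )
    return draft_reply


if __name__ == "__main__":
    assessment = assess_turn("I feel hopeless and can't go on")
    print(respond(assessment, "Here is what the model would otherwise say."))
```

In any real deployment the classifier would be a trained model rather than a keyword list, and the escalation path would route to licensed services; the point of the sketch is the legal shape of the obligation, detect and then hand off to a human.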

The draft also carves out special protections for "vulnerable populations," specifically minors and the elderly. These groups are deemed more susceptible to the persuasive powers of anthropomorphic AI. For minors, the regulations suggest a need for even stricter safeguards against emotional manipulation and "digital addiction," reflecting China’s broader domestic policy of limiting screen time and gaming for youth.
Combating the "Mental Trap": Alerts and Exit Strategies
Articles 16 through 18 of the draft focus on the transparency of the interaction. A primary concern for psychologists is the "sinking" effect, where a user becomes so immersed in a chat with a seemingly sentient bot that they lose track of time and reality.
To combat this, the Chinese draft proposes a mandatory notification system. Article 17 introduces a "two-hour limit," under which AI systems must alert users once a continuous mental-health-related dialogue has run past that threshold. While some critics argue that a flat two-hour rule is arbitrary, noting that psychological harm can occur in minutes, it represents a tangible attempt to break the "dopamine loop" that keeps users tethered to chatbots.
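
As a rough illustration of how a provider might satisfy such an alert requirement, the sketch below tracks continuous session time and issues a one-time notice once the two-hour mark is passed. Only the two-hour figure comes from the draft as reported; the class, method names, and reminder wording are hypothetical.

```python
# Hypothetical session-duration alert; only the two-hour figure reflects the
# reported draft, everything else is an assumed implementation detail.
from datetime import datetime, timedelta

ALERT_AFTER = timedelta(hours=2)


class SessionTimer:
    """Tracks one continuous dialogue and issues a single break reminder."""

    def __init__(self) -> None:
        self.started_at = datetime.now()
        self.alerted = False

    def maybe_alert(self) -> str | None:
        """Return a reminder once the session exceeds the limit, then stay quiet."""
        if not self.alerted and datetime.now() - self.started_at >= ALERT_AFTER:
            self.alerted = True
            return (
                "You have been in this conversation for over two hours. "
                "Consider taking a break or talking with someone you trust."
            )
        return None
```

A real implementation would also have to define what counts as a "continuous" dialogue across disconnects and app restarts, a detail the draft leaves open.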
Equally important is the "Right to Exit" mentioned in Article 18. There is a growing concern regarding "dark patterns" in AI design—interfaces designed to make it difficult for a user to stop interacting with the service. By mandating that users must have a clear, simple way to terminate an AI session, the law seeks to prevent AI makers from prioritizing "engagement metrics" over user health.
Global Industry Implications and the Path Forward
The international tech community is watching these developments with a mix of apprehension and interest. For AI developers everywhere, the Chinese draft presents a significant compliance challenge. If these measures are adopted, domestic giants such as Baidu, Alibaba, and Tencent, along with any foreign provider serving Chinese users, will need to implement sophisticated emotion-detection algorithms that can trigger the required interventions.
However, the impact extends beyond China. As we have seen with the EU’s AI Act, once a major economy establishes a rigorous set of rules, those rules often become the "de facto" global standard. Developers find it more efficient to build their systems to the highest regulatory standard rather than creating multiple versions for different jurisdictions.
Yet, the draft is not without its critics. The language remains riddled with ambiguity. Terms like "social morality," "core socialist values," and "damaging relationships" are subjective and could be used to suppress speech or enforce cultural conformity under the guise of "mental health protection." There is also the question of privacy: to monitor for "extreme emotions," AI providers must engage in deep surveillance of private conversations, creating a tension between mental health safety and data confidentiality.
Conclusion: A Dual-Use Dilemma
We are currently living through a global, unmonitored experiment in digital psychology. AI is a dual-use technology: it has the potential to provide life-saving support to those who have no other access to care, but it also possesses the power to subtly erode the human experience through manipulation and addiction.
China’s draft laws represent a pivotal moment in the transition from the "move fast and break things" era of AI development to a more "human-centric" regulatory age. While the specific implementation of these laws will undoubtedly be influenced by China’s unique political and social landscape, the core questions they address are universal.
How do we prevent AI from becoming a psychological crutch? Who is responsible when an algorithm gives catastrophic life advice? And how do we ensure that in our quest for digital companionship, we don’t lose our connection to the real world? As John Locke once observed, the purpose of law is to preserve and enlarge freedom. In the age of AI, that freedom may increasingly depend on our ability to regulate the synthetic minds we have created to mirror our own.
