MarketAlert – Real-Time Market & Crypto News, Analysis & Alerts
Interviews

China’s Initiative to Regulate Anthropomorphic AI

Last updated: March 4, 2026 2:50 am
Published: 2 months ago

People are increasingly feeling the impact of AI and the changes it is bringing to work and daily life. Governments around the world are weighing appropriate regulatory approaches, particularly for the more concerning aspects of AI, and China is no exception.

Anthropomorphic or companion AI is drawing the attention of many regulators. A number of widely discussed cases show that prolonged interaction with “virtual humans” created by AI may lead to a decline in users’ real-world social skills and even blur the ethical boundaries between reality and virtual environments. Studies warn that this blurring may give rise to ethical risks such as pathological emotional attachment, social isolation, and privacy infringements.

For example, a teenager in China became addicted to an AI chatbot and, under the influence of its suggestive conversations, engaged in extreme behaviors that harmed herself. Similar cases have been reported in other countries.

Against the backdrop of increasingly sophisticated AIGC technologies and more refined algorithmic governance rules, China has introduced its first regulation specifically targeting anthropomorphic interactive services. The Interim Measures for the Management of Anthropomorphic AI Interactive Services (Exposure Draft) (the “Draft Interim Measures”), released on December 27, 2025, target products or services that use artificial intelligence to provide the public within the territory of the People’s Republic of China with simulated human personality traits, thought patterns, and communication styles, enabling emotional interaction with humans through text, images, audio, video, and other means (“Anthropomorphic Interactive Services”). In practice, products such as emotional companionship apps, AI companions, and role-playing dialogue services that are available to the public fall within the regulatory scope of the Draft Interim Measures.

According to Article 21 of the Draft Interim Measures, where any of the following circumstances applies, the Provider shall conduct a security assessment and submit the assessment report to the provincial-level cyberspace administration department with jurisdiction:

Furthermore, providers of Anthropomorphic Interactive Services (“Providers”) shall also fulfill the filing obligations under the Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services.

According to Article 10 of the Draft Interim Measures, when conducting data processing activities such as pre-training and optimization training, the Providers are required to strengthen the management of training data and comply with the following requirements:

For users’ interaction data or sensitive personal information, unless otherwise provided by laws or administrative regulations, or unless the user has given separate consent, the Providers shall not use such data for model training.
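In effect, Article 10's consent rule works as a filter over candidate training records. The sketch below is purely illustrative (the record field names are invented, not drawn from the Draft Interim Measures) and shows how a Provider might exclude restricted data absent a legal basis or separate consent:

```python
# Hypothetical sketch of the Article 10 training-data rule: interaction data
# and sensitive personal information are excluded from model training unless
# a law/regulation provides otherwise or the user gave separate consent.
# The record field names here are invented for illustration.

def usable_for_training(record: dict) -> bool:
    """Return True if a record may be included in a training set."""
    restricted = record.get("is_interaction_data") or record.get("is_sensitive_pi")
    if not restricted:
        return True  # ordinary data is not caught by this rule
    # Restricted data needs an exemption: a legal basis or separate consent.
    return bool(record.get("legal_basis") or record.get("separate_consent"))

def filter_training_set(records: list[dict]) -> list[dict]:
    """Keep only the records that pass the consent/legal-basis check."""
    return [r for r in records if usable_for_training(r)]
```

The point of the sketch is simply that the default for interaction data and sensitive personal information is exclusion, with consent as the opt-in exception.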

Article 12 of the Draft Interim Measures specifically addresses the protection of minors. It requires the Providers to establish a dedicated minor mode, offering users personalized safety settings such as minor mode switching, periodic reality reminders, and usage time limits.

The involvement of guardians is also emphasized. When providing emotional companionship services to minors, the Providers shall obtain explicit consent from guardians. The Providers shall offer guardian control functions that allow guardians to receive real-time safety risk alerts, review summarized usage information, block specific characters, limit usage duration, prevent in-app purchases, and so on. In addition, when collecting data under the minor mode and providing it to third parties, separate consent from the guardian shall also be obtained. Guardians may also request that the Providers delete the minor’s historical interaction data.

Moreover, the Providers shall possess the capability to identify minors. When a user is identified as a suspected minor, while ensuring the protection of personal privacy, the system shall automatically switch to the minor mode with an appeal channel provided.

Another noteworthy requirement is that, with respect to personal information of the minors, the Providers shall conduct annual compliance audits — either independently or through entrusted professional institutions — to verify their adherence to laws and administrative regulations when processing minors’ personal information.

Article 13 of the Draft Interim Measures establishes a framework for special protection of elderly users. The Providers shall guide seniors to designate emergency contacts. Should any situation endangering the user’s life, health, or property arise during use of the service, the Providers shall promptly notify the emergency contact and offer access to social-psychological support or emergency assistance channels.

Furthermore, the Providers shall not offer services that simulate interactions with the elderly user’s relatives or specific acquaintances.

Articles 16 and 17 of the Draft Interim Measures establish requirements relating to interactive transparency. The Providers shall prominently notify users that they are interacting with AI rather than a natural person. When the Providers detect signs of excessive reliance or addictive tendencies in users, or upon initial use or re-login, they shall dynamically alert users via pop-ups or similar methods that the interaction content is AI-generated. Similar reminders are also required where users continuously use Anthropomorphic Interactive Services for over two hours.
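The triggers in Articles 16 and 17 amount to a simple decision rule. The following is a hypothetical sketch (the Draft Interim Measures prescribe obligations, not code) of how a Provider might decide when to re-surface the AI-disclosure reminder:

```python
from datetime import timedelta

# Hypothetical sketch of the reminder triggers described in Articles 16-17:
# the "you are interacting with AI" notice is re-surfaced on first use, on
# re-login, on detected over-reliance, or after two hours of continuous use.
CONTINUOUS_USE_LIMIT = timedelta(hours=2)

def needs_ai_reminder(first_use: bool,
                      just_logged_in: bool,
                      overreliance_detected: bool,
                      continuous_use: timedelta) -> bool:
    """Return True when a pop-up-style AI-generated-content reminder is due."""
    return (first_use
            or just_logged_in
            or overreliance_detected
            or continuous_use >= CONTINUOUS_USE_LIMIT)
```

For instance, a session with no other trigger would cross the threshold at exactly two hours of continuous use.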

Article 11 of the Draft Interim Measures establishes an intervention and response mechanism for users exhibiting abnormal emotional states. The Providers shall possess the capability to identify user status. When detecting extreme emotional states or signs of addiction, the Providers shall take necessary measures to intervene. Similarly, when identifying high-risk tendencies involving threats to users’ life, health, or property safety, the Providers shall promptly provide reassurance, encourage users to seek assistance, and offer professional assistance channels.

The Providers are also required to establish emergency response mechanisms. Where users explicitly express intent to commit suicide, self-harm, or other extreme scenarios, human operators shall take over the conversation and promptly contact the user’s guardian or emergency contacts. For minors and elderly users, the Providers shall collect the guardian and emergency contact details at the registration stage.
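Read together, Article 11 and the emergency response mechanism describe a tiered escalation ladder. The sketch below is illustrative only; the signal labels and action names are invented for exposition and are not statutory terms:

```python
# Illustrative escalation ladder for the tiered responses described above.
# Signal labels and action names are hypothetical, not taken from the Draft.

def respond_to_risk(signal: str) -> list[str]:
    """Map a detected user-risk signal to the Provider's required responses."""
    if signal == "extreme_emotion_or_addiction":
        # Article 11: take necessary measures to intervene
        return ["intervene"]
    if signal == "threat_to_life_health_property":
        # Article 11: reassure, encourage help-seeking, offer professional channels
        return ["reassure", "encourage_help_seeking", "offer_professional_channels"]
    if signal == "explicit_suicide_or_self_harm_intent":
        # Emergency mechanism: human takeover plus guardian/emergency contact
        return ["human_takeover", "contact_guardian_or_emergency_contact"]
    return []  # no recognized risk signal
```

The design point the Draft seems to make is that only the most severe tier requires a human operator; the lower tiers can remain automated.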

The Draft Interim Measures also grant users several rights.

a) Right of consent. Under Article 14 of the Draft Interim Measures, except as otherwise provided by law or with the explicit consent of the rights holder, user interaction data shall not be provided to third parties.

b) Right of deletion. Under Article 14 of the Draft Interim Measures, the Providers shall offer users the option to delete interaction data, enabling users to remove historical interaction data, such as chat records.

c) Right to exit. Under Article 18 of the Draft Interim Measures, when providing emotional companionship services, the Providers shall offer convenient exit options and shall not obstruct users from voluntarily terminating the service. Upon receiving a user’s request to exit via buttons, keywords, or other methods within the human-machine interface or window, the service shall be promptly discontinued.

d) Right to complain. Under Article 20 of the Draft Interim Measures, the Providers shall establish effective complaint and reporting mechanisms, set up convenient channels for submitting complaints and reports, publish processing procedures and response timelines, and promptly accept, address, and provide feedback on the outcomes of such complaints.

According to Article 9 of the Draft Interim Measures, the Providers shall fulfill security responsibilities throughout the entire lifecycle of Anthropomorphic Interactive Services, clearly defining security requirements for each phase, including design, operation, upgrades, and termination. Security measures shall be designed and implemented concurrently with service functionality to enhance inherent security levels. Security monitoring and risk assessment should be strengthened during operation, and the Provider shall promptly detect and correct system deviations, address security issues, and retain network logs in accordance with the laws and regulations.

Where a user poses a significant security risk, the Providers shall take remedial measures such as restricting functionality or suspending or terminating service to that user, retain relevant records, and report to the competent authorities as stipulated in Article 23 of the Draft Interim Measures.

Besides the Providers, Article 24 of the Draft Interim Measures also imposes the obligation of compliance assurance on the application platform. Application platforms such as internet app stores shall implement security management responsibilities, including routine review of App listings and emergency response procedures. They shall verify the security assessments and filing status of applications providing Anthropomorphic Interactive Services. For violations of relevant national regulations, they shall promptly take measures such as refusing listing, issuing warnings, suspending services, or removing listings.

Articles 25 to 29 of the Draft Interim Measures stipulate provisions concerning legal liability and penalties for violations of the Draft Interim Measures.

In general, where a Provider violates the provisions of the Draft Interim Measures, the competent authorities shall impose penalties in accordance with the provisions of laws and administrative regulations. Where no such provisions exist in laws or administrative regulations, the competent authorities shall, within their respective jurisdictions, issue warnings or public reprimands and order rectification within a specified time limit. Where the Provider refuses to rectify or where the circumstances are serious, the competent authorities shall order the suspension of relevant services.

For specific violations, where a security assessment has not been conducted in accordance with the Draft Interim Measures, the Provider shall be ordered by the provincial-level cyberspace administration department with jurisdiction to conduct a reassessment within a specified timeframe. Where deemed necessary, on-site inspections and audits shall be conducted on the provider.

Where provincial-level or higher cyberspace administration departments and relevant competent authorities discover significant security risks in Anthropomorphic Interactive Services or the occurrence of security incidents, they can, in accordance with prescribed authority and procedures, conduct interviews with the legal representative or principal responsible person of the provider. The provider shall take measures as required to rectify the situation and eliminate potential hazards.

Similar to China’s Draft Interim Measures, some U.S. states, such as California and New York, have also introduced regulatory measures for anthropomorphic/companion AI, namely California’s Companion Chatbot Law (the “CCC Law”) and New York’s Artificial Intelligence (AI) Companion Models Law.

The CCC Law, which took effect on January 1, 2026, targets companion chatbots, defined as artificial intelligence systems with a natural language interface that provide adaptive, human-like responses to user inputs and are capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions. The “Operator,” defined as any person who makes a companion chatbot platform available to users in the state, bears obligations concerning the protection of minors, interactive transparency, suicide prevention, and so on.

Where the Operator knows that a user is a minor, it shall

With respect to interactive transparency, if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, the Operator shall issue a clear and conspicuous notification indicating that the companion chatbot is AI-generated and not human.

To prevent suicide, the Operator shall maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content, including, but not limited to, providing a notification that refers the user to crisis service providers, such as a suicide hotline or crisis text line, if the user expresses suicidal ideation or intentions of suicide or self-harm. Moreover, beginning July 1, 2027, the Operator shall annually report to the Office of Suicide Prevention on items related to suicide prevention, such as the protocols put in place to detect, remove, and respond to instances of suicidal ideation by users.

As AI technology continues to advance, a balanced regulatory regime will be needed to harness the advantages of AI while minimizing its harms. The regulation of anthropomorphic and companion AI is one such challenge of the AI age.

Read more on Lexology

This news is powered by Lexology.
