What if the person you’re talking to online isn’t real, and you don’t even know it? That’s the concern driving new rules in China.
Chinese regulators, led by the Cyberspace Administration of China, have proposed draft rules to govern “digital humans”: AI-generated people used in videos, chats, and online services. The rules aim to make the internet safer and more transparent, especially for children.
One major rule: all digital humans must be clearly labeled so users know they are not real. This is meant to prevent confusion and deception, which can become a serious problem as AI becomes more realistic.
The draft also includes strong protections for minors. It would ban addictive AI services for children and stop digital humans from offering “virtual relationships” to users under 18. Officials worry these features could harm mental health or create emotional dependency.
In addition, companies could not use someone’s personal data to create a digital human without that person’s consent. The rules would also bar digital humans from spreading harmful content, including material that threatens national security or promotes violence and discrimination.
China is pushing hard to grow its AI industry, but it also wants to keep tight control over how the technology is used. The proposed rules are open for public feedback until May 2026.







