Under newly proposed Cyberspace Administration of China (CAC) rules, service providers must ensure the propriety of content created by their generative AI platforms, and generative AI products must pass a security assessment by the nation’s internet watchdog before they can be offered to the public. The draft rules, targeting ChatGPT and similar artificial intelligence tools, were unveiled on April 11.
Under the proposed regulation, companies providing generative AI services must prevent false information and content that is discriminatory or infringes on intellectual property or personal privacy rights. Firms must also ensure that their products uphold Chinese socialist values and do not generate content that advocates violence, subversion or pornography, or that disrupts economic or social order.
The security-assessment requirement stems from a 2018 regulation covering online information services capable of influencing public opinion, which mandates that such products clear a CAC review before becoming publicly available, the nation’s internet regulator said.
ChatGPT is officially unavailable in China, but domestic players are racing to launch similar technologies.
Beijing is wary of the risks posed by rapid domestic interest and development in AI. Some state-run media outlets have warned of a “market bubble” and “excessive hype” around the technology, with one daily suggesting that ChatGPT could corrupt users’ moral judgment, the SCMP reported in mid-April.
On April 10, the Payment & Clearing Association of China—overseen by the country’s central bank—urged workers to be vigilant about the risks these services pose at work, and to avoid uploading sensitive and confidential information to them.
The CAC is soliciting public feedback on the proposed rules through May 10.
Beijing’s move comes as governments worldwide seek to slow and moderate the field’s rapid development, with other states also weighing regulations to rein in generative AI.
The EU, for example, is debating the Artificial Intelligence Act, first proposed in 2021 to govern AI platforms according to their relative risk. That approach differs from Beijing’s draft rules, which appear chiefly concerned with content moderation, and could impose stiff compliance burdens on firms if the rollout of the General Data Protection Regulation is any guide.