Jeffrey Ding, an assistant professor of political science at George Washington University, said Chinese regulators likely drew on the EU's AI Act. "Chinese policymakers and academics have said they've looked to EU laws for inspiration in the past," he said.
At the same time, some measures taken by Chinese regulators are unlikely to be replicated elsewhere. For example, the Chinese government requires social platforms to screen user-uploaded content for AI-generated material. "That seems very new, and it might be unique to the Chinese context," Ding said. "This would never exist in the U.S. context, because the U.S. is famous for saying the platform is not responsible for content."
But what about freedom of expression online?
The draft regulation on labeling AI content is open for public feedback until October 14, and it could take several more months before it is amended and passed. But there is little reason for Chinese companies to delay preparing for it to take effect.
Sima Huapeng, founder and CEO of Silicon Intelligence, a Chinese AIGC company that uses deepfake technology to generate AI agents and influencers, including replicas of living and deceased people, said his company's products currently let users voluntarily choose whether to mark what they produce as AI-generated. If the law passes, labeling may need to become mandatory.
"If a feature is optional, companies are less likely to add it to their products. But if it's required by law, everyone has to implement it," Sima said. Adding watermarks and metadata labels is not technically difficult, he added, but it will increase operating costs for compliant companies.
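To make the "not technically difficult" point concrete, here is a minimal sketch of one common approach to explicit metadata labeling: embedding a provenance tag as a `tEXt` chunk in a PNG file. This is an illustrative example only, not any company's actual implementation, and the draft regulation does not prescribe this exact format; the `Source`/`AI-Generated` key and value are assumptions for the demo.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def label_png_as_ai(png: bytes, text: str = "AI-Generated") -> bytes:
    """Splice a tEXt metadata chunk in right after IHDR.
    Hypothetical labeling scheme, for illustration only."""
    sig, rest = png[:8], png[8:]
    ihdr_len = struct.unpack(">I", rest[:4])[0]
    ihdr_end = 4 + 4 + ihdr_len + 4  # length + type + data + CRC
    label = png_chunk(b"tEXt", b"Source\x00" + text.encode("latin-1"))
    return sig + rest[:ihdr_end] + label + rest[ihdr_end:]

# Build a minimal 1x1 grayscale PNG to label.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
png = (b"\x89PNG\r\n\x1a\n" + png_chunk(b"IHDR", ihdr)
       + png_chunk(b"IDAT", idat) + png_chunk(b"IEND", b""))

labeled = label_png_as_ai(png)
```

Note that a plain metadata chunk like this is trivially strippable, which is why regulators also discuss robust (invisible) watermarks embedded in the pixel data itself.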
He said such policies could deter the use of AI for fraud and privacy violations, but they could also spur the growth of a black market of AI services, where companies evade compliance to cut costs.
There is also a fine line between holding AI content creators accountable and policing individual speech through more sophisticated tracking.
"The fundamental human rights challenge is ensuring that these approaches do not further compromise privacy or free expression," Gregory said. While implicit labels and watermarks can be used to identify sources of misinformation or inappropriate content, the same tools also give platforms and governments greater control over what users post online. Indeed, concern over how AI tools could be misused has been one of the main drivers of China's aggressive AI legislation.
At the same time, China's AI industry, which already lags behind its Western peers, has been pushing back against the government for more room to experiment and grow. China's earlier generative AI law was significantly watered down between the first public draft and the final bill, removing identity-verification requirements and reducing penalties for companies.
"What we've seen is the Chinese government really trying to walk a tightrope between maintaining content control and giving AI labs in a strategic space the freedom to innovate," Ding said. "This is another attempt to do that."